yaicli 0.0.19__py3-none-any.whl → 0.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: yaicli
- Version: 0.0.19
+ Version: 0.1.0
  Summary: A simple CLI tool to interact with LLM
  Project-URL: Homepage, https://github.com/belingud/yaicli
  Project-URL: Repository, https://github.com/belingud/yaicli
@@ -222,282 +222,310 @@ Requires-Dist: socksio>=1.0.0
  Requires-Dist: typer>=0.15.2
  Description-Content-Type: text/markdown

- # YAICLI - Your AI Interface in Command Line
+ # YAICLI - Your AI Command Line Interface

  [![PyPI version](https://img.shields.io/pypi/v/yaicli?style=for-the-badge)](https://pypi.org/project/yaicli/)
  ![GitHub License](https://img.shields.io/github/license/belingud/yaicli?style=for-the-badge)
  ![PyPI - Downloads](https://img.shields.io/pypi/dm/yaicli?logo=pypi&style=for-the-badge)
  ![Pepy Total Downloads](https://img.shields.io/pepy/dt/yaicli?style=for-the-badge&logo=python)

- YAICLI is a compact yet potent command-line AI assistant, allowing you to engage with Large Language Models (LLMs) such as ChatGPT's gpt-4o directly via your terminal. It offers multiple operation modes for everyday conversations, generating and executing shell commands, and one-shot quick queries.
+ YAICLI is a powerful yet lightweight command-line AI assistant that brings the capabilities of Large Language Models (LLMs) like GPT-4o directly to your terminal. Interact with AI through multiple modes: have natural conversations, generate and execute shell commands, or get quick answers without leaving your workflow.

- Support regular and deep thinking models.
+ **Supports both standard and deep reasoning models across all major LLM providers.**

- > [!WARNING]
- > This is a work in progress, some features could change or be removed in the future.
+ <p align="center">
+   <img src="https://vhs.charm.sh/vhs-5U1BBjJkTUBReRswsSgIVx.gif" alt="YAICLI Demo" width="85%">
+ </p>

- ## Features
+ > [!NOTE]
+ > YAICLI is actively developed. While core functionality is stable, some features may evolve in future releases.

- - **Smart Interaction Modes**:
-   - 💬 Chat Mode: Persistent dialogue with context tracking
-   - 🚀 Execute Mode: Generate & verify OS-specific commands (Windows/macOS/Linux)
-   - ⚡ Quick Query: Single-shot responses without entering REPL
+ ## Key Features

- - **Environment Intelligence**:
-   - Auto-detects shell type (CMD/PowerShell/bash/zsh)
-   - Dynamic command validation with 3-step confirmation
-   - Pipe input support (`cat log.txt | ai "analyze errors"`)
+ ### 🔄 Multiple Interaction Modes
+ - **💬 Chat Mode**: Engage in persistent conversations with full context tracking
+ - **🚀 Execute Mode**: Generate and safely run OS-specific shell commands
+ - **⚡ Quick Query**: Get instant answers without entering interactive mode

- - **Enterprise LLM Support**:
-   - OpenAI API compatible endpoints
-   - Claude/Gemini/Cohere integration guides
-   - Custom JSON parsing with jmespath
+ ### 🧠 Smart Environment Awareness
+ - **Auto-detection**: Identifies your shell (bash/zsh/PowerShell/CMD) and OS
+ - **Safe Command Execution**: 3-step verification before running any command
+ - **Flexible Input**: Pipe content directly (`cat log.txt | ai "analyze this"`)

- - **Terminal Experience**:
-   - Real-time streaming with cursor animation
-   - LRU history management (500 entries default)
+ ### 🔌 Universal LLM Compatibility
+ - **OpenAI-Compatible**: Works with any OpenAI-compatible API endpoint
+ - **Multi-Provider Support**: Easy configuration for Claude, Gemini, Cohere, etc.
+ - **Custom Response Parsing**: Extract exactly what you need with jmespath

- - **DevOps Ready**:
-   - Layered configuration (Env > File > Defaults)
-   - Verbose debug mode with API tracing
+ ### 💻 Enhanced Terminal Experience
+ - **Real-time Streaming**: See responses as they're generated with cursor animation
+ - **Rich History Management**: LRU-based history with 500 entries by default
+ - **Syntax Highlighting**: Beautiful code formatting with customizable themes

- ## Installation
+ ### 🛠️ Developer-Friendly
+ - **Layered Configuration**: Environment variables > Config file > Sensible defaults
+ - **Debugging Tools**: Verbose mode with detailed API tracing
+ - **Lightweight**: Minimal dependencies with focused functionality
+
+ ## 📦 Installation

  ### Prerequisites

  - Python 3.9 or higher

- ### Install from PyPI
+ ### Quick Install

  ```bash
- # Install by pip
+ # Using pip (recommended for most users)
  pip install yaicli

- # Install by pipx
+ # Using pipx (isolated environment)
  pipx install yaicli

- # Install by uv
+ # Using uv (faster installation)
  uv tool install yaicli
  ```

  ### Install from Source

  ```bash
- git clone https://github.com/yourusername/yaicli.git
+ git clone https://github.com/belingud/yaicli.git
  cd yaicli
  pip install .
  ```

- ## Configuration
+ ## ⚙️ Configuration
+
+ YAICLI uses a simple configuration file to store your preferences and API keys.

- On first run, YAICLI will create a default configuration file at `~/.config/yaicli/config.ini`. You'll need to edit this file to add your API key and customize other settings.
+ ### First-time Setup

- Just run `ai`, and it will create the config file for you. Then you can edit it to add your api key.
+ 1. Run `ai` once to generate the default configuration file
+ 2. Edit `~/.config/yaicli/config.ini` to add your API key
+ 3. Customize other settings as needed

- ### Configuration File
+ ### Configuration File Structure

- The default configuration file is located at `~/.config/yaicli/config.ini`. Look at the example below:
+ The default configuration file is located at `~/.config/yaicli/config.ini`. You can run `ai --template` to print the default settings, as shown below:

  ```ini
  [core]
- PROVIDER=OPENAI
+ PROVIDER=openai
  BASE_URL=https://api.openai.com/v1
- API_KEY=your_api_key_here
+ API_KEY=
  MODEL=gpt-4o

- # auto detect shell and os
+ # auto detect shell and os (or specify manually, e.g., bash, zsh, powershell.exe)
  SHELL_NAME=auto
  OS_NAME=auto

- # if you want to use custom completions path, you can set it here
- COMPLETION_PATH=/chat/completions
- # if you want to use custom answer path, you can set it here
+ # API paths (usually no need to change for OpenAI compatible APIs)
+ COMPLETION_PATH=chat/completions
  ANSWER_PATH=choices[0].message.content

- # true: streaming response
- # false: non-streaming response
+ # true: streaming response, false: non-streaming
  STREAM=true
- CODE_THEME=monokia

+ # LLM parameters
  TEMPERATURE=0.7
  TOP_P=1.0
  MAX_TOKENS=1024
+ TIMEOUT=60
+
+ # Interactive mode parameters
+ INTERACTIVE_ROUND=25
+
+ # UI/UX
+ CODE_THEME=monokai
+ MAX_HISTORY=500 # Max entries kept in history file
+ AUTO_SUGGEST=true
  ```

- ### Configuration Options
-
- Below are the available configuration options and override environment variables:
-
- - **BASE_URL**: API endpoint URL (default: OpenAI API), env: YAI_BASE_URL
- - **API_KEY**: Your API key for the LLM provider, env: YAI_API_KEY
- - **MODEL**: The model to use (e.g., gpt-4o, gpt-3.5-turbo), default: gpt-4o, env: YAI_MODEL
- - **SHELL_NAME**: Shell to use (auto for automatic detection), default: auto, env: YAI_SHELL_NAME
- - **OS_NAME**: OS to use (auto for automatic detection), default: auto, env: YAI_OS_NAME
- - **COMPLETION_PATH**: Path for completions endpoint, default: /chat/completions, env: YAI_COMPLETION_PATH
- - **ANSWER_PATH**: Json path expression to extract answer from response, default: choices[0].message.content, env: YAI_ANSWER_PATH
- - **STREAM**: Enable/disable streaming responses, default: true, env: YAI_STREAM
- - **CODE_THEME**: Theme for code blocks, default: monokia, env: YAI_CODE_THEME
- - **TEMPERATURE**: Temperature for response generation (default: 0.7), env: YAI_TEMPERATURE
- - **TOP_P**: Top-p sampling for response generation (default: 1.0), env: YAI_TOP_P
- - **MAX_TOKENS**: Maximum number of tokens for response generation (default: 1024), env: YAI_MAX_TOKENS
- - **MAX_HISTORY**: Max history size, default: 500, env: YAI_MAX_HISTORY
- - **AUTO_SUGGEST**: Auto suggest from history, default: true, env: YAI_AUTO_SUGGEST
-
- Default config of `COMPLETION_PATH` and `ANSWER_PATH` is OpenAI compatible. If you are using OpenAI or other OpenAI compatible LLM provider, you can use the default config.
-
- If you wish to use other providers that are not compatible with the openai interface, you can use the following config:
-
- - claude:
-   - BASE_URL: https://api.anthropic.com/v1
-   - COMPLETION_PATH: /messages
-   - ANSWER_PATH: content.0.text
- - cohere:
-   - BASE_URL: https://api.cohere.com/v2
-   - COMPLETION_PATH: /chat
-   - ANSWER_PATH: message.content.[0].text
- - google:
-   - BASE_URL: https://generativelanguage.googleapis.com/v1beta/openai
-   - COMPLETION_PATH: /chat/completions
-   - ANSWER_PATH: choices[0].message.content
-
- You can use google OpenAI complete endpoint and leave `COMPLETION_PATH` and `ANSWER_PATH` as default. BASE_URL: https://generativelanguage.googleapis.com/v1beta/openai. See https://ai.google.dev/gemini-api/docs/openai
-
- Claude also has a testable OpenAI-compatible interface, you can just use Calude endpoint and leave `COMPLETION_PATH` and `ANSWER_PATH` as default. See: https://docs.anthropic.com/en/api/openai-sdk
-
- If you not sure how to config `COMPLETION_PATH` and `ANSWER_PATH`, here is a guide:
+ ### Configuration Options Reference
+
+ | Option | Description | Default | Env Variable |
+ |--------|-------------|---------|--------------|
+ | `BASE_URL` | API endpoint URL | `https://api.openai.com/v1` | `YAI_BASE_URL` |
+ | `API_KEY` | Your API key | - | `YAI_API_KEY` |
+ | `MODEL` | LLM model to use | `gpt-4o` | `YAI_MODEL` |
+ | `SHELL_NAME` | Shell type | `auto` | `YAI_SHELL_NAME` |
+ | `OS_NAME` | Operating system | `auto` | `YAI_OS_NAME` |
+ | `COMPLETION_PATH` | API completion path | `chat/completions` | `YAI_COMPLETION_PATH` |
+ | `ANSWER_PATH` | JSON path for response | `choices[0].message.content` | `YAI_ANSWER_PATH` |
+ | `STREAM` | Enable streaming | `true` | `YAI_STREAM` |
+ | `TIMEOUT` | API timeout (seconds) | `60` | `YAI_TIMEOUT` |
+ | `INTERACTIVE_ROUND` | Interactive mode rounds | `25` | `YAI_INTERACTIVE_ROUND` |
+ | `CODE_THEME` | Syntax highlighting theme | `monokai` | `YAI_CODE_THEME` |
+ | `TEMPERATURE` | Response randomness | `0.7` | `YAI_TEMPERATURE` |
+ | `TOP_P` | Top-p sampling | `1.0` | `YAI_TOP_P` |
+ | `MAX_TOKENS` | Max response tokens | `1024` | `YAI_MAX_TOKENS` |
+ | `MAX_HISTORY` | Max history entries | `500` | `YAI_MAX_HISTORY` |
+ | `AUTO_SUGGEST` | Enable history suggestions | `true` | `YAI_AUTO_SUGGEST` |
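The precedence implied by the table above (environment variable over config file over built-in default) can be pictured in a few lines of Python. This is an illustrative sketch, not YAICLI's actual loader; the helper name and the subset of defaults are made up for the example:

```python
import os

# A small subset of the built-in defaults, for illustration only
DEFAULTS = {"MODEL": "gpt-4o", "STREAM": "true", "MAX_HISTORY": "500"}

def get_setting(name: str, file_config: dict) -> str:
    # Env var (YAI_ prefix) wins over config.ini, which wins over defaults
    env_value = os.environ.get("YAI_" + name)
    if env_value is not None:
        return env_value
    return file_config.get(name, DEFAULTS.get(name))

file_config = {"MODEL": "gpt-4o-mini"}     # as if parsed from config.ini
os.environ["YAI_MODEL"] = "gpt-3.5-turbo"  # simulate an env override
print(get_setting("MODEL", file_config))   # -> gpt-3.5-turbo (env wins)
print(get_setting("STREAM", file_config))  # -> true (built-in default)
```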
+
+ ### LLM Provider Configuration
+
+ YAICLI works with all major LLM providers. The default configuration is set up for OpenAI, but you can easily switch to other providers.
+
+ #### Pre-configured Provider Settings
+
+ | Provider | BASE_URL | COMPLETION_PATH | ANSWER_PATH |
+ |----------|----------|-----------------|-------------|
+ | **OpenAI** (default) | `https://api.openai.com/v1` | `chat/completions` | `choices[0].message.content` |
+ | **Claude** (native API) | `https://api.anthropic.com/v1` | `messages` | `content[0].text` |
+ | **Claude** (OpenAI-compatible) | `https://api.anthropic.com/v1/openai` | `chat/completions` | `choices[0].message.content` |
+ | **Cohere** | `https://api.cohere.com/v2` | `chat` | `message.content[0].text` |
+ | **Google Gemini** | `https://generativelanguage.googleapis.com/v1beta/openai` | `chat/completions` | `choices[0].message.content` |
+
+ > **Note**: Many providers offer OpenAI-compatible endpoints that work with the default settings.
+ > - Google Gemini: https://ai.google.dev/gemini-api/docs/openai
+ > - Claude: https://docs.anthropic.com/en/api/openai-sdk
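For instance, pointing YAICLI at Google Gemini's OpenAI-compatible endpoint only requires changing a few `config.ini` values; `COMPLETION_PATH` and `ANSWER_PATH` keep their defaults, and the model name below is illustrative:

```ini
[core]
BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai
API_KEY=your-gemini-api-key
MODEL=gemini-2.0-flash
```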
+
+ #### Custom Provider Configuration Guide
+
+ To configure a custom provider:
+
  1. **Find the API Endpoint**:
-    - Visit the documentation of the LLM provider you want to use.
-    - Find the API endpoint for the completion task. This is usually under the "API Reference" or "Developer Documentation" section.
+    - Check the provider's API documentation for their chat completion endpoint
+
  2. **Identify the Response Structure**:
-    - Look for the structure of the response. This typically includes fields like `choices`, `completion`, etc.
- 3. **Identify the Path Expression**:
-    Forexample, claude response structure like this:
-    ```json
-    {
-      "content": [
-        {
-          "text": "Hi! My name is Claude.",
-          "type": "text"
-        }
-      ],
-      "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
-      "model": "claude-3-7-sonnet-20250219",
-      "role": "assistant",
-      "stop_reason": "end_turn",
-      "stop_sequence": null,
-      "type": "message",
-      "usage": {
-        "input_tokens": 2095,
-        "output_tokens": 503
-      }
+    - Look at the JSON response format to find where the text content is located
+
+ 3. **Set the Path Expression**:
+    - Use jmespath syntax to specify the path to the text content
+
+ **Example**: For Claude's native API, the response looks like:
+ ```json
+ {
+   "content": [
+     {
+       "text": "Hi! My name is Claude.",
+       "type": "text"
     }
-    ```
-    We are looking for the `text` field, so the path should be 1.Key `content`, 2.First obj `[0]`, 3.Key `text`. So it should be `content.[0].text`.
+   ],
+   "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
+   "model": "claude-3-7-sonnet-20250219",
+   "role": "assistant"
+ }
+ ```
+
+ The path to extract the text is: `content[0].text`
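To sanity-check a path expression against a sample response before saving it as `ANSWER_PATH`, you can walk the JSON by hand. The helper below is a simplified stand-in for jmespath (it only understands bare keys and `[index]` steps, which covers the paths shown in this README):

```python
import json
import re

def extract(path: str, data):
    # Walk steps like "content[0].text": bare keys and [index] lookups
    for step in re.findall(r"[A-Za-z_]\w*|\[\d+\]", path):
        data = data[int(step[1:-1])] if step.startswith("[") else data[step]
    return data

response = json.loads("""
{"content": [{"text": "Hi! My name is Claude.", "type": "text"}],
 "role": "assistant"}
""")
print(extract("content[0].text", response))  # -> Hi! My name is Claude.
```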
+
+ ### Syntax Highlighting Themes

- **CODE_THEME**
+ YAICLI supports all Pygments syntax highlighting themes. You can set your preferred theme in the config file:
+
+ ```ini
+ CODE_THEME=monokai
+ ```

- You can find the list of code theme here: https://pygments.org/styles/
+ Browse available themes at: https://pygments.org/styles/

- Default: monokia
- ![monikia](artwork/monokia.png)
+ ![monokai theme example](artwork/monokia.png)

- ## Usage
+ ## 🚀 Usage

- ### Basic Usage
+ ### Quick Start

  ```bash
- # One-shot mode
+ # Get a quick answer
  ai "What is the capital of France?"

- # Chat mode
+ # Start an interactive chat session
  ai --chat

- # Shell command generation mode
+ # Generate and execute shell commands
  ai --shell "Create a backup of my Documents folder"

- # Verbose mode for debugging
+ # Analyze code from a file
+ cat app.py | ai "Explain what this code does"
+
+ # Debug with verbose mode
  ai --verbose "Explain quantum computing"
  ```

- ### Command Line Options
-
- Arguments:
- - `<PROMPT>`: Argument
-
- Options:
- - `--install-completion`: Install completion for the current shell
- - `--show-completion`: Show completion for the current shell, to copy it or customize the installation
- - `--help` or `-h`: Show this message and exit
- - `--template`: Show the config template.
-
- Run Options:
- - `--verbose` or `-V`: Show verbose information
- - `--chat` or `-c`: Start in chat mode
- - `--shell` or `-s`: Generate and execute shell command
-
- ```bash
- ai -h
+ ### Command Line Reference

+ ```
  Usage: ai [OPTIONS] [PROMPT]
-
- yaicli - Your AI interface in cli.
-
- ╭─ Arguments ─────────────────────────────────────────────────────────────────╮
- │ prompt [PROMPT] The prompt send to the LLM                                  │
- ╰─────────────────────────────────────────────────────────────────────────────╯
- ╭─ Options ───────────────────────────────────────────────────────────────────╮
- │ --template            Show the config template.                             │
- │ --install-completion  Install completion for the current shell.             │
- │ --show-completion     Show completion for the current shell, to copy it or  │
- │                       customize the installation.                           │
- │ --help  -h            Show this message and exit.                           │
- ╰─────────────────────────────────────────────────────────────────────────────╯
- ╭─ Run Option ────────────────────────────────────────────────────────────────╮
- │ --chat     -c  Start in chat mode                                           │
- │ --shell    -s  Generate and execute shell command                           │
- │ --verbose  -V  Show verbose information                                     │
- ╰─────────────────────────────────────────────────────────────────────────────╯
-
  ```

- ### Interactive Mode
-
- In interactive mode (chat or shell), you can:
- - Type your queries and get responses
- - Use `Tab` to switch between Chat and Execute modes
- - Type '/exit' to exit
- - Type '/clear' to clear history
- - Type '/his' to show history
-
- ### Shell Command Generation
-
- In Execute mode:
- 1. Enter your request in natural language
- 2. YAICLI will generate an appropriate shell command
- 3. Review the command
- 4. Confirm to execute or reject
-
- ### Keyboard Shortcuts
- - `Tab`: Switch between Chat and Execute modes
- - `Ctrl+C`: Exit
- - `Ctrl+R`: Search history
- - `↑/↓`: Navigate history
+ | Option | Short | Description |
+ |--------|-------|-------------|
+ | `--chat` | `-c` | Start in interactive chat mode |
+ | `--shell` | `-s` | Generate and execute shell commands |
+ | `--help` | `-h` | Show help message and exit |
+ | `--verbose` | `-V` | Show detailed debug information |
+ | `--template` | | Display the config template |
+
+ ### Interactive Mode Features
+
+ <table>
+ <tr>
+ <td width="50%">
+
+ **Commands**
+ - `/exit` - Exit the application
+ - `/clear` - Clear conversation history
+ - `/his` - Show command history
+ - `/mode chat|exec` - Switch modes
+
+ **Keyboard Shortcuts**
+ - `Tab` - Toggle between Chat/Execute modes
+ - `Ctrl+C` or `Ctrl+D` - Exit
+ - `Ctrl+R` - Search history
+ - `↑/↓` - Navigate through history
+
+ </td>
+ <td width="50%">
+
+ **Chat Mode** (💬)
+ - Natural conversations with context
+ - Markdown and code formatting
+ - Reasoning display for complex queries
+
+ **Execute Mode** (🚀)
+ - Generate shell commands from descriptions
+ - Review commands before execution
+ - Edit commands before running
+ - Safe execution with confirmation
+
+ </td>
+ </tr>
+ </table>
+
+ ### Input Methods
+
+ **Direct Input**
+ ```bash
+ ai "What is the capital of France?"
+ ```

- ### Stdin
- You can also pipe input to YAICLI:
+ **Piped Input**
  ```bash
  echo "What is the capital of France?" | ai
  ```

+ **File Analysis**
+ ```bash
+ cat demo.py | ai "Explain this code"
+ ```
+
+ **Combined Input**
  ```bash
- cat demo.py | ai "How to use this tool?"
+ cat error.log | ai "Why am I getting these errors in my Python app?"
  ```

- ### History
- Support max history size. Set MAX_HISTORY in config file. Default is 500.
+ ### History Management
+
+ YAICLI maintains a history of your interactions (default: 500 entries) stored in `~/.yaicli_history`. You can:

- ## Examples
+ - Configure history size with `MAX_HISTORY` in config
+ - Search history with `Ctrl+R` in interactive mode
+ - View recent commands with `/his` command
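The capped history behaves like a fixed-size buffer that evicts the oldest entries first. A minimal sketch of that behaviour (illustrative only, not YAICLI's actual storage code):

```python
from collections import deque

MAX_HISTORY = 500  # mirrors the config default

history = deque(maxlen=MAX_HISTORY)  # oldest entries drop off automatically
for i in range(MAX_HISTORY + 10):
    history.append(f"query {i}")

print(len(history))  # -> 500
print(history[0])    # -> query 10 (entries 0-9 were evicted)
```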

- ### Have a Chat
+ ## 📱 Examples
+
+ ### Quick Answer Mode

  ```bash
  $ ai "What is the capital of France?"
@@ -505,7 +533,7 @@ Assistant:
  The capital of France is Paris.
  ```

- ### Command Gen and Run
+ ### Command Generation & Execution

  ```bash
  $ ai -s 'Check the current directory size'
@@ -526,46 +554,48 @@ Output:
  ```bash
  $ ai --chat

- ██  ██  █████  ██  ██████ ██      ██
-  ████  ██   ██ ██ ██      ██      ██
-   ██   ███████ ██ ██      ██      ██
-   ██   ██   ██ ██ ██      ██      ██
-   ██   ██   ██ ██  ██████ ███████ ██
+ ██  ██  █████  ██  ██████ ██      ██
+  ████  ██   ██ ██ ██      ██      ██
+   ██   ███████ ██ ██      ██      ██
+   ██   ██   ██ ██ ██      ██      ██
+   ██   ██   ██ ██  ██████ ███████ ██
+
+ Welcome to YAICLI!
+ Press TAB to switch mode
+ /clear             : Clear chat history
+ /his               : Show chat history
+ /exit|Ctrl+D|Ctrl+C: Exit
+ /mode chat|exec    : Switch mode (Case insensitive)
+ ───────────────────────────────────────────────────────────────────────────────
+ 💬 > Tell me about the solar system

- Press TAB to change in chat and exec mode
- Type /clear to clear chat history
- Type /his to see chat history
- Press Ctrl+C or type /exit to exit
+ Assistant:
+ Solar System Overview

- 💬 > Tell me about the solar system
+ Central Star: The Sun (99% of system mass, nuclear fusion).
+ • Planets: 8 total.
+   • Terrestrial (rocky): Mercury, Venus, Earth, Mars.
+   • Gas Giants: Jupiter, Saturn.
+   • Ice Giants: Uranus, Neptune.
+ • Moons: Over 200 (e.g., Earth: 1, Jupiter: 95).
+ • Smaller Bodies:
+   • Asteroids (between Mars/Jupiter), comets (icy, distant), *dwarf planets* (Pluto, Ceres).
+   • Oort Cloud: spherical shell of icy objects ~1–100,000 AU from the Sun.
+ • Heliosphere: Solar wind boundary protecting Earth from cosmic radiation.

- Assistant:
- Certainly! Here’s a brief overview of the solar system:
-
- • Sun: The central star of the solar system, providing light and energy.
- • Planets:
-   • Mercury: Closest to the Sun, smallest planet.
-   • Venus: Second planet, known for its thick atmosphere and high surface temperature.
-   • Earth: Third planet, the only known planet to support life.
-   • Mars: Fourth planet, often called the "Red Planet" due to its reddish appearance.
-   • Jupiter: Largest planet, a gas giant with many moons.
-   • Saturn: Known for its prominent ring system, also a gas giant.
-   • Uranus: An ice giant, known for its unique axial tilt.
-   • Neptune: Another ice giant, known for its deep blue color.
- • Dwarf Planets:
-   • Pluto: Once considered the ninth planet, now classified as
+ Key Fact: Earth is the only confirmed habitable planet.

  🚀 > Check the current directory size
  Assistant:
  du -sh .
- ╭─ Command ─╮
- │ du -sh .  │
- ╰───────────╯
+ ╭─ Suggest Command ─╮
+ │ du -sh .          │
+ ╰───────────────────╯
  Execute command? [e]dit, [y]es, [n]o (n): e
- Edit command, press enter to execute:
- du -sh ./
- Output:
- 109M    ./
+ Edit command: du -sh ./
+ --- Executing ---
+ 55M    ./
+ --- Finished ---
  🚀 >
  ```

@@ -575,9 +605,9 @@ Output:
  $ ai --shell "Find all PDF files in my Downloads folder"
  Assistant:
  find ~/Downloads -type f -name "*.pdf"
- ╭─ Command ──────────────────────────────╮
- │ find ~/Downloads -type f -name "*.pdf" │
- ╰────────────────────────────────────────╯
+ ╭─ Suggest Command ───────────────────────╮
+ │ find ~/Downloads -type f -iname "*.pdf" │
+ ╰─────────────────────────────────────────╯
  Execute command? [e]dit, [y]es, [n]o (n): y
  Output:

@@ -586,24 +616,41 @@ Output:
  ...
  ```

- ## Technical Implementation
+ ## 💻 Technical Details
+
+ ### Architecture
+
+ YAICLI is designed with a modular architecture that separates concerns and makes the codebase maintainable:
+
+ - **CLI Module**: Handles user interaction and command parsing
+ - **API Client**: Manages communication with LLM providers
+ - **Config Manager**: Handles layered configuration
+ - **History Manager**: Maintains conversation history with LRU functionality
+ - **Printer**: Formats and displays responses with rich formatting
+
+ ### Dependencies

- YAICLI is built using several Python libraries:
+ | Library | Purpose |
+ |---------|---------|
+ | [Typer](https://typer.tiangolo.com/) | Command-line interface with type hints |
+ | [Rich](https://rich.readthedocs.io/) | Terminal formatting and beautiful display |
+ | [prompt_toolkit](https://python-prompt-toolkit.readthedocs.io/) | Interactive input with history and auto-completion |
+ | [httpx](https://www.python-httpx.org/) | Modern HTTP client with async support |
+ | [jmespath](https://jmespath.org/) | JSON data extraction |

- - **Typer**: Provides the command-line interface
- - **Rich**: Provides terminal content formatting and beautiful display
- - **prompt_toolkit**: Provides interactive command-line input experience
- - **httpx**: Handles API requests
- - **jmespath**: Parses JSON responses
+ ## 👨‍💻 Contributing

- ## Contributing
+ Contributions are welcome! Here's how you can help:

- Contributions of code, issue reports, or feature suggestions are welcome.
+ - **Bug Reports**: Open an issue describing the bug and how to reproduce it
+ - **Feature Requests**: Suggest new features or improvements
+ - **Code Contributions**: Submit a PR with your changes
+ - **Documentation**: Help improve or translate the documentation

- ## License
+ ## 📃 License

  [Apache License 2.0](LICENSE)

  ---

- *YAICLI - Making your terminal smarter*
+ <p align="center"><i>YAICLI - Your AI Command Line Interface</i></p>