yaicli 0.2.0__py3-none-any.whl → 0.3.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: yaicli
- Version: 0.2.0
+ Version: 0.3.1
  Summary: A simple CLI tool to interact with LLM
  Project-URL: Homepage, https://github.com/belingud/yaicli
  Project-URL: Repository, https://github.com/belingud/yaicli
@@ -229,12 +229,14 @@ Description-Content-Type: text/markdown
  ![PyPI - Downloads](https://img.shields.io/pypi/dm/yaicli?logo=pypi&style=for-the-badge)
  ![Pepy Total Downloads](https://img.shields.io/pepy/dt/yaicli?style=for-the-badge&logo=python)

- YAICLI is a powerful yet lightweight command-line AI assistant that brings the capabilities of Large Language Models (LLMs) like GPT-4o directly to your terminal. Interact with AI through multiple modes: have natural conversations, generate and execute shell commands, or get quick answers without leaving your workflow.
+ YAICLI is a powerful yet lightweight command-line AI assistant that brings the capabilities of Large Language Models (
+ LLMs) like GPT-4o directly to your terminal. Interact with AI through multiple modes: have natural conversations,
+ generate and execute shell commands, or get quick answers without leaving your workflow.

  **Supports both standard and deep reasoning models across all major LLM providers.**

  <p align="center">
- <img src="https://vhs.charm.sh/vhs-5U1BBjJkTUBReRswsSgIVx.gif" alt="YAICLI Demo" width="85%">
+ <img src="https://vhs.charm.sh/vhs-5U1BBjJkTUBReRswsSgIVx.gif" alt="YAICLI Chat Demo" width="85%">
  </p>

  > [!NOTE]
@@ -244,39 +246,41 @@ YAICLI is a powerful yet lightweight command-line AI assistant that brings the c

  ### 🔄 Multiple Interaction Modes

- - **💬 Chat Mode**: Engage in persistent conversations with full context tracking
- - **🚀 Execute Mode**: Generate and safely run OS-specific shell commands
- - **⚡ Quick Query**: Get instant answers without entering interactive mode
+ - **💬 Chat Mode**: Engage in persistent conversations with full context tracking
+ - **🚀 Execute Mode**: Generate and safely run OS-specific shell commands
+ - **⚡ Quick Query**: Get instant answers without entering interactive mode

  ### 🧠 Smart Environment Awareness

- - **Auto-detection**: Identifies your shell (bash/zsh/PowerShell/CMD) and OS
- - **Safe Command Execution**: 3-step verification before running any command
- - **Flexible Input**: Pipe content directly (`cat log.txt | ai "analyze this"`)
+ - **Auto-detection**: Identifies your shell (bash/zsh/PowerShell/CMD) and OS
+ - **Safe Command Execution**: 3-step verification before running any command
+ - **Flexible Input**: Pipe content directly (`cat log.txt | ai "analyze this"`)

  ### 🔌 Universal LLM Compatibility

- - **OpenAI-Compatible**: Works with any OpenAI-compatible API endpoint
- - **Multi-Provider Support**: Easy configuration for Claude, Gemini, Cohere, etc.
- - **Custom Response Parsing**: Extract exactly what you need with jmespath
+ - **OpenAI-Compatible**: Works with any OpenAI-compatible API endpoint
+ - **Multi-Provider Support**: Easy configuration for Claude, Gemini, Cohere, etc.
+ - **Custom Response Parsing**: Extract exactly what you need with jmespath

  ### 💻 Enhanced Terminal Experience

- - **Real-time Streaming**: See responses as they're generated with cursor animation
- - **Rich History Management**: LRU-based history with 500 entries by default
- - **Syntax Highlighting**: Beautiful code formatting with customizable themes
+ - **Real-time Streaming**: See responses as they're generated with cursor animation
+ - **Rich History Management**: LRU-based history with 500 entries by default
+ - **Syntax Highlighting**: Beautiful code formatting with customizable themes

  ### 🛠️ Developer-Friendly

- - **Layered Configuration**: Environment variables > Config file > Sensible defaults
- - **Debugging Tools**: Verbose mode with detailed API tracing
- - **Lightweight**: Minimal dependencies with focused functionality
+ - **Layered Configuration**: Environment variables > Config file > Sensible defaults
+ - **Debugging Tools**: Verbose mode with detailed API tracing
+ - **Lightweight**: Minimal dependencies with focused functionality
+
+ ![What is life](artwork/reasoning_example.png)

  ## 📦 Installation

  ### Prerequisites

- - Python 3.9 or higher
+ - Python 3.9 or higher

  ### Quick Install

@@ -311,49 +315,55 @@ YAICLI uses a simple configuration file to store your preferences and API keys.

  ### Configuration File Structure

- The default configuration file is located at `~/.config/yaicli/config.ini`. You can use `ai --template` to see default settings, just as below:
+ The default configuration file is located at `~/.config/yaicli/config.ini`. You can use `ai --template` to see default
+ settings, just as below:

  ```ini
  [core]
- PROVIDER=openai
- BASE_URL=https://api.openai.com/v1
- API_KEY=
- MODEL=gpt-4o
+ PROVIDER = openai
+ BASE_URL = https://api.openai.com/v1
+ API_KEY =
+ MODEL = gpt-4o

  # auto detect shell and os (or specify manually, e.g., bash, zsh, powershell.exe)
- SHELL_NAME=auto
- OS_NAME=auto
+ SHELL_NAME = auto
+ OS_NAME = auto

  # API paths (usually no need to change for OpenAI compatible APIs)
- COMPLETION_PATH=chat/completions
- ANSWER_PATH=choices[0].message.content
+ COMPLETION_PATH = chat/completions
+ ANSWER_PATH = choices[0].message.content

  # true: streaming response, false: non-streaming
- STREAM=true
+ STREAM = true

  # LLM parameters
- TEMPERATURE=0.7
- TOP_P=1.0
- MAX_TOKENS=1024
- TIMEOUT=60
+ TEMPERATURE = 0.7
+ TOP_P = 1.0
+ MAX_TOKENS = 1024
+ TIMEOUT = 60

  # Interactive mode parameters
- INTERACTIVE_ROUND=25
+ INTERACTIVE_ROUND = 25

  # UI/UX
- CODE_THEME=monokai
- MAX_HISTORY=500 # Max entries kept in history file
- AUTO_SUGGEST=true
+ CODE_THEME = monokai
+ # Max entries kept in history file
+ MAX_HISTORY = 500
+ AUTO_SUGGEST = true
+ # Print reasoning content or not
+ SHOW_REASONING = true
+ # Text alignment (default, left, center, right, full)
+ JUSTIFY = default

  # Chat history settings
- CHAT_HISTORY_DIR={DEFAULT_CONFIG_MAP["CHAT_HISTORY_DIR"]["value"]}
- MAX_SAVED_CHATS={DEFAULT_CONFIG_MAP["MAX_SAVED_CHATS"]["value"]}
+ CHAT_HISTORY_DIR = <tempdir>/yaicli/chats
+ MAX_SAVED_CHATS = 20
  ```

  ### Configuration Options Reference

  | Option | Description | Default | Env Variable |
- | ------------------- | ------------------------------------------- | ---------------------------- | ----------------------- |
+ |---------------------|---------------------------------------------|------------------------------|-------------------------|
  | `PROVIDER` | LLM provider (openai, claude, cohere, etc.) | `openai` | `YAI_PROVIDER` |
  | `BASE_URL` | API endpoint URL | `https://api.openai.com/v1` | `YAI_BASE_URL` |
  | `API_KEY` | Your API key | - | `YAI_API_KEY` |
@@ -371,17 +381,20 @@ MAX_SAVED_CHATS={DEFAULT_CONFIG_MAP["MAX_SAVED_CHATS"]["value"]}
  | `MAX_TOKENS` | Max response tokens | `1024` | `YAI_MAX_TOKENS` |
  | `MAX_HISTORY` | Max history entries | `500` | `YAI_MAX_HISTORY` |
  | `AUTO_SUGGEST` | Enable history suggestions | `true` | `YAI_AUTO_SUGGEST` |
+ | `SHOW_REASONING` | Enable reasoning display | `true` | `YAI_SHOW_REASONING` |
+ | `JUSTIFY` | Text alignment | `default` | `YAI_JUSTIFY` |
  | `CHAT_HISTORY_DIR` | Chat history directory | `<tempdir>/yaicli/history` | `YAI_CHAT_HISTORY_DIR` |
  | `MAX_SAVED_CHATS` | Max saved chats | `20` | `YAI_MAX_SAVED_CHATS` |
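
Because configuration is layered (environment variables > config file > defaults), any option in the table above can be overridden per invocation. A minimal sketch using the documented `YAI_`-prefixed variables:

```bash
# One-off overrides; YAI_* environment variables take precedence over config.ini values
export YAI_API_KEY="your-api-key"
export YAI_MAX_TOKENS=2048
ai "Summarize the docs in this folder"
```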

  ### LLM Provider Configuration

- YAICLI works with all major LLM providers. The default configuration is set up for OpenAI, but you can easily switch to other providers.
+ YAICLI works with all major LLM providers. The default configuration is set up for OpenAI, but you can easily switch to
+ other providers.

  #### Pre-configured Provider Settings

  | Provider | BASE_URL | COMPLETION_PATH | ANSWER_PATH |
- | ------------------------------ | --------------------------------------------------------- | ------------------ | ---------------------------- |
+ |--------------------------------|-----------------------------------------------------------|--------------------|------------------------------|
  | **OpenAI** (default) | `https://api.openai.com/v1` | `chat/completions` | `choices[0].message.content` |
  | **Claude** (native API) | `https://api.anthropic.com/v1` | `messages` | `content[0].text` |
  | **Claude** (OpenAI-compatible) | `https://api.anthropic.com/v1/openai` | `chat/completions` | `choices[0].message.content` |
@@ -390,8 +403,8 @@ YAICLI works with all major LLM providers. The default configuration is set up f

  > **Note**: Many providers offer OpenAI-compatible endpoints that work with the default settings.
  >
- > - Google Gemini: https://ai.google.dev/gemini-api/docs/openai
- > - Claude: https://docs.anthropic.com/en/api/openai-sdk
+ > - Google Gemini: https://ai.google.dev/gemini-api/docs/openai
+ > - Claude: https://docs.anthropic.com/en/api/openai-sdk
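
For example, switching to Claude's OpenAI-compatible endpoint only needs the values from the provider table above. A sketch using environment overrides (the same keys can be set in `config.ini`; the `YAI_` prefix is assumed to apply to every option, and the model name is shown for illustration only):

```bash
# Claude via its OpenAI-compatible endpoint (values taken from the provider table)
export YAI_BASE_URL="https://api.anthropic.com/v1/openai"
export YAI_COMPLETION_PATH="chat/completions"
export YAI_ANSWER_PATH="choices[0].message.content"
export YAI_API_KEY="your-anthropic-key"
export YAI_MODEL="claude-3-7-sonnet-20250219"  # illustrative model name
ai "Hello from Claude"
```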

  #### Custom Provider Configuration Guide

@@ -412,15 +425,15 @@ To configure a custom provider:

  ```json
  {
-   "content": [
-     {
-       "text": "Hi! My name is Claude.",
-       "type": "text"
-     }
-   ],
-   "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
-   "model": "claude-3-7-sonnet-20250219",
-   "role": "assistant"
+     "content": [
+         {
+             "text": "Hi! My name is Claude.",
+             "type": "text"
+         }
+     ],
+     "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
+     "model": "claude-3-7-sonnet-20250219",
+     "role": "assistant"
  }
  ```

@@ -431,7 +444,7 @@ The path to extract the text is: `content.[0].text`
  YAICLI supports all Pygments syntax highlighting themes. You can set your preferred theme in the config file:

  ```ini
- CODE_THEME=monokai
+ CODE_THEME = monokai
  ```

  Browse available themes at: https://pygments.org/styles/
@@ -452,6 +465,9 @@ ai --chat
  # Generate and execute shell commands
  ai --shell "Create a backup of my Documents folder"

+ # Generate code snippets, default in Python
+ ai --code "Write a Python function to sort a list"
+
  # Analyze code from a file
  cat app.py | ai "Explain what this code does"

@@ -462,17 +478,50 @@ ai --verbose "Explain quantum computing"
  ### Command Line Reference

  ```
- Usage: ai [OPTIONS] [PROMPT]
+ Usage: ai [OPTIONS] [PROMPT]
+
+ YAICLI: Your AI assistant in the command line.
+ Call with a PROMPT to get a direct answer, use --shell to execute as command, or use --chat for an interactive session.
+
+ ╭─ Arguments ──────────────────────────────────────────────────────────────────────────────╮
+ │ prompt  [PROMPT]  The prompt to send to the LLM. Reads from stdin if available. [default: None] │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
+ ╭─ Options ────────────────────────────────────────────────────────────────────────────────╮
+ │ --install-completion      Install completion for the current shell. │
+ │ --show-completion         Show completion for the current shell, to copy it or customize the installation. │
+ │ --help  -h                Show this message and exit. │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
+ ╭─ LLM Options ────────────────────────────────────────────────────────────────────────────╮
+ │ --model        -M  TEXT                           Specify the model to use. │
+ │ --temperature  -T  FLOAT RANGE [0.0<=x<=2.0]      Specify the temperature to use. [default: 0.7] │
+ │ --top-p        -P  FLOAT RANGE [0.0<=x<=1.0]      Specify the top-p to use. [default: 1.0] │
+ │ --max-tokens   -M  INTEGER RANGE [x>=1]           Specify the max tokens to use. [default: 1024] │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
+ ╭─ Role Options ───────────────────────────────────────────────────────────────────────────╮
+ │ --role         -r  TEXT   Specify the assistant role to use. [default: DEFAULT] │
+ │ --create-role      TEXT   Create a new role with the specified name. │
+ │ --delete-role      TEXT   Delete a role with the specified name. │
+ │ --list-roles              List all available roles. │
+ │ --show-role        TEXT   Show the role with the specified name. │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
+ ╭─ Chat Options ───────────────────────────────────────────────────────────────────────────╮
+ │ --chat        -c   Start in interactive chat mode. │
+ │ --list-chats       List saved chat sessions. │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
+ ╭─ Shell Options ──────────────────────────────────────────────────────────────────────────╮
+ │ --shell  -s   Generate and optionally execute a shell command (non-interactive). │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
+ ╭─ Code Options ───────────────────────────────────────────────────────────────────────────╮
+ │ --code        Generate code in plaintext (non-interactive). │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
+ ╭─ Other Options ──────────────────────────────────────────────────────────────────────────╮
+ │ --verbose         -V                              Show verbose output (e.g., loaded config). │
+ │ --template                                        Show the default config file template and exit. │
+ │ --show-reasoning      --no-show-reasoning         Show reasoning content from the LLM. (default: True) │
+ │ --justify         -j  [default|left|center|right|full]  Specify the justify to use. [default: default] │
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
  ```

- | Option | Short | Description |
- | ------------ | ----- | ----------------------------------- |
- | `--chat` | `-c` | Start in interactive chat mode |
- | `--shell` | `-s` | Generate and execute shell commands |
- | `--help` | `-h` | Show help message and exit |
- | `--verbose` | `-V` | Show detailed debug information |
- | `--template` | | Display the config template |
-
  ### Interactive Mode Features

  <table>
@@ -481,37 +530,37 @@ Usage: ai [OPTIONS] [PROMPT]

  **Commands**

- - `/clear` - Clear conversation history
- - `/his` - Show command history
- - `/list` - List saved chats
- - `/save <title>` - Save current chat with title
- - `/load <index>` - Load a saved chat
- - `/del <index>` - Delete a saved chat
- - `/exit` - Exit the application
- - `/mode chat|exec` - Switch modes
+ - `/clear` - Clear conversation history
+ - `/his` - Show command history
+ - `/list` - List saved chats
+ - `/save <title>` - Save current chat with title
+ - `/load <index>` - Load a saved chat
+ - `/del <index>` - Delete a saved chat
+ - `/exit` - Exit the application
+ - `/mode chat|exec` - Switch modes

  **Keyboard Shortcuts**

- - `Tab` - Toggle between Chat/Execute modes
- - `Ctrl+C` or `Ctrl+D` - Exit
- - `Ctrl+R` - Search history
- - `↑/↓` - Navigate through history
+ - `Tab` - Toggle between Chat/Execute modes
+ - `Ctrl+C` or `Ctrl+D` - Exit
+ - `Ctrl+R` - Search history
+ - `↑/↓` - Navigate through history

  </td>
  <td width="50%">

  **Chat Mode** (💬)

- - Natural conversations with context
- - Markdown and code formatting
- - Reasoning display for complex queries
+ - Natural conversations with context
+ - Markdown and code formatting
+ - Reasoning display for complex queries

  **Execute Mode** (🚀)

- - Generate shell commands from descriptions
- - Review commands before execution
- - Edit commands before running
- - Safe execution with confirmation
+ - Generate shell commands from descriptions
+ - Review commands before execution
+ - Edit commands before running
+ - Safe execution with confirmation

  </td>
  </tr>
@@ -519,9 +568,12 @@ Usage: ai [OPTIONS] [PROMPT]

  ### Chat Persistent

- The `<PROMPT>` parameter in the chat mode will be used as a title to persist the chat content to the file system, with the save directory being a temporary directory, which may vary between machines, and it is determined on the first run.
+ The `<PROMPT>` parameter in the chat mode will be used as a title to persist the chat content to the file system, with
+ the save directory being a temporary directory, which may vary between machines, and it is determined on the first run.

- If the `<PROMPT>` parameter is not specified when entering `chat` mode, the session will be treated as a temporary session and will not be persisted. Of course, you can also manually call the `/save <title>` command to save during the chat.
+ If the `<PROMPT>` parameter is not specified when entering `chat` mode, the session will be treated as a temporary
+ session and will not be persisted. Of course, you can also manually call the `/save <title>` command to save during the
+ chat.
  When you run the same `chat` command again, the previous session will be automatically loaded.

  ```bash
@@ -624,13 +676,39 @@ cat demo.py | ai "Explain this code"
  cat error.log | ai "Why am I getting these errors in my Python app?"
  ```

+ ### Role Management
+
+ ```bash
+ # Create a new role, you need to input the role description
+ ai --create-role "Philosopher Master"
+
+ # List all roles
+ ai --list-roles
+
+ # Show a role
+ ai --show-role "Philosopher Master"
+
+ # Delete a role
+ ai --delete-role "Philosopher Master"
+ ```
+
+ Once you create a role, you can use it in the `--role` option.
+
+ ```bash
+ # Use a specific role
+ ai --role "Philosopher Master" "What is the meaning of life?"
+
+ # Use a role in chat
+ ai --chat --role "Philosopher Master"
+ ```
+
  ### History Management

  YAICLI maintains a history of your interactions (default: 500 entries) stored in `~/.yaicli_history`. You can:

- - Configure history size with `MAX_HISTORY` in config
- - Search history with `Ctrl+R` in interactive mode
- - View recent commands with `/his` command
+ - Configure history size with `MAX_HISTORY` in config
+ - Search history with `Ctrl+R` in interactive mode
+ - View recent commands with `/his` command

  ## 📱 Examples

@@ -658,6 +736,23 @@ Output:
  109M ./
  ```

+ ### Code Generation
+
+ In code mode, select the language for code generation. If none is specified, Python is the default.
+
+ The `--code` mode outputs plain text, making it easy to copy, paste, or redirect to a file, especially when using the standard model.
+
+ When using a deep reasoning model, the thinking content is displayed with syntax highlighting. To disable this, use the `--no-show-reasoning` option or set `SHOW_REASONING` to `false` in the configuration.
+
+ ```bash
+ $ ai --code 'Write a fib generator'
+ def fib_generator():
+     a, b = 0, 1
+     while True:
+         yield a
+         a, b = b, a + b
+ ```
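
Since the generated code is printed as plain text, it can be redirected or piped like any other command output. A small sketch, assuming a standard (non-reasoning) model so that only the code is emitted:

```bash
# Save the generated snippet straight to a file
ai --code 'Write a fib generator' > fib.py

# Or page through a longer answer
ai --code 'Write a CSV parsing helper' | less
```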
+
  ### Chat Mode Example

  ```bash
@@ -731,22 +826,30 @@ Output:
  ...
  ```

+ ### Code Mode Example
+
+ ```bash
+ $ ai --code "write a fib generator" --model deepseek-r1
+ ```
+
+ ![fib code example](artwork/reasoning_code_example.png)
+
  ## 💻 Technical Details

  ### Architecture

  YAICLI is designed with a modular architecture that separates concerns and makes the codebase maintainable:

- - **CLI Module**: Handles user interaction and command parsing
- - **API Client**: Manages communication with LLM providers
- - **Config Manager**: Handles layered configuration
- - **History Manager**: Maintains conversation history with LRU functionality
- - **Printer**: Formats and displays responses with rich formatting
+ - **CLI Module**: Handles user interaction and command parsing
+ - **API Client**: Manages communication with LLM providers
+ - **Config Manager**: Handles layered configuration
+ - **History Manager**: Maintains conversation history with LRU functionality
+ - **Printer**: Formats and displays responses with rich formatting

  ### Dependencies

  | Library | Purpose |
- | --------------------------------------------------------------- | -------------------------------------------------- |
+ |-----------------------------------------------------------------|----------------------------------------------------|
  | [Typer](https://typer.tiangolo.com/) | Command-line interface with type hints |
  | [Rich](https://rich.readthedocs.io/) | Terminal formatting and beautiful display |
  | [prompt_toolkit](https://python-prompt-toolkit.readthedocs.io/) | Interactive input with history and auto-completion |
@@ -757,10 +860,10 @@ YAICLI is designed with a modular architecture that separates concerns and makes

  Contributions are welcome! Here's how you can help:

- - **Bug Reports**: Open an issue describing the bug and how to reproduce it
- - **Feature Requests**: Suggest new features or improvements
- - **Code Contributions**: Submit a PR with your changes
- - **Documentation**: Help improve or translate the documentation
+ - **Bug Reports**: Open an issue describing the bug and how to reproduce it
+ - **Feature Requests**: Suggest new features or improvements
+ - **Code Contributions**: Submit a PR with your changes
+ - **Documentation**: Help improve or translate the documentation

  ## 📃 License

@@ -0,0 +1,20 @@
+ pyproject.toml,sha256=C4r3nsve6z4u7vKI2eIKvQ4YWoiGwAYL72w_YcjTkcc,1519
+ yaicli/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ yaicli/api.py,sha256=9kRozzxBKduQsda3acnxzvOD9wRLL0cH182L4ddHY8E,13666
+ yaicli/chat_manager.py,sha256=I7BAMz91FLYT6x69wbomtAGLx0WsoTwS4Wo0MgP6P9I,10644
+ yaicli/cli.py,sha256=HnfmD-7mLsaUCJqQYehsfxonuyJ11B5zi3lbKbQxrH0,22684
+ yaicli/config.py,sha256=xtzgXApM93zCqSUxmVSBdph0co_NKfEUU3hWtPe8qvM,6236
+ yaicli/console.py,sha256=291F4hGksJtxYpg_mehepCIJ-eB2MaDNIyv1JAMgJ1Y,1985
+ yaicli/const.py,sha256=iOQNG6M4EBmKgbwZdNqRsHrcQ7Od1nKOyLAqUhfMEBM,7020
+ yaicli/entry.py,sha256=Yp0Z--x-7dowrz-h8hJJ4_BoCzuDjS11NcM8YgFzUoY,7460
+ yaicli/exceptions.py,sha256=ndedSdE0uaxxHrWN944BkbhMfRMSMxGDfmqmCKCGJco,924
+ yaicli/history.py,sha256=s-57X9FMsaQHF7XySq1gGH_jpd_cHHTYafYu2ECuG6M,2472
+ yaicli/printer.py,sha256=nXpralD5qZJQga3OTdEPhj22g7UoF-4mJbZeOtWXojo,12430
+ yaicli/render.py,sha256=mB1OT9859_PTwI9f-KY802lPaeQXKRw6ls_5jN21jWc,511
+ yaicli/roles.py,sha256=bhXpLnGTPRZp3-K1Tt6ppTsuG2v9S0RAXikfMFhDs_U,9144
+ yaicli/utils.py,sha256=MLvb-C5n19AD9Z1nW4Z3Z43ZKNH8STxQmNDnL7mq26E,4490
+ yaicli-0.3.1.dist-info/METADATA,sha256=fAGgZNCTckGuIS8Us6un9AC-60I5ZsEfw6k5RRHK7XU,45320
+ yaicli-0.3.1.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+ yaicli-0.3.1.dist-info/entry_points.txt,sha256=iMhGm3btBaqrknQoF6WCg5sdx69ZyNSC73tRpCcbcLw,63
+ yaicli-0.3.1.dist-info/licenses/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
+ yaicli-0.3.1.dist-info/RECORD,,
@@ -1,16 +0,0 @@
- pyproject.toml,sha256=7jXO8UZoTNdyWSkPxoi-5AbItKRXpXiA6l5vSEcht78,1519
- yaicli/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- yaicli/api.py,sha256=gKqkRc-sGg5uKfg97mMErxrRoLvQH9nb62yn4pp_7K0,13925
- yaicli/chat_manager.py,sha256=McYkizT2dsRaI1LQLFij1oUsst8f00R3zAxnbpt9vCw,9888
- yaicli/cli.py,sha256=pQpQkm2a1ppSS2T4UJa3UTCULUD1Ipk3s5co79XwU7M,21598
- yaicli/config.py,sha256=EU2m5Bm3uv0300LvKmP6wTIq-n9eQ-SgSddmdoHW8AI,5802
- yaicli/const.py,sha256=XfDwiFvpEOdFkggWMO-2kEIubaKEa0QpKdEaAnBof7o,5480
- yaicli/entry.py,sha256=Wo3gdmgjQnQIGdbsZ7wIHiwkO-pvB-LZ6ZlUu8DYh_0,3500
- yaicli/history.py,sha256=s-57X9FMsaQHF7XySq1gGH_jpd_cHHTYafYu2ECuG6M,2472
- yaicli/printer.py,sha256=HdV-eiJan8VpNXUFV7nckkCVnGkBQ6GMXxKf-SzY4Qw,9966
- yaicli/utils.py,sha256=dchhz1s6XCDTxCT_PdPbSx93fPYcD9FUByHde8xWKMs,4024
- yaicli-0.2.0.dist-info/METADATA,sha256=-xdaYADLKA8CMQunNezFwxUefQbR2qYgQ27LIdsQQmI,34471
- yaicli-0.2.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
- yaicli-0.2.0.dist-info/entry_points.txt,sha256=iMhGm3btBaqrknQoF6WCg5sdx69ZyNSC73tRpCcbcLw,63
- yaicli-0.2.0.dist-info/licenses/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
- yaicli-0.2.0.dist-info/RECORD,,