yaicli 0.3.2__tar.gz → 0.4.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: yaicli
- Version: 0.3.2
+ Version: 0.4.0
  Summary: A simple CLI tool to interact with LLM
  Project-URL: Homepage, https://github.com/belingud/yaicli
  Project-URL: Repository, https://github.com/belingud/yaicli
@@ -213,9 +213,10 @@ Classifier: License :: OSI Approved :: MIT License
  Classifier: Operating System :: OS Independent
  Classifier: Programming Language :: Python :: 3
  Requires-Python: >=3.9
+ Requires-Dist: cohere>=5.15.0
  Requires-Dist: distro>=1.9.0
  Requires-Dist: httpx>=0.28.1
- Requires-Dist: jmespath>=1.0.1
+ Requires-Dist: openai>=1.76.0
  Requires-Dist: prompt-toolkit>=3.0.50
  Requires-Dist: rich>=13.9.4
  Requires-Dist: socksio>=1.0.0
@@ -235,9 +236,7 @@ generate and execute shell commands, or get quick answers without leaving your w
 
  **Supports both standard and deep reasoning models across all major LLM providers.**
 
- <p align="center">
- <img src="https://vhs.charm.sh/vhs-5U1BBjJkTUBReRswsSgIVx.gif" alt="YAICLI Chat Demo" width="85%">
- </p>
+ <a href="https://asciinema.org/a/vyreM0n576GjGL2asjI3QzUIY" target="_blank"><img src="https://asciinema.org/a/vyreM0n576GjGL2asjI3QzUIY.svg" width="85%"/></a>
 
  > [!NOTE]
  > YAICLI is actively developed. While core functionality is stable, some features may evolve in future releases.
@@ -329,10 +328,6 @@ MODEL = gpt-4o
  SHELL_NAME = auto
  OS_NAME = auto
 
- # API paths (usually no need to change for OpenAI compatible APIs)
- COMPLETION_PATH = chat/completions
- ANSWER_PATH = choices[0].message.content
-
  # true: streaming response, false: non-streaming
  STREAM = true
 
@@ -362,29 +357,28 @@ MAX_SAVED_CHATS = 20
 
  ### Configuration Options Reference
 
- | Option              | Description                                 | Default                      | Env Variable            |
- |---------------------|---------------------------------------------|------------------------------|-------------------------|
- | `PROVIDER`          | LLM provider (openai, claude, cohere, etc.) | `openai`                     | `YAI_PROVIDER`          |
- | `BASE_URL`          | API endpoint URL                            | `https://api.openai.com/v1`  | `YAI_BASE_URL`          |
- | `API_KEY`           | Your API key                                | -                            | `YAI_API_KEY`           |
- | `MODEL`             | LLM model to use                            | `gpt-4o`                     | `YAI_MODEL`             |
- | `SHELL_NAME`        | Shell type                                  | `auto`                       | `YAI_SHELL_NAME`        |
- | `OS_NAME`           | Operating system                            | `auto`                       | `YAI_OS_NAME`           |
- | `COMPLETION_PATH`   | API completion path                         | `chat/completions`           | `YAI_COMPLETION_PATH`   |
- | `ANSWER_PATH`       | JSON path for response                      | `choices[0].message.content` | `YAI_ANSWER_PATH`       |
- | `STREAM`            | Enable streaming                            | `true`                       | `YAI_STREAM`            |
- | `TIMEOUT`           | API timeout (seconds)                       | `60`                         | `YAI_TIMEOUT`           |
- | `INTERACTIVE_ROUND` | Interactive mode rounds                     | `25`                         | `YAI_INTERACTIVE_ROUND` |
- | `CODE_THEME`        | Syntax highlighting theme                   | `monokai`                    | `YAI_CODE_THEME`        |
- | `TEMPERATURE`       | Response randomness                         | `0.7`                        | `YAI_TEMPERATURE`       |
- | `TOP_P`             | Top-p sampling                              | `1.0`                        | `YAI_TOP_P`             |
- | `MAX_TOKENS`        | Max response tokens                         | `1024`                       | `YAI_MAX_TOKENS`        |
- | `MAX_HISTORY`       | Max history entries                         | `500`                        | `YAI_MAX_HISTORY`       |
- | `AUTO_SUGGEST`      | Enable history suggestions                  | `true`                       | `YAI_AUTO_SUGGEST`      |
- | `SHOW_REASONING`    | Enable reasoning display                    | `true`                       | `YAI_SHOW_REASONING`    |
- | `JUSTIFY`           | Text alignment                              | `default`                    | `YAI_JUSTIFY`           |
- | `CHAT_HISTORY_DIR`  | Chat history directory                      | `<tempdir>/yaicli/history`   | `YAI_CHAT_HISTORY_DIR`  |
- | `MAX_SAVED_CHATS`   | Max saved chats                             | `20`                         | `YAI_MAX_SAVED_CHATS`   |
+ | Option                | Description                                 | Default                     | Env Variable              |
+ | --------------------- | ------------------------------------------- | --------------------------- | ------------------------- |
+ | `PROVIDER`            | LLM provider (openai, claude, cohere, etc.) | `openai`                    | `YAI_PROVIDER`            |
+ | `BASE_URL`            | API endpoint URL                            | `https://api.openai.com/v1` | `YAI_BASE_URL`            |
+ | `API_KEY`             | Your API key                                | -                           | `YAI_API_KEY`             |
+ | `MODEL`               | LLM model to use                            | `gpt-4o`                    | `YAI_MODEL`               |
+ | `SHELL_NAME`          | Shell type                                  | `auto`                      | `YAI_SHELL_NAME`          |
+ | `OS_NAME`             | Operating system                            | `auto`                      | `YAI_OS_NAME`             |
+ | `STREAM`              | Enable streaming                            | `true`                      | `YAI_STREAM`              |
+ | `TIMEOUT`             | API timeout (seconds)                       | `60`                        | `YAI_TIMEOUT`             |
+ | `INTERACTIVE_ROUND`   | Interactive mode rounds                     | `25`                        | `YAI_INTERACTIVE_ROUND`   |
+ | `CODE_THEME`          | Syntax highlighting theme                   | `monokai`                   | `YAI_CODE_THEME`          |
+ | `TEMPERATURE`         | Response randomness                         | `0.7`                       | `YAI_TEMPERATURE`         |
+ | `TOP_P`               | Top-p sampling                              | `1.0`                       | `YAI_TOP_P`               |
+ | `MAX_TOKENS`          | Max response tokens                         | `1024`                      | `YAI_MAX_TOKENS`          |
+ | `MAX_HISTORY`         | Max history entries                         | `500`                       | `YAI_MAX_HISTORY`         |
+ | `AUTO_SUGGEST`        | Enable history suggestions                  | `true`                      | `YAI_AUTO_SUGGEST`        |
+ | `SHOW_REASONING`      | Enable reasoning display                    | `true`                      | `YAI_SHOW_REASONING`      |
+ | `JUSTIFY`             | Text alignment                              | `default`                   | `YAI_JUSTIFY`             |
+ | `CHAT_HISTORY_DIR`    | Chat history directory                      | `<tempdir>/yaicli/history`  | `YAI_CHAT_HISTORY_DIR`    |
+ | `MAX_SAVED_CHATS`     | Max saved chats                             | `20`                        | `YAI_MAX_SAVED_CHATS`     |
+ | `ROLE_MODIFY_WARNING` | Warn user when modifying role               | `true`                      | `YAI_ROLE_MODIFY_WARNING` |
 
  ### LLM Provider Configuration
 
@@ -393,52 +387,23 @@ other providers.
 
  #### Pre-configured Provider Settings
 
- | Provider                       | BASE_URL                                                  | COMPLETION_PATH    | ANSWER_PATH                  |
- |--------------------------------|-----------------------------------------------------------|--------------------|------------------------------|
- | **OpenAI** (default)           | `https://api.openai.com/v1`                               | `chat/completions` | `choices[0].message.content` |
- | **Claude** (native API)        | `https://api.anthropic.com/v1`                            | `messages`         | `content[0].text`            |
- | **Claude** (OpenAI-compatible) | `https://api.anthropic.com/v1/openai`                     | `chat/completions` | `choices[0].message.content` |
- | **Cohere**                     | `https://api.cohere.com/v2`                               | `chat`             | `message.content[0].text`    |
- | **Google Gemini**              | `https://generativelanguage.googleapis.com/v1beta/openai` | `chat/completions` | `choices[0].message.content` |
+ `provider` is not case sensitive.
+
+ Claude and gemini native api will support soon.
+
+ | Provider                       | BASE_URL                                                  |
+ | ------------------------------ | --------------------------------------------------------- |
+ | **OpenAI** (default)           | `https://api.openai.com/v1`                               |
+ | **Claude** (native API)        | `https://api.anthropic.com/v1`                            |
+ | **Claude** (OpenAI-compatible) | `https://api.anthropic.com/v1/openai`                     |
+ | **Cohere**                     | `https://api.cohere.com`                                  |
+ | **Gemini**                     | `https://generativelanguage.googleapis.com/v1beta/openai` |
 
  > **Note**: Many providers offer OpenAI-compatible endpoints that work with the default settings.
  >
  > - Google Gemini: https://ai.google.dev/gemini-api/docs/openai
  > - Claude: https://docs.anthropic.com/en/api/openai-sdk
 
- #### Custom Provider Configuration Guide
-
- To configure a custom provider:
-
- 1. **Find the API Endpoint**:
-
-    - Check the provider's API documentation for their chat completion endpoint
-
- 2. **Identify the Response Structure**:
-
-    - Look at the JSON response format to find where the text content is located
-
- 3. **Set the Path Expression**:
-    - Use jmespath syntax to specify the path to the text content
-
- **Example**: For Claude's native API, the response looks like:
-
- ```json
- {
-   "content": [
-     {
-       "text": "Hi! My name is Claude.",
-       "type": "text"
-     }
-   ],
-   "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
-   "model": "claude-3-7-sonnet-20250219",
-   "role": "assistant"
- }
- ```
-
- The path to extract the text is: `content.[0].text`
-
  ### Syntax Highlighting Themes
 
  YAICLI supports all Pygments syntax highlighting themes. You can set your preferred theme in the config file:
@@ -449,7 +414,7 @@ CODE_THEME = monokai
 
  Browse available themes at: https://pygments.org/styles/
 
- ![monokai theme example](artwork/monokia.png)
+ ![monokia theme example](artwork/monokia.png)
 
  ## 🚀 Usage
 
@@ -849,12 +814,10 @@ YAICLI is designed with a modular architecture that separates concerns and makes
  ### Dependencies
 
  | Library | Purpose |
- |-----------------------------------------------------------------|----------------------------------------------------|
+ | --------------------------------------------------------------- | -------------------------------------------------- |
  | [Typer](https://typer.tiangolo.com/) | Command-line interface with type hints |
  | [Rich](https://rich.readthedocs.io/) | Terminal formatting and beautiful display |
  | [prompt_toolkit](https://python-prompt-toolkit.readthedocs.io/) | Interactive input with history and auto-completion |
- | [httpx](https://www.python-httpx.org/) | Modern HTTP client with async support |
- | [jmespath](https://jmespath.org/) | JSON data extraction |
 
  ## 👨‍💻 Contributing
 
@@ -11,9 +11,7 @@ generate and execute shell commands, or get quick answers without leaving your w
 
  **Supports both standard and deep reasoning models across all major LLM providers.**
 
- <p align="center">
- <img src="https://vhs.charm.sh/vhs-5U1BBjJkTUBReRswsSgIVx.gif" alt="YAICLI Chat Demo" width="85%">
- </p>
+ <a href="https://asciinema.org/a/vyreM0n576GjGL2asjI3QzUIY" target="_blank"><img src="https://asciinema.org/a/vyreM0n576GjGL2asjI3QzUIY.svg" width="85%"/></a>
 
  > [!NOTE]
  > YAICLI is actively developed. While core functionality is stable, some features may evolve in future releases.
@@ -105,10 +103,6 @@ MODEL = gpt-4o
  SHELL_NAME = auto
  OS_NAME = auto
 
- # API paths (usually no need to change for OpenAI compatible APIs)
- COMPLETION_PATH = chat/completions
- ANSWER_PATH = choices[0].message.content
-
  # true: streaming response, false: non-streaming
  STREAM = true
 
@@ -138,29 +132,28 @@ MAX_SAVED_CHATS = 20
 
  ### Configuration Options Reference
 
- | Option              | Description                                 | Default                      | Env Variable            |
- |---------------------|---------------------------------------------|------------------------------|-------------------------|
- | `PROVIDER`          | LLM provider (openai, claude, cohere, etc.) | `openai`                     | `YAI_PROVIDER`          |
- | `BASE_URL`          | API endpoint URL                            | `https://api.openai.com/v1`  | `YAI_BASE_URL`          |
- | `API_KEY`           | Your API key                                | -                            | `YAI_API_KEY`           |
- | `MODEL`             | LLM model to use                            | `gpt-4o`                     | `YAI_MODEL`             |
- | `SHELL_NAME`        | Shell type                                  | `auto`                       | `YAI_SHELL_NAME`        |
- | `OS_NAME`           | Operating system                            | `auto`                       | `YAI_OS_NAME`           |
- | `COMPLETION_PATH`   | API completion path                         | `chat/completions`           | `YAI_COMPLETION_PATH`   |
- | `ANSWER_PATH`       | JSON path for response                      | `choices[0].message.content` | `YAI_ANSWER_PATH`       |
- | `STREAM`            | Enable streaming                            | `true`                       | `YAI_STREAM`            |
- | `TIMEOUT`           | API timeout (seconds)                       | `60`                         | `YAI_TIMEOUT`           |
- | `INTERACTIVE_ROUND` | Interactive mode rounds                     | `25`                         | `YAI_INTERACTIVE_ROUND` |
- | `CODE_THEME`        | Syntax highlighting theme                   | `monokai`                    | `YAI_CODE_THEME`        |
- | `TEMPERATURE`       | Response randomness                         | `0.7`                        | `YAI_TEMPERATURE`       |
- | `TOP_P`             | Top-p sampling                              | `1.0`                        | `YAI_TOP_P`             |
- | `MAX_TOKENS`        | Max response tokens                         | `1024`                       | `YAI_MAX_TOKENS`        |
- | `MAX_HISTORY`       | Max history entries                         | `500`                        | `YAI_MAX_HISTORY`       |
- | `AUTO_SUGGEST`      | Enable history suggestions                  | `true`                       | `YAI_AUTO_SUGGEST`      |
- | `SHOW_REASONING`    | Enable reasoning display                    | `true`                       | `YAI_SHOW_REASONING`    |
- | `JUSTIFY`           | Text alignment                              | `default`                    | `YAI_JUSTIFY`           |
- | `CHAT_HISTORY_DIR`  | Chat history directory                      | `<tempdir>/yaicli/history`   | `YAI_CHAT_HISTORY_DIR`  |
- | `MAX_SAVED_CHATS`   | Max saved chats                             | `20`                         | `YAI_MAX_SAVED_CHATS`   |
+ | Option                | Description                                 | Default                     | Env Variable              |
+ | --------------------- | ------------------------------------------- | --------------------------- | ------------------------- |
+ | `PROVIDER`            | LLM provider (openai, claude, cohere, etc.) | `openai`                    | `YAI_PROVIDER`            |
+ | `BASE_URL`            | API endpoint URL                            | `https://api.openai.com/v1` | `YAI_BASE_URL`            |
+ | `API_KEY`             | Your API key                                | -                           | `YAI_API_KEY`             |
+ | `MODEL`               | LLM model to use                            | `gpt-4o`                    | `YAI_MODEL`               |
+ | `SHELL_NAME`          | Shell type                                  | `auto`                      | `YAI_SHELL_NAME`          |
+ | `OS_NAME`             | Operating system                            | `auto`                      | `YAI_OS_NAME`             |
+ | `STREAM`              | Enable streaming                            | `true`                      | `YAI_STREAM`              |
+ | `TIMEOUT`             | API timeout (seconds)                       | `60`                        | `YAI_TIMEOUT`             |
+ | `INTERACTIVE_ROUND`   | Interactive mode rounds                     | `25`                        | `YAI_INTERACTIVE_ROUND`   |
+ | `CODE_THEME`          | Syntax highlighting theme                   | `monokai`                   | `YAI_CODE_THEME`          |
+ | `TEMPERATURE`         | Response randomness                         | `0.7`                       | `YAI_TEMPERATURE`         |
+ | `TOP_P`               | Top-p sampling                              | `1.0`                       | `YAI_TOP_P`               |
+ | `MAX_TOKENS`          | Max response tokens                         | `1024`                      | `YAI_MAX_TOKENS`          |
+ | `MAX_HISTORY`         | Max history entries                         | `500`                       | `YAI_MAX_HISTORY`         |
+ | `AUTO_SUGGEST`        | Enable history suggestions                  | `true`                      | `YAI_AUTO_SUGGEST`        |
+ | `SHOW_REASONING`      | Enable reasoning display                    | `true`                      | `YAI_SHOW_REASONING`      |
+ | `JUSTIFY`             | Text alignment                              | `default`                   | `YAI_JUSTIFY`             |
+ | `CHAT_HISTORY_DIR`    | Chat history directory                      | `<tempdir>/yaicli/history`  | `YAI_CHAT_HISTORY_DIR`    |
+ | `MAX_SAVED_CHATS`     | Max saved chats                             | `20`                        | `YAI_MAX_SAVED_CHATS`     |
+ | `ROLE_MODIFY_WARNING` | Warn user when modifying role               | `true`                      | `YAI_ROLE_MODIFY_WARNING` |
 
  ### LLM Provider Configuration
 
@@ -169,52 +162,23 @@ other providers.
 
  #### Pre-configured Provider Settings
 
- | Provider                       | BASE_URL                                                  | COMPLETION_PATH    | ANSWER_PATH                  |
- |--------------------------------|-----------------------------------------------------------|--------------------|------------------------------|
- | **OpenAI** (default)           | `https://api.openai.com/v1`                               | `chat/completions` | `choices[0].message.content` |
- | **Claude** (native API)        | `https://api.anthropic.com/v1`                            | `messages`         | `content[0].text`            |
- | **Claude** (OpenAI-compatible) | `https://api.anthropic.com/v1/openai`                     | `chat/completions` | `choices[0].message.content` |
- | **Cohere**                     | `https://api.cohere.com/v2`                               | `chat`             | `message.content[0].text`    |
- | **Google Gemini**              | `https://generativelanguage.googleapis.com/v1beta/openai` | `chat/completions` | `choices[0].message.content` |
+ `provider` is not case sensitive.
+
+ Claude and gemini native api will support soon.
+
+ | Provider                       | BASE_URL                                                  |
+ | ------------------------------ | --------------------------------------------------------- |
+ | **OpenAI** (default)           | `https://api.openai.com/v1`                               |
+ | **Claude** (native API)        | `https://api.anthropic.com/v1`                            |
+ | **Claude** (OpenAI-compatible) | `https://api.anthropic.com/v1/openai`                     |
+ | **Cohere**                     | `https://api.cohere.com`                                  |
+ | **Gemini**                     | `https://generativelanguage.googleapis.com/v1beta/openai` |
 
  > **Note**: Many providers offer OpenAI-compatible endpoints that work with the default settings.
  >
  > - Google Gemini: https://ai.google.dev/gemini-api/docs/openai
  > - Claude: https://docs.anthropic.com/en/api/openai-sdk
 
- #### Custom Provider Configuration Guide
-
- To configure a custom provider:
-
- 1. **Find the API Endpoint**:
-
-    - Check the provider's API documentation for their chat completion endpoint
-
- 2. **Identify the Response Structure**:
-
-    - Look at the JSON response format to find where the text content is located
-
- 3. **Set the Path Expression**:
-    - Use jmespath syntax to specify the path to the text content
-
- **Example**: For Claude's native API, the response looks like:
-
- ```json
- {
-   "content": [
-     {
-       "text": "Hi! My name is Claude.",
-       "type": "text"
-     }
-   ],
-   "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
-   "model": "claude-3-7-sonnet-20250219",
-   "role": "assistant"
- }
- ```
-
- The path to extract the text is: `content.[0].text`
-
  ### Syntax Highlighting Themes
 
  YAICLI supports all Pygments syntax highlighting themes. You can set your preferred theme in the config file:
@@ -225,7 +189,7 @@ CODE_THEME = monokai
 
  Browse available themes at: https://pygments.org/styles/
 
- ![monokai theme example](artwork/monokia.png)
+ ![monokia theme example](artwork/monokia.png)
 
  ## 🚀 Usage
 
@@ -625,12 +589,10 @@ YAICLI is designed with a modular architecture that separates concerns and makes
  ### Dependencies
 
  | Library | Purpose |
- |-----------------------------------------------------------------|----------------------------------------------------|
+ | --------------------------------------------------------------- | -------------------------------------------------- |
  | [Typer](https://typer.tiangolo.com/) | Command-line interface with type hints |
  | [Rich](https://rich.readthedocs.io/) | Terminal formatting and beautiful display |
  | [prompt_toolkit](https://python-prompt-toolkit.readthedocs.io/) | Interactive input with history and auto-completion |
- | [httpx](https://www.python-httpx.org/) | Modern HTTP client with async support |
- | [jmespath](https://jmespath.org/) | JSON data extraction |
 
  ## 👨‍💻 Contributing
 
@@ -1,6 +1,6 @@
  [project]
  name = "yaicli"
- version = "0.3.2"
+ version = "0.4.0"
  description = "A simple CLI tool to interact with LLM"
  authors = [{ name = "belingud", email = "im.victor@qq.com" }]
  readme = "README.md"
@@ -30,9 +30,10 @@ keywords = [
    "interact with llms",
  ]
  dependencies = [
+   "cohere>=5.15.0",
    "distro>=1.9.0",
    "httpx>=0.28.1",
-   "jmespath>=1.0.1",
+   "openai>=1.76.0",
    "prompt-toolkit>=3.0.50",
    "rich>=13.9.4",
    "socksio>=1.0.0",
@@ -16,7 +16,6 @@ from rich.padding import Padding
  from rich.panel import Panel
  from rich.prompt import Prompt
 
- from yaicli.api import ApiClient
  from yaicli.chat_manager import ChatFileInfo, ChatManager, FileChatManager
  from yaicli.config import CONFIG_PATH, Config, cfg
  from yaicli.console import get_console
@@ -40,6 +39,7 @@ from yaicli.const import (
  )
  from yaicli.history import LimitedFileHistory
  from yaicli.printer import Printer
+ from yaicli.providers import BaseClient, create_api_client
  from yaicli.roles import RoleManager
  from yaicli.utils import detect_os, detect_shell, filter_command
 
@@ -51,7 +51,7 @@ class CLI:
  self,
  verbose: bool = False,
  stdin: Optional[str] = None,
- api_client: Optional[ApiClient] = None,
+ api_client: Optional[BaseClient] = None,
  printer: Optional[Printer] = None,
  chat_manager: Optional[ChatManager] = None,
  role: Optional[str] = None,
@@ -64,9 +64,10 @@ class CLI:
  self.config: Config = cfg
  self.current_mode: str = TEMP_MODE
  self.role: str = role or DefaultRoleNames.DEFAULT.value
+ self.init_role: str = self.role # --role can specify a role when enter interactive chat
 
  # Initialize role manager
- self.role_manager = RoleManager()
+ self.role_manager = RoleManager() # Singleton
 
  # Validate role
  if not self.role_manager.role_exists(self.role):
@@ -102,7 +103,7 @@ class CLI:
  self.console.print(f"Current role: {self.role}")
  self.console.print(Markdown("---", code_theme=self.config.get("CODE_THEME", DEFAULT_CODE_THEME)))
 
- self.api_client = api_client or ApiClient(self.config, self.console, self.verbose)
+ self.api_client = api_client or create_api_client(self.config, self.console, self.verbose)
  self.printer = printer or Printer(self.config, self.console, self.verbose, markdown=True)
 
  _origin_stderr = None
@@ -344,12 +345,11 @@ class CLI:
  def get_system_prompt(self) -> str:
  """Get the system prompt based on current role and mode"""
  # Use the role manager to get the system prompt
- self.console.print(f"Using role: {self.role}")
  return self.role_manager.get_system_prompt(self.role)
 
  def _build_messages(self, user_input: str) -> List[dict]:
  """Build message list for LLM API"""
- # Create the message list
+ # Create the message list with system prompt
  messages = [{"role": "system", "content": self.get_system_prompt()}]
 
  # Add previous conversation if available
@@ -420,7 +420,7 @@ class CLI:
  ████ ███████ ██ ██ ██ ██
  ██ ██ ██ ██ ██ ██ ██
  ██ ██ ██ ██ ██████ ███████ ██
- """,
+  """,
  style="bold cyan",
  )
  self.console.print("Welcome to YAICLI!", style="bold")
@@ -500,7 +500,7 @@ class CLI:
  @self.bindings.add(Keys.ControlI) # TAB
  def _(event: KeyPressEvent) -> None:
  self.current_mode = EXEC_MODE if self.current_mode == CHAT_MODE else CHAT_MODE
- self.role = DefaultRoleNames.SHELL if self.current_mode == EXEC_MODE else DefaultRoleNames.DEFAULT
+ self.role = DefaultRoleNames.SHELL if self.current_mode == EXEC_MODE else self.init_role
 
  def _run_once(self, input: str, shell: bool) -> None:
  """Run a single command (non-interactive)."""
@@ -13,6 +13,7 @@ from yaicli.const import (
  DEFAULT_CONFIG_MAP,
  DEFAULT_JUSTIFY,
  DEFAULT_MAX_SAVED_CHATS,
+ DEFAULT_ROLE_MODIFY_WARNING,
  )
  from yaicli.utils import str2bool
 
@@ -78,6 +79,10 @@ class Config(dict):
  f.write(f"\nMAX_SAVED_CHATS={DEFAULT_MAX_SAVED_CHATS}\n")
  if "JUSTIFY" not in config_content.strip():
  f.write(f"\nJUSTIFY={DEFAULT_JUSTIFY}\n")
+ if "ROLE_MODIFY_WARNING" not in config_content.strip():
+ f.write(
+ f"\n# Set to false to disable warnings about modified built-in roles\nROLE_MODIFY_WARNING={DEFAULT_ROLE_MODIFY_WARNING}\n"
+ )
 
  def _load_from_file(self) -> None:
  """Load configuration from the config file.
@@ -160,7 +165,7 @@ class Config(dict):
  self[key] = converted_value
 
 
- @lru_cache(maxsize=1)
+ @lru_cache(1)
  def get_config() -> Config:
  """Get the configuration singleton"""
  return Config()
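
The `@lru_cache(maxsize=1)` → `@lru_cache(1)` change above is behavior-preserving: `maxsize` is the first positional parameter of `functools.lru_cache`, so `get_config()` still returns one cached `Config` instance. A minimal standalone sketch of this singleton pattern (the stub `Config` below stands in for yaicli's real class):

```python
from functools import lru_cache


class Config(dict):
    """Stub standing in for yaicli's Config, which is also a dict subclass."""


@lru_cache(1)  # positional maxsize=1, equivalent to @lru_cache(maxsize=1)
def get_config() -> Config:
    """Every call after the first returns the same cached instance."""
    return Config()


print(get_config() is get_config())  # → True
```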
@@ -1,10 +1,12 @@
  from enum import StrEnum
  from pathlib import Path
  from tempfile import gettempdir
- from typing import Any
+ from typing import Any, Literal
 
  from rich.console import JustifyMethod
 
+ BOOL_STR = Literal["true", "false", "yes", "no", "y", "n", "1", "0", "on", "off"]
+
 
  class JustifyEnum(StrEnum):
  DEFAULT = "default"
@@ -33,25 +35,24 @@ ROLES_DIR = CONFIG_PATH.parent / "roles"
 
  # Default configuration values
  DEFAULT_CODE_THEME = "monokai"
- DEFAULT_COMPLETION_PATH = "chat/completions"
- DEFAULT_ANSWER_PATH = "choices[0].message.content"
  DEFAULT_PROVIDER = "openai"
  DEFAULT_BASE_URL = "https://api.openai.com/v1"
  DEFAULT_MODEL = "gpt-4o"
  DEFAULT_SHELL_NAME = "auto"
  DEFAULT_OS_NAME = "auto"
- DEFAULT_STREAM = "true"
+ DEFAULT_STREAM: BOOL_STR = "true"
  DEFAULT_TEMPERATURE: float = 0.7
  DEFAULT_TOP_P: float = 1.0
  DEFAULT_MAX_TOKENS: int = 1024
  DEFAULT_MAX_HISTORY: int = 500
- DEFAULT_AUTO_SUGGEST = "true"
- DEFAULT_SHOW_REASONING = "true"
+ DEFAULT_AUTO_SUGGEST: BOOL_STR = "true"
+ DEFAULT_SHOW_REASONING: BOOL_STR = "true"
  DEFAULT_TIMEOUT: int = 60
  DEFAULT_INTERACTIVE_ROUND: int = 25
- DEFAULT_CHAT_HISTORY_DIR = Path(gettempdir()) / "yaicli/chats"
+ DEFAULT_CHAT_HISTORY_DIR: Path = Path(gettempdir()) / "yaicli/chats"
  DEFAULT_MAX_SAVED_CHATS = 20
  DEFAULT_JUSTIFY: JustifyMethod = "default"
+ DEFAULT_ROLE_MODIFY_WARNING: BOOL_STR = "true"
 
 
  class EventTypeEnum(StrEnum):
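
The new `BOOL_STR` Literal constrains these string defaults to values that read naturally as booleans. A sketch of how such values convert to `bool` (the `str2bool` below is illustrative; yaicli's actual converter lives in `yaicli.utils` and may differ):

```python
from typing import Literal

# Same alias as introduced in const.py
BOOL_STR = Literal["true", "false", "yes", "no", "y", "n", "1", "0", "on", "off"]

# Hypothetical truthy subset of the BOOL_STR values
TRUTHY = {"true", "yes", "y", "1", "on"}


def str2bool(value: str) -> bool:
    """Illustrative converter: case-insensitive match against the truthy values."""
    return value.strip().lower() in TRUTHY


print(str2bool("true"), str2bool("off"))  # → True False
```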
@@ -64,12 +65,11 @@ class EventTypeEnum(StrEnum):
  FINISH = "finish"
 
 
- SHELL_PROMPT = """Your are a Shell Command Generator named YAICLI.
- Generate a command EXCLUSIVELY for {_os} OS with {_shell} shell.
- If details are missing, offer the most logical solution.
- Ensure the output is a valid shell command.
- Combine multiple steps with `&&` when possible.
- Supply plain text only, avoiding Markdown formatting."""
+ SHELL_PROMPT = """You are YAICLI, a shell command generator.
+ The context conversation may contain other types of messages,
+ but you should only respond with a single valid {_shell} shell command for {_os}.
+ Do not include any explanations, comments, or formatting — only the command as plain text, avoiding Markdown formatting.
+ """
 
  DEFAULT_PROMPT = """
  You are YAICLI, a system management and programing assistant,
@@ -113,9 +113,6 @@ DEFAULT_CONFIG_MAP = {
  # System detection hints
  "SHELL_NAME": {"value": DEFAULT_SHELL_NAME, "env_key": "YAI_SHELL_NAME", "type": str},
  "OS_NAME": {"value": DEFAULT_OS_NAME, "env_key": "YAI_OS_NAME", "type": str},
- # API paths (usually no need to change for OpenAI compatible APIs)
- "COMPLETION_PATH": {"value": DEFAULT_COMPLETION_PATH, "env_key": "YAI_COMPLETION_PATH", "type": str},
- "ANSWER_PATH": {"value": DEFAULT_ANSWER_PATH, "env_key": "YAI_ANSWER_PATH", "type": str},
  # API call parameters
  "STREAM": {"value": DEFAULT_STREAM, "env_key": "YAI_STREAM", "type": bool},
  "TEMPERATURE": {"value": DEFAULT_TEMPERATURE, "env_key": "YAI_TEMPERATURE", "type": float},
@@ -136,6 +133,8 @@ DEFAULT_CONFIG_MAP = {
  # Chat history settings
  "CHAT_HISTORY_DIR": {"value": DEFAULT_CHAT_HISTORY_DIR, "env_key": "YAI_CHAT_HISTORY_DIR", "type": str},
  "MAX_SAVED_CHATS": {"value": DEFAULT_MAX_SAVED_CHATS, "env_key": "YAI_MAX_SAVED_CHATS", "type": int},
+ # Role settings
+ "ROLE_MODIFY_WARNING": {"value": DEFAULT_ROLE_MODIFY_WARNING, "env_key": "YAI_ROLE_MODIFY_WARNING", "type": bool},
  }
 
  DEFAULT_CONFIG_INI = f"""[core]
@@ -148,10 +147,6 @@ MODEL={DEFAULT_CONFIG_MAP["MODEL"]["value"]}
  SHELL_NAME={DEFAULT_CONFIG_MAP["SHELL_NAME"]["value"]}
  OS_NAME={DEFAULT_CONFIG_MAP["OS_NAME"]["value"]}
 
- # API paths (usually no need to change for OpenAI compatible APIs)
- COMPLETION_PATH={DEFAULT_CONFIG_MAP["COMPLETION_PATH"]["value"]}
- ANSWER_PATH={DEFAULT_CONFIG_MAP["ANSWER_PATH"]["value"]}
-
  # true: streaming response, false: non-streaming
  STREAM={DEFAULT_CONFIG_MAP["STREAM"]["value"]}
 
@@ -177,4 +172,8 @@ JUSTIFY={DEFAULT_CONFIG_MAP["JUSTIFY"]["value"]}
  # Chat history settings
  CHAT_HISTORY_DIR={DEFAULT_CONFIG_MAP["CHAT_HISTORY_DIR"]["value"]}
  MAX_SAVED_CHATS={DEFAULT_CONFIG_MAP["MAX_SAVED_CHATS"]["value"]}
+
+ # Role settings
+ # Set to false to disable warnings about modified built-in roles
+ ROLE_MODIFY_WARNING={DEFAULT_CONFIG_MAP["ROLE_MODIFY_WARNING"]["value"]}
  """
@@ -0,0 +1,34 @@
+ from yaicli.const import DEFAULT_PROVIDER
+ from yaicli.providers.base import BaseClient
+ from yaicli.providers.cohere import CohereClient
+ from yaicli.providers.openai import OpenAIClient
+
+
+ def create_api_client(config, console, verbose):
+     """Factory function to create the appropriate API client based on provider.
+
+     Args:
+         config: The configuration dictionary
+         console: The rich console for output
+         verbose: Whether to enable verbose output
+
+     Returns:
+         An instance of the appropriate ApiClient implementation
+     """
+     provider = config.get("PROVIDER", DEFAULT_PROVIDER).lower()
+
+     if provider == "openai":
+         return OpenAIClient(config, console, verbose)
+     elif provider == "cohere":
+         return CohereClient(config, console, verbose)
+     # elif provider == "google":
+     #     return GoogleApiClient(config, console, verbose)
+     # elif provider == "claude":
+     #     return ClaudeApiClient(config, console, verbose)
+     else:
+         # Fallback to openai client
+         console.print(f"Using generic HTTP client for provider: {provider}", style="yellow")
+         return OpenAIClient(config, console, verbose)
+
+
+ __all__ = ["BaseClient", "OpenAIClient", "CohereClient", "create_api_client"]
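
The dispatch in the new `yaicli/providers` factory can be exercised in isolation; the sketch below reproduces its case-insensitive provider selection with stub clients (the stub classes and plain-dict config are illustrative stand-ins, not yaicli's real implementations):

```python
class BaseClient:
    """Stub standing in for yaicli.providers.base.BaseClient."""

    def __init__(self, config: dict):
        self.config = config


class OpenAIClient(BaseClient):
    pass


class CohereClient(BaseClient):
    pass


def create_api_client(config: dict) -> BaseClient:
    # Mirror the factory: lowercase PROVIDER, dispatch by name, and fall
    # back to the OpenAI-compatible client for unknown providers.
    provider = config.get("PROVIDER", "openai").lower()
    if provider == "cohere":
        return CohereClient(config)
    return OpenAIClient(config)


print(type(create_api_client({"PROVIDER": "Cohere"})).__name__)  # → CohereClient
```

Because the lookup lowercases the value first, `PROVIDER = Cohere` and `PROVIDER = cohere` select the same client, matching the README's "`provider` is not case sensitive" note.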