chat-console 0.3.995__tar.gz → 0.4.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {chat_console-0.3.995/chat_console.egg-info → chat_console-0.4.0}/PKG-INFO +24 -2
- {chat_console-0.3.995 → chat_console-0.4.0}/README.md +23 -1
- {chat_console-0.3.995 → chat_console-0.4.0}/app/__init__.py +1 -1
- {chat_console-0.3.995 → chat_console-0.4.0}/app/api/base.py +4 -4
- {chat_console-0.3.995 → chat_console-0.4.0}/app/api/openai.py +106 -36
- {chat_console-0.3.995 → chat_console-0.4.0}/app/config.py +25 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/utils.py +38 -16
- {chat_console-0.3.995 → chat_console-0.4.0/chat_console.egg-info}/PKG-INFO +24 -2
- {chat_console-0.3.995 → chat_console-0.4.0}/LICENSE +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/api/__init__.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/api/anthropic.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/api/ollama.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/database.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/main.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/models.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/ui/__init__.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/ui/chat_interface.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/ui/chat_list.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/ui/model_browser.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/ui/model_selector.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/ui/search.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/app/ui/styles.py +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/chat_console.egg-info/SOURCES.txt +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/chat_console.egg-info/dependency_links.txt +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/chat_console.egg-info/entry_points.txt +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/chat_console.egg-info/requires.txt +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/chat_console.egg-info/top_level.txt +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/setup.cfg +0 -0
- {chat_console-0.3.995 → chat_console-0.4.0}/setup.py +0 -0
--- chat_console-0.3.995/chat_console.egg-info/PKG-INFO
+++ chat_console-0.4.0/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: chat-console
-Version: 0.3.995
+Version: 0.4.0
 Summary: A command-line interface for chatting with LLMs, storing chats and (future) rag interactions
 Home-page: https://github.com/wazacraftrfid/chat-console
 Author: Johnathan Greenaway
@@ -28,7 +28,8 @@ Dynamic: requires-dist
 Dynamic: requires-python
 Dynamic: summary
 
-
+
+# Chat CLI
 
 A comprehensive command-line interface for chatting with various AI language models. This application allows you to interact with different LLM providers through an intuitive terminal-based interface.
 
@@ -37,6 +38,7 @@ A comprehensive command-line interface for chatting with various AI language mod
 - Interactive terminal UI with Textual library
 - Support for multiple AI models:
   - OpenAI models (GPT-3.5, GPT-4)
+  - OpenAI reasoning models (o1, o1-mini, o3, o3-mini, o4-mini)
   - Anthropic models (Claude 3 Opus, Sonnet, Haiku)
 - Conversation history with search functionality
 - Customizable response styles (concise, detailed, technical, friendly)
@@ -71,6 +73,26 @@ Run the application:
 chat-cli
 ```
 
+### Testing Reasoning Models
+
+To test the OpenAI reasoning models implementation, you can use the included test script:
+```
+./test_reasoning.py
+```
+
+This script will test both completion and streaming with the available reasoning models.
+
+### About OpenAI Reasoning Models
+
+OpenAI's reasoning models (o1, o3, o4-mini, etc.) are LLMs trained with reinforcement learning to perform reasoning. These models:
+
+- Think before they answer, producing a long internal chain of thought
+- Excel in complex problem solving, coding, scientific reasoning, and multi-step planning
+- Use "reasoning tokens" to work through problems step by step before providing a response
+- Support different reasoning effort levels (low, medium, high)
+
+The implementation in this CLI supports both standard completions and streaming with these models.
+
 ### Keyboard Shortcuts
 
 - `q` - Quit the application
--- chat_console-0.3.995/README.md
+++ chat_console-0.4.0/README.md
@@ -1,4 +1,5 @@
-
+
+# Chat CLI
 
 A comprehensive command-line interface for chatting with various AI language models. This application allows you to interact with different LLM providers through an intuitive terminal-based interface.
 
@@ -7,6 +8,7 @@ A comprehensive command-line interface for chatting with various AI language mod
 - Interactive terminal UI with Textual library
 - Support for multiple AI models:
   - OpenAI models (GPT-3.5, GPT-4)
+  - OpenAI reasoning models (o1, o1-mini, o3, o3-mini, o4-mini)
   - Anthropic models (Claude 3 Opus, Sonnet, Haiku)
 - Conversation history with search functionality
 - Customizable response styles (concise, detailed, technical, friendly)
@@ -41,6 +43,26 @@ Run the application:
 chat-cli
 ```
 
+### Testing Reasoning Models
+
+To test the OpenAI reasoning models implementation, you can use the included test script:
+```
+./test_reasoning.py
+```
+
+This script will test both completion and streaming with the available reasoning models.
+
+### About OpenAI Reasoning Models
+
+OpenAI's reasoning models (o1, o3, o4-mini, etc.) are LLMs trained with reinforcement learning to perform reasoning. These models:
+
+- Think before they answer, producing a long internal chain of thought
+- Excel in complex problem solving, coding, scientific reasoning, and multi-step planning
+- Use "reasoning tokens" to work through problems step by step before providing a response
+- Support different reasoning effort levels (low, medium, high)
+
+The implementation in this CLI supports both standard completions and streaming with these models.
+
 ### Keyboard Shortcuts
 
 - `q` - Quit the application
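Since the new README copy mentions effort levels only in passing, here is a minimal sketch of what they look like through the OpenAI Python SDK's Responses API, which the 0.4.0 code paths below target. The model choice and prompt are illustrative, and the SDK surface should be checked against the installed `openai` version:

```
import asyncio
from openai import AsyncOpenAI  # pip install openai

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

async def main() -> None:
    # "low", "medium", and "high" trade latency and cost for more deliberation.
    response = await client.responses.create(
        model="o4-mini",
        input=[{"role": "user", "content": "How many primes are there below 50?"}],
        reasoning={"effort": "medium"},
    )
    print(response.output_text)

asyncio.run(main())
```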
--- chat_console-0.3.995/app/api/base.py
+++ chat_console-0.4.0/app/api/base.py
@@ -82,9 +82,9 @@ class BaseModelClient(ABC):
 
         # If we couldn't get the provider from the UI, infer it from the model name
         # Check for common OpenAI model patterns or prefixes
-        if (model_name_lower.startswith(("gpt-", "text-", "davinci")) or
+        if (model_name_lower.startswith(("gpt-", "text-", "davinci", "o1", "o3", "o4")) or
             "gpt" in model_name_lower or
-            model_name_lower in ["04-mini", "04", "04-turbo", "04-vision"]):
+            model_name_lower in ["04-mini", "04", "04-turbo", "04-vision", "o1", "o3", "o4-mini"]):
             provider = "openai"
             logger.info(f"Identified {model_name} as an OpenAI model")
         # Then check for Anthropic models - these should ALWAYS use Anthropic client
@@ -162,9 +162,9 @@ class BaseModelClient(ABC):
         # If we couldn't get the provider from the UI, infer it from the model name
         if not provider:
             # Check for common OpenAI model patterns or prefixes
-            if (model_name_lower.startswith(("gpt-", "text-", "davinci")) or
+            if (model_name_lower.startswith(("gpt-", "text-", "davinci", "o1", "o3", "o4")) or
                 "gpt" in model_name_lower or
-                model_name_lower in ["04-mini", "04", "04-turbo", "04-vision"]):
+                model_name_lower in ["04-mini", "04", "04-turbo", "04-vision", "o1", "o3", "o4-mini"]):
                 if not AVAILABLE_PROVIDERS["openai"]:
                     raise Exception("OpenAI API key not found. Please set OPENAI_API_KEY environment variable.")
                 provider = "openai"
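Worth noting: the new tuple prefixes already cover every o-series entry added to the membership list ("o1", "o3", and "o4-mini" all begin with "o1"/"o3"/"o4"), so the extra list entries are redundant. A standalone sketch of the check as shipped (the function name is ours):

```
def looks_like_openai_model(name: str) -> bool:
    # Mirrors the inference in base.py: prefix match, substring match,
    # or explicit membership (now redundant with the o-series prefixes).
    n = name.lower()
    return (n.startswith(("gpt-", "text-", "davinci", "o1", "o3", "o4"))
            or "gpt" in n
            or n in ["04-mini", "04", "04-turbo", "04-vision", "o1", "o3", "o4-mini"])

assert looks_like_openai_model("o3-mini")            # o-series prefix
assert looks_like_openai_model("o1-2024-12-17")      # dated variants match too
assert not looks_like_openai_model("claude-3-opus-20240229")
```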
--- chat_console-0.3.995/app/api/openai.py
+++ chat_console-0.4.0/app/api/openai.py
@@ -53,20 +53,38 @@ class OpenAIClient(BaseModelClient):
         """Generate a text completion using OpenAI"""
         processed_messages = self._prepare_messages(messages, style)
 
-        # Create parameters dict
-        params = {
-            "model": model,
-            "messages": processed_messages,
-            "temperature": temperature,
-        }
-
-        # Only add max_tokens if it's not None
-        if max_tokens is not None:
-            params["max_tokens"] = max_tokens
-
-        response = await self.client.chat.completions.create(**params)
+        # Check if this is a reasoning model (o-series)
+        is_reasoning_model = model.startswith(("o1", "o3", "o4")) or model in ["o1", "o3", "o4-mini"]
 
-        return response.choices[0].message.content
+        # Use the Responses API for reasoning models
+        if is_reasoning_model:
+            # Create parameters dict for the Responses API
+            params = {
+                "model": model,
+                "input": processed_messages,
+                "reasoning": {"effort": "medium"},  # Default to medium effort
+            }
+
+            # Only add max_tokens if it's not None
+            if max_tokens is not None:
+                params["max_output_tokens"] = max_tokens
+
+            response = await self.client.responses.create(**params)
+            return response.output_text
+        else:
+            # Use the Chat Completions API for non-reasoning models
+            params = {
+                "model": model,
+                "messages": processed_messages,
+                "temperature": temperature,
+            }
+
+            # Only add max_tokens if it's not None
+            if max_tokens is not None:
+                params["max_tokens"] = max_tokens
+
+            response = await self.client.chat.completions.create(**params)
+            return response.choices[0].message.content
 
     async def generate_stream(self, messages: List[Dict[str, str]],
                               model: str,
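Two things stand out in this hunk. First, the parameter names change on the Responses path: `messages` becomes `input` and `max_tokens` becomes `max_output_tokens`. Second, the same o-series test is now repeated in base.py, openai.py, and utils.py; a shared predicate would keep the model list in one place. One possible consolidation (our sketch, not shipped code):

```
REASONING_PREFIXES = ("o1", "o3", "o4")

def is_reasoning_model(model: str) -> bool:
    # The explicit list in the shipped check (["o1", "o3", "o4-mini"]) is
    # already covered by the prefix test, so the prefix alone is equivalent.
    return model.startswith(REASONING_PREFIXES)
```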
@@ -83,6 +101,9 @@ class OpenAIClient(BaseModelClient):
 
         processed_messages = self._prepare_messages(messages, style)
 
+        # Check if this is a reasoning model (o-series)
+        is_reasoning_model = model.startswith(("o1", "o3", "o4")) or model in ["o1", "o3", "o4-mini"]
+
         try:
             debug_log(f"OpenAI: preparing {len(processed_messages)} messages for stream")
 
@@ -119,20 +140,37 @@
 
             while retry_count <= max_retries:
                 try:
-                    # Create parameters dict
-                    params = {
-                        "model": model,
-                        "messages": api_messages,
-                        "temperature": temperature,
-                        "stream": True,
-                    }
-
-                    # Only add max_tokens if it's not None
-                    if max_tokens is not None:
-                        params["max_tokens"] = max_tokens
-
-                    debug_log(f"OpenAI: creating chat completion stream with params: {params}")
-                    stream = await self.client.chat.completions.create(**params)
+                    # Create parameters dict based on model type
+                    if is_reasoning_model:
+                        # Use the Responses API for reasoning models
+                        params = {
+                            "model": model,
+                            "input": api_messages,
+                            "reasoning": {"effort": "medium"},  # Default to medium effort
+                            "stream": True,
+                        }
+
+                        # Only add max_tokens if it's not None
+                        if max_tokens is not None:
+                            params["max_output_tokens"] = max_tokens
+
+                        debug_log(f"OpenAI: creating reasoning model stream with params: {params}")
+                        stream = await self.client.responses.create(**params)
+                    else:
+                        # Use the Chat Completions API for non-reasoning models
+                        params = {
+                            "model": model,
+                            "messages": api_messages,
+                            "temperature": temperature,
+                            "stream": True,
+                        }
+
+                        # Only add max_tokens if it's not None
+                        if max_tokens is not None:
+                            params["max_tokens"] = max_tokens
+
+                        debug_log(f"OpenAI: creating chat completion stream with params: {params}")
+                        stream = await self.client.chat.completions.create(**params)
 
                     # Store the stream for potential cancellation
                     self._active_stream = stream
@@ -157,17 +195,28 @@
 
                     chunk_count += 1
                     try:
-                        if chunk.choices and hasattr(chunk.choices[0], 'delta') and hasattr(chunk.choices[0].delta, 'content'):
-                            content = chunk.choices[0].delta.content
-                            if content is not None:
-                                # Ensure we're returning a string
-                                text = str(content)
-                                debug_log(f"OpenAI: yielding chunk {chunk_count} of length: {len(text)}")
+                        # Handle different response formats based on model type
+                        if is_reasoning_model:
+                            # For reasoning models using the Responses API
+                            if hasattr(chunk, 'output_text') and chunk.output_text is not None:
+                                text = str(chunk.output_text)
+                                debug_log(f"OpenAI reasoning: yielding chunk {chunk_count} of length: {len(text)}")
                                 yield text
                             else:
-                                debug_log(f"OpenAI: skipping None content chunk {chunk_count}")
+                                debug_log(f"OpenAI reasoning: skipping chunk {chunk_count} with missing content")
                         else:
-                            debug_log(f"OpenAI: skipping chunk {chunk_count} with missing content")
+                            # For regular models using the Chat Completions API
+                            if chunk.choices and hasattr(chunk.choices[0], 'delta') and hasattr(chunk.choices[0].delta, 'content'):
+                                content = chunk.choices[0].delta.content
+                                if content is not None:
+                                    # Ensure we're returning a string
+                                    text = str(content)
+                                    debug_log(f"OpenAI: yielding chunk {chunk_count} of length: {len(text)}")
+                                    yield text
+                                else:
+                                    debug_log(f"OpenAI: skipping None content chunk {chunk_count}")
+                            else:
+                                debug_log(f"OpenAI: skipping chunk {chunk_count} with missing content")
                     except Exception as chunk_error:
                         debug_log(f"OpenAI: error processing chunk {chunk_count}: {str(chunk_error)}")
                         # Skip problematic chunks but continue processing
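The `hasattr(chunk, 'output_text')` guard is prudent: in current releases of the OpenAI Python SDK, a Responses stream yields typed events (for example `response.output_text.delta`) rather than Chat-Completions-style chunks, so incremental text may arrive on an event's `delta` field instead. A hedged sketch of event-based consumption, to run inside an async function with `client` an `AsyncOpenAI` instance as in the earlier example, and to be verified against the installed SDK:

```
stream = await client.responses.create(
    model="o4-mini",
    input=[{"role": "user", "content": "Summarize the plan."}],
    reasoning={"effort": "low"},
    stream=True,
)
async for event in stream:
    # Event types are strings such as "response.output_text.delta";
    # the incremental text rides on event.delta.
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
```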
@@ -221,11 +270,32 @@
             for model in models_response.data:
                 # Use 'id' as both id and name for now; can enhance with more info if needed
                 models.append({"id": model.id, "name": model.id})
+
+            # Add reasoning models which might not be in the models list
+            reasoning_models = [
+                {"id": "o1", "name": "o1 (Reasoning)"},
+                {"id": "o1-mini", "name": "o1-mini (Reasoning)"},
+                {"id": "o3", "name": "o3 (Reasoning)"},
+                {"id": "o3-mini", "name": "o3-mini (Reasoning)"},
+                {"id": "o4-mini", "name": "o4-mini (Reasoning)"}
+            ]
+
+            # Add reasoning models if they're not already in the list
+            existing_ids = {model["id"] for model in models}
+            for reasoning_model in reasoning_models:
+                if reasoning_model["id"] not in existing_ids:
+                    models.append(reasoning_model)
+
             return models
         except Exception as e:
             # Fallback to a static list if API call fails
             return [
                 {"id": "gpt-3.5-turbo", "name": "gpt-3.5-turbo"},
                 {"id": "gpt-4", "name": "gpt-4"},
-                {"id": "gpt-4-turbo", "name": "gpt-4-turbo"}
+                {"id": "gpt-4-turbo", "name": "gpt-4-turbo"},
+                {"id": "o1", "name": "o1 (Reasoning)"},
+                {"id": "o1-mini", "name": "o1-mini (Reasoning)"},
+                {"id": "o3", "name": "o3 (Reasoning)"},
+                {"id": "o3-mini", "name": "o3-mini (Reasoning)"},
+                {"id": "o4-mini", "name": "o4-mini (Reasoning)"}
             ]
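The merge is a plain dedup-by-id, so an entry from the live API listing wins over the hard-coded default with the same id. In isolation (values illustrative):

```
models = [{"id": "o1", "name": "o1"}]                    # from the API
defaults = [{"id": "o1", "name": "o1 (Reasoning)"},
            {"id": "o3-mini", "name": "o3-mini (Reasoning)"}]

existing_ids = {m["id"] for m in models}
models.extend(d for d in defaults if d["id"] not in existing_ids)
print([m["id"] for m in models])  # ['o1', 'o3-mini'] - no duplicate 'o1'
```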
--- chat_console-0.3.995/app/config.py
+++ chat_console-0.4.0/app/config.py
@@ -52,6 +52,31 @@ DEFAULT_CONFIG = {
         "max_tokens": 8192,
         "display_name": "GPT-4"
     },
+    "o1": {
+        "provider": "openai",
+        "max_tokens": 128000,
+        "display_name": "o1 (Reasoning)"
+    },
+    "o1-mini": {
+        "provider": "openai",
+        "max_tokens": 128000,
+        "display_name": "o1-mini (Reasoning)"
+    },
+    "o3": {
+        "provider": "openai",
+        "max_tokens": 128000,
+        "display_name": "o3 (Reasoning)"
+    },
+    "o3-mini": {
+        "provider": "openai",
+        "max_tokens": 128000,
+        "display_name": "o3-mini (Reasoning)"
+    },
+    "o4-mini": {
+        "provider": "openai",
+        "max_tokens": 128000,
+        "display_name": "o4-mini (Reasoning)"
+    },
     # Use the corrected keys from anthropic.py
     "claude-3-opus-20240229": {
         "provider": "anthropic",
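Assuming these entries sit where the hunk context places them (directly inside `DEFAULT_CONFIG`; the actual nesting may differ), a lookup would read like this hypothetical snippet:

```
from app.config import DEFAULT_CONFIG  # as packaged in chat-console

entry = DEFAULT_CONFIG.get("o3-mini", {})
print(entry.get("provider"))      # "openai"
print(entry.get("max_tokens"))    # 128000
print(entry.get("display_name"))  # "o3-mini (Reasoning)"
```

One flag for review: 128000 matches the o-series context window size; if this value is ever forwarded as a per-response `max_output_tokens`, the API may reject or clamp it.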
--- chat_console-0.3.995/app/utils.py
+++ chat_console-0.4.0/app/utils.py
@@ -65,12 +65,25 @@ async def generate_conversation_title(message: str, model: str, client: Any) ->
 
     # Generate title
    debug_log(f"Sending title generation request to {title_model}")
-    title = await title_client.generate_completion(
-        messages=title_prompt,
-        model=title_model,
-        temperature=0.7,
-        max_tokens=60
-    )
+
+    # Check if this is a reasoning model (o-series)
+    is_reasoning_model = title_model.startswith(("o1", "o3", "o4")) or title_model in ["o1", "o3", "o4-mini"]
+
+    if is_reasoning_model:
+        # For reasoning models, don't include temperature
+        title = await title_client.generate_completion(
+            messages=title_prompt,
+            model=title_model,
+            max_tokens=60
+        )
+    else:
+        # For non-reasoning models, include temperature
+        title = await title_client.generate_completion(
+            messages=title_prompt,
+            model=title_model,
+            temperature=0.7,
+            max_tokens=60
+        )
 
     # Sanitize the title
     title = title.strip().strip('"\'').strip()
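The two call sites differ only in the `temperature` keyword (which the o-series endpoints do not accept), so a single kwargs dict would be a drop-in for the branch above; our sketch, not the shipped code:

```
kwargs = {"messages": title_prompt, "model": title_model, "max_tokens": 60}
if not is_reasoning_model:
    kwargs["temperature"] = 0.7
title = await title_client.generate_completion(**kwargs)
```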
@@ -755,7 +768,13 @@ def resolve_model_id(model_id_or_name: str) -> str:
         "35-turbo": "gpt-3.5-turbo",
         "35": "gpt-3.5-turbo",
         "4.1-mini": "gpt-4.1-mini",  # Add support for gpt-4.1-mini
-        "4.1": "gpt-4.1"  # Add support for gpt-4.1
+        "4.1": "gpt-4.1",  # Add support for gpt-4.1
+        # Add support for reasoning models
+        "o1": "o1",
+        "o1-mini": "o1-mini",
+        "o3": "o3",
+        "o3-mini": "o3-mini",
+        "o4-mini": "o4-mini"
     }
 
     if input_lower in openai_model_aliases:
@@ -765,23 +784,26 @@ def resolve_model_id(model_id_or_name: str) -> str:
 
     # Special case handling for common typos and model name variations
     typo_corrections = {
-        "o4-mini": "04-mini",
-        "o1": "01",
-        "o1-mini": "01-mini",
-        "o1-preview": "01-preview",
-        "o4": "04",
-        "o4-preview": "04-preview",
-        "o4-vision": "04-vision"
+        # Keep reasoning models as-is, don't convert 'o' to '0'
+        # "o4-mini": "04-mini",
+        # "o1": "01",
+        # "o1-mini": "01-mini",
+        # "o1-preview": "01-preview",
+        # "o4": "04",
+        # "o4-preview": "04-preview",
+        # "o4-vision": "04-vision"
     }
 
+    # Don't convert reasoning model IDs that start with 'o'
     # Check for more complex typo patterns with dates
-    if input_lower.startswith("o1-") and "-202" in input_lower:
+    if input_lower.startswith("o1-") and "-202" in input_lower and not any(input_lower == model_id for model_id in ["o1", "o1-mini", "o3", "o3-mini", "o4-mini"]):
         corrected = "01" + input_lower[2:]
         logger.info(f"Converting '{input_lower}' to '{corrected}' (letter 'o' to zero '0')")
         input_lower = corrected
         model_id_or_name = corrected
 
-    if input_lower in typo_corrections:
+    # Only apply typo corrections if not a reasoning model
+    if input_lower in typo_corrections and not any(input_lower == model_id for model_id in ["o1", "o1-mini", "o3", "o3-mini", "o4-mini"]):
         corrected = typo_corrections[input_lower]
         logger.info(f"Converting '{input_lower}' to '{corrected}' (letter 'o' to zero '0')")
         input_lower = corrected
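Expected behavior after the two changes above, assuming `resolve_model_id` is imported from `app.utils` (outputs shown in comments):

```
from app.utils import resolve_model_id

print(resolve_model_id("o1"))       # "o1"      - alias maps to itself
print(resolve_model_id("o3-mini"))  # "o3-mini" - no 'o' -> '0' rewrite
print(resolve_model_id("4.1"))      # "gpt-4.1" - existing alias still applies
```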
--- chat_console-0.3.995/PKG-INFO
+++ chat_console-0.4.0/chat_console.egg-info/PKG-INFO
(identical to the PKG-INFO diff at the top: the version bump to 0.4.0 and the same description changes)