code-lm 0.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,181 @@
+ Metadata-Version: 2.2
+ Name: code-lm
+ Version: 0.1.0
+ Summary: An AI coding assistant CLI using OpenRouter and various LLM models.
+ Home-page: https://github.com/Panagiotis897/lm-code
+ Author: Panagiotis897
+ Author-email: Panagiotis897 <your.email@example.com>
+ Project-URL: Homepage, https://github.com/Panagiotis897/lm-code
+ Project-URL: Bug Tracker, https://github.com/Panagiotis897/lm-code/issues
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Operating System :: OS Independent
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Environment :: Console
+ Classifier: Intended Audience :: Developers
+ Classifier: Topic :: Software Development
+ Classifier: Topic :: Utilities
+ Requires-Python: >=3.7
+ Description-Content-Type: text/markdown
+ Requires-Dist: click>=8.0
+ Requires-Dist: rich>=13.0
+ Requires-Dist: requests>=2.25.0
+ Requires-Dist: PyYAML>=6.0
+ Requires-Dist: tiktoken>=0.6.0
+ Requires-Dist: questionary>=2.0.0
+ Dynamic: author
+ Dynamic: home-page
+ Dynamic: requires-python
+
+ # Gemini Code
+
+ A powerful AI coding assistant for your terminal, powered by Gemini 2.5 Pro, with support for other LLM models.
+ More information [here](https://blossom-tarsier-434.notion.site/Gemini-Code-1c6c13716ff180db86a0c7f4b2da13ab?pvs=4).
+
+ ## Features
+
+ - Interactive chat sessions in your terminal
+ - Multiple model support (Gemini 2.5 Pro, Gemini 1.5 Pro, and more)
+ - Basic history management (prevents excessive length)
+ - Markdown rendering in the terminal
+ - Automatic tool usage by the assistant:
+   - File operations (view, edit, list, grep, glob)
+   - Directory operations (ls, tree, create_directory)
+   - System commands (bash)
+   - Quality checks (linting, formatting)
+   - Test running capabilities (pytest, etc.)
+
+ ## Installation
+
+ ### Method 1: Install from PyPI (Recommended)
+
+ ```bash
+ # Install directly from PyPI
+ pip install code-lm
+ ```
+
+ ### Method 2: Install from Source
+
+ ```bash
+ # Clone the repository
+ git clone https://github.com/Panagiotis897/lm-code.git
+ cd lm-code
+
+ # Install the package
+ pip install -e .
+ ```
+
+ ## Setup
+
+ Before using the CLI, you need to set up your API key:
+
+ ```bash
+ # Set up your OpenRouter API key
+ lmcode setup YOUR_OPENROUTER_API_KEY
+ ```
+
+ ## Usage
+
+ ```bash
+ # Start an interactive session with the default model
+ lmcode
+
+ # Start a session with a specific model
+ lmcode --model google/gemini-2.5-pro-exp-03-25:free
+
+ # Set the default model
+ lmcode set-default-model google/gemini-2.5-pro-exp-03-25:free
+
+ # List all available models
+ lmcode list-models
+ ```
+
+ ## Interactive Commands
+
+ During an interactive session, you can use these commands:
+
+ - `/exit` - Exit the chat session
+ - `/help` - Display help information
+
+ ## How It Works
+
+ ### Tool Usage
+
+ Unlike direct command-line tools, the CLI's tools are invoked automatically by the assistant to help answer your questions. For example:
+
+ 1. You ask: "What files are in the current directory?"
+ 2. The assistant uses the `ls` tool behind the scenes
+ 3. The assistant provides you with a formatted response
+
+ This approach makes the interaction more natural, similar to how Claude Code works.
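The three-step flow above can be sketched as a minimal dispatch loop. This is a hypothetical illustration, not the package's actual implementation: the `TOOLS` registry shape, `ls_tool`, and the hard-coded tool choice are assumptions standing in for the model's own decision.

```python
import os

def ls_tool(path="."):
    """List directory entries, like the CLI's `ls` tool."""
    return sorted(os.listdir(path))

# Assumed registry shape; the real CLI exposes its tools via AVAILABLE_TOOLS.
TOOLS = {"ls": ls_tool}

def answer(question):
    # A real model chooses the tool; this stub hard-codes the choice.
    if "files" in question.lower():
        entries = TOOLS["ls"]()
        return "Files here:\n" + "\n".join(f"- {e}" for e in entries)
    return "No tool needed for that."

print(answer("What files are in the current directory?"))
```

The user never invokes `ls` directly; the assistant decides a tool is needed, runs it, and folds the raw result into a readable reply.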
+
+ ## Development
+
+ This project is under active development. More models and features will be added soon!
+
+ ### Recent Changes in v0.1.69
+
+ - Added a test_runner tool to execute automated tests (e.g., pytest)
+ - Fixed syntax issues in the tool definitions
+ - Improved error handling in tool execution
+ - Updated status displays during tool execution with more informative messages
+ - Added additional utility tools (directory_tools, quality_tools, task_complete_tool, summarizer_tool)
+
+ ### Recent Changes in v0.1.21
+
+ - Implemented native Gemini function calling for much more reliable tool usage
+ - Rewrote the tool execution system to use Gemini's built-in function calling capability
+ - Enhanced the edit tool to better handle file creation and content updates
+ - Updated the system prompt to encourage function calls instead of text-based tool usage
+ - Fixed issues with Gemini not actively creating or modifying files
+ - Simplified the BaseTool interface to support both legacy and function call modes
+
+ ### Recent Changes in v0.1.20
+
+ - Fixed an error with the Flask version check in example code
+ - Improved error handling in system prompt example code
+
+ ### Recent Changes in v0.1.19
+
+ - Improved the system prompt to encourage more active tool usage
+ - Added a thinking/planning phase to help Gemini reason about solutions
+ - Enhanced the response format to prioritize creating and modifying files over printing code
+ - Filtered out thinking stages from the final output to keep responses clean
+ - Made Gemini more proactive as a coding partner, not just an advisor
+
+ ### Recent Changes in v0.1.18
+
+ - Updated the default model to Gemini 2.5 Pro Experimental (models/gemini-2.5-pro-exp-03-25)
+ - Updated system prompts to reference Gemini 2.5 Pro
+ - Improved model usage and documentation
+
+ ### Recent Changes in v0.1.17
+
+ - Added a `list-models` command to show all available Gemini models
+ - Improved error handling for models that don't exist or require permission
+ - Added a model initialization test to verify model availability
+ - Updated the help documentation with new commands
+
+ ### Recent Changes in v0.1.16
+
+ - Fixed file creation issues: the CLI now properly handles creating files with content
+ - Enhanced tool pattern matching: added support for more formats that Gemini might use
+ - Improved edit tool handling: better handling of missing arguments when creating files
+ - Added a special case for natural-language edit commands (e.g., "edit filename with content: ...")
+
+ ### Recent Changes in v0.1.15
+
+ - Fixed tool execution issues: the CLI now properly processes tool calls and executes Bash commands correctly
+ - Fixed argument parsing for the Bash tool: commands are now passed as a single argument to avoid parsing issues
+ - Improved error handling in tools: better handling of failures and timeouts
+ - Updated the model name throughout the codebase to use `gemini-1.5-pro` consistently
+
+ ### Known Issues
+
+ - If you created a config file with an earlier version, you may need to delete it to get the correct defaults:
+   ```bash
+   rm -rf ~/.config/gemini-code
+   ```
+
+ ## License
+
+ MIT
@@ -0,0 +1,22 @@
+ gemini_cli/__init__.py,sha256=r8biTIkUHhaUtKuiP90Gyodr3Dh1Uk1ETMM9gIpt4UQ,107
+ gemini_cli/config.py,sha256=ZzD3O-dIH3fXkucDdaeBFb0N5vODIf6vqOtT7d2kmhA,2557
+ gemini_cli/main.py,sha256=x-F-DXFWjCQEziBg7edGz3bsE21hBwcndg82XTgbCl4,9429
+ gemini_cli/utils.py,sha256=fPmaS8Hi6trX8lhuqu52as-ljf3rJhXY0P6QZBmu52A,538
+ gemini_cli/models/__init__.py,sha256=Ql8zY71YLQjt528UkZbakpueMzQcdsiXJXGfP21n-IY,157
+ gemini_cli/models/gemini.py,sha256=aA_aMsay6XipgaH5kCNyC9uUWanzBWQWyYyz5MsDGsk,37507
+ gemini_cli/models/openrouter.py,sha256=BfzF524GI7mvjeu_AqvSZXNOUQHiTUMyfRqk4URE8B8,31954
+ gemini_cli/tools/__init__.py,sha256=gONeoSuMKLFLcLcIQPwQX_VbMqNLSwbAynXNDx95Sm4,3641
+ gemini_cli/tools/base.py,sha256=urYf9rQPaDhM-kQAGcSgDtJ4QyEefWftBtdjN3ixmf4,3293
+ gemini_cli/tools/directory_tools.py,sha256=xzmgaYromC252FIFiHV11Du5NpMsi-8pRuFmp4hvMPk,5704
+ gemini_cli/tools/file_tools.py,sha256=4jQElvVsMXSN7LDiTtbJSCLdDtp9dW9azPwyCr4SKGs,11819
+ gemini_cli/tools/quality_tools.py,sha256=qSL5Ie0ZCNdeDD7VoybGI50KXCNp5ulBVKckpuyekW0,4238
+ gemini_cli/tools/summarizer_tool.py,sha256=js0UVyP9M_VtURuT6ZnUIoogWvaQiSH_dAq12-fnKw0,7265
+ gemini_cli/tools/system_tools.py,sha256=-a1aesREYo-pIpIa5mMUMLailoswfHRjX8Q-8tSLmHI,2038
+ gemini_cli/tools/task_complete_tool.py,sha256=B3qhfwZGLgrxEIbDaWJ8Xrj-G2Ut5xUFi239o0KDk-8,2487
+ gemini_cli/tools/test_runner.py,sha256=p6y18BmdHmCNzlqk9V3zk-7auYfitpyLrCrcOwcyiqs,4100
+ gemini_cli/tools/tree_tool.py,sha256=fX60z6dw2q-SHdJJCbVLWnxta8Lq7cxwi94A2qWTn4Q,4581
+ code_lm-0.1.0.dist-info/METADATA,sha256=4ncXd05mBqbW9LIDfSS4-Rrx5CL4WLmZgSCEiGFitLA,6261
+ code_lm-0.1.0.dist-info/WHEEL,sha256=nn6H5-ilmfVryoAQl3ZQ2l8SH5imPWFpm1A5FgEuFV4,91
+ code_lm-0.1.0.dist-info/entry_points.txt,sha256=X0PFugPAiZDO8YsKuHnGrQJJ2ZSxRtKcNsivc96C9k8,47
+ code_lm-0.1.0.dist-info/top_level.txt,sha256=SRsqiKyD15llWPipzbeaivAzk2CXqZJTf1QDwaxFOgg,11
+ code_lm-0.1.0.dist-info/RECORD,,
@@ -0,0 +1,5 @@
+ Wheel-Version: 1.0
+ Generator: setuptools (75.8.1)
+ Root-Is-Purelib: true
+ Tag: py3-none-any
+
@@ -0,0 +1,2 @@
+ [console_scripts]
+ lmcode = gemini_cli.main:cli
@@ -0,0 +1 @@
+ gemini_cli
gemini_cli/__init__.py ADDED
@@ -0,0 +1,5 @@
+ """
+ LM CLI - A command-line interface for interacting with various LLM models.
+ """
+
+ __version__ = "0.1.70"
gemini_cli/config.py ADDED
@@ -0,0 +1,78 @@
+ """
+ Configuration management for LM CLI.
+ """
+
+ import os
+ import yaml
+ from pathlib import Path
+
+ class Config:
+     """Manages configuration for the LM CLI application."""
+
+     def __init__(self):
+         self.config_dir = Path.home() / ".config" / "gemini-code"
+         self.config_file = self.config_dir / "config.yaml"
+         self._ensure_config_exists()
+         self.config = self._load_config()
+
+     def _ensure_config_exists(self):
+         """Create config directory and file if they don't exist."""
+         self.config_dir.mkdir(parents=True, exist_ok=True)
+
+         if not self.config_file.exists():
+             default_config = {
+                 "api_keys": {},
+                 "default_model": "qwen/qwen-2.5-coder-32b-instruct:free",
+                 "settings": {
+                     "max_tokens": 1000000,
+                     "temperature": 0.7,
+                     "token_warning_threshold": 800000,
+                     "auto_compact_threshold": 950000,
+                 }
+             }
+
+             with open(self.config_file, 'w') as f:
+                 yaml.dump(default_config, f)
+
+     def _load_config(self):
+         """Load configuration from file."""
+         with open(self.config_file, 'r') as f:
+             return yaml.safe_load(f)
+
+     def _save_config(self):
+         """Save configuration to file."""
+         with open(self.config_file, 'w') as f:
+             yaml.dump(self.config, f)
+
+     def get_api_key(self, provider):
+         """Get API key for a specific provider."""
+         return self.config.get("api_keys", {}).get(provider)
+
+     def set_api_key(self, provider, key):
+         """Set API key for a specific provider."""
+         if "api_keys" not in self.config:
+             self.config["api_keys"] = {}
+
+         self.config["api_keys"][provider] = key
+         self._save_config()
+
+     def get_default_model(self):
+         """Get the default model."""
+         return self.config.get("default_model")
+
+     def set_default_model(self, model):
+         """Set the default model."""
+         self.config["default_model"] = model
+         self._save_config()
+
+     def get_setting(self, setting, default=None):
+         """Get a specific setting."""
+         return self.config.get("settings", {}).get(setting, default)
+
+     def set_setting(self, setting, value):
+         """Set a specific setting."""
+         if "settings" not in self.config:
+             self.config["settings"] = {}
+
+         self.config["settings"][setting] = value
+         self._save_config()
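The pattern `Config` uses above (nested dicts persisted to disk on every set, re-read on load) can be exercised in miniature. This sketch is illustrative only: `MiniConfig` is not part of the package, and it serializes with JSON instead of YAML so the example stays dependency-free.

```python
import json
import tempfile
from pathlib import Path

class MiniConfig:
    """Minimal stand-in for Config: a nested dict persisted on every set."""
    def __init__(self, path):
        self.path = Path(path)
        self.config = {"api_keys": {}, "settings": {}}
        self._save()

    def _save(self):
        self.path.write_text(json.dumps(self.config))

    def set_setting(self, setting, value):
        self.config.setdefault("settings", {})[setting] = value
        self._save()

    def get_setting(self, setting, default=None):
        # Re-read from disk to mirror how a fresh Config would load the file.
        data = json.loads(self.path.read_text())
        return data.get("settings", {}).get(setting, default)

with tempfile.TemporaryDirectory() as d:
    cfg = MiniConfig(Path(d) / "config.json")
    cfg.set_setting("temperature", 0.7)
    print(cfg.get_setting("temperature"))
    print(cfg.get_setting("missing", "n/a"))
```

Saving on every mutation keeps the file authoritative, at the cost of one write per `set_*` call, which is the same trade-off `Config._save_config` makes.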
gemini_cli/main.py ADDED
@@ -0,0 +1,225 @@
+ """
+ Main entry point for the LM Code application.
+ Targets OpenRouter and other supported models. Includes ASCII art welcome.
+ """
+
+ import os
+ import sys
+ import click
+ from rich.console import Console
+ from rich.markdown import Markdown
+ from rich.panel import Panel
+ from pathlib import Path
+ import yaml
+ import logging
+ import time
+
+ from .models.openrouter import OpenRouterModel, list_available_models
+ from .config import Config
+ from .utils import count_tokens
+ from .tools import AVAILABLE_TOOLS
+
+ # Set up console and config
+ console = Console()  # Create the console instance here
+ try:
+     config = Config()
+ except Exception as e:
+     console.print(f"[bold red]Error loading configuration:[/bold red] {e}")
+     config = None
+
+ # Set up logging with an explicit configuration
+ log_level = os.environ.get("LOG_LEVEL", "WARNING").upper()  # Default to WARNING
+ log_format = '%(asctime)s - %(levelname)s - [%(filename)s:%(lineno)d] - %(message)s'
+
+ # Get the root logger and set its level
+ root_logger = logging.getLogger()
+ root_logger.setLevel(log_level)
+
+ # Remove existing handlers to avoid duplicates if basicConfig was called elsewhere
+ for handler in root_logger.handlers[:]:
+     root_logger.removeHandler(handler)
+
+ # Add a stream handler to output to the console
+ stream_handler = logging.StreamHandler(sys.stdout)
+ stream_handler.setLevel(log_level)
+ formatter = logging.Formatter(log_format)
+ stream_handler.setFormatter(formatter)
+ root_logger.addHandler(stream_handler)
+
+ log = logging.getLogger(__name__)  # Logger for this module
+ log.info(f"Logging initialized with level: {log_level}")
+
+ # --- Default Model ---
+ DEFAULT_MODEL = "qwen/qwen-2.5-coder-32b-instruct:free"
+ # --- ---
+
+ # --- Supported Models ---
+ SUPPORTED_MODELS = [
+     {"id": "qwen/qwen-2.5-coder-32b-instruct:free", "description": "Qwen 2.5 Coder 32B"},
+     {"id": "qwen/qwq-32b:free", "description": "Qwen QwQ 32B"},
+     {"id": "deepseek/deepseek-r1:free", "description": "DeepSeek R1"},
+     {"id": "google/gemma-3-27b-it:free", "description": "Gemma 3 27B Instruct"},
+     {"id": "google/gemini-2.5-pro-exp-03-25:free", "description": "Gemini 2.5 Pro Experimental"},
+ ]
+ # --- ---
+
+ # --- ASCII Art Definition ---
+ LM_CODE_ART = r"""
+
+ [green]
+ ██╗      ███╗   ███╗     ██████╗ ██████╗ ██████╗ ███████╗
+ ██║      ████╗ ████║    ██╔════╝██╔═══██╗██╔══██╗██╔════╝
+ ██║      ██╔████╔██║    ██║     ██║   ██║██║  ██║█████╗
+ ██║      ██║╚██╔╝██║    ██║     ██║   ██║██║  ██║██╔══╝
+ ███████╗ ██║ ╚═╝ ██║    ╚██████╗╚██████╔╝██████╔╝███████╗
+ ╚══════╝ ╚═╝     ╚═╝     ╚═════╝ ╚═════╝ ╚═════╝ ╚══════╝
+ [/green]
+ """
+ # --- End ASCII Art ---
+
+
+ CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])
+
+ @click.group(invoke_without_command=True, context_settings=CONTEXT_SETTINGS)
+ @click.option(
+     '--model', '-m',
+     help=f'Model ID to use (e.g., qwen/qwen-2.5-coder-32b-instruct:free). Default: {DEFAULT_MODEL}',
+     default=None
+ )
+ @click.pass_context
+ def cli(ctx, model):
+     """Interactive CLI for LM models with coding assistance tools."""
+     if not config:
+         console.print("[bold red]Configuration could not be loaded. Cannot proceed.[/bold red]")
+         sys.exit(1)
+
+     if ctx.invoked_subcommand is None:
+         model_name_to_use = model or config.get_default_model() or DEFAULT_MODEL
+         log.info(f"Attempting to start interactive session with model: {model_name_to_use}")
+         # Pass the console object to start_interactive_session
+         start_interactive_session(model_name_to_use, console)
+
+ @cli.command(name="setup")
+ @click.argument('key', required=True)
+ def setup(key):
+     if not config: console.print("[bold red]Config error.[/bold red]"); return
+     try: config.set_api_key("openrouter", key); console.print("[green]✓[/green] OpenRouter API key saved.")
+     except Exception as e: console.print(f"[bold red]Error saving API key:[/bold red] {e}")
+
+ @cli.command(name="set-default-model")
+ @click.argument('model_name', required=True)
+ def set_default_model(model_name):
+     if not config: console.print("[bold red]Config error.[/bold red]"); return
+     try: config.set_default_model(model_name); console.print(f"[green]✓[/green] Default model set to [bold]{model_name}[/bold].")
+     except Exception as e: console.print(f"[bold red]Error setting default model:[/bold red] {e}")
+
+ @cli.command(name="list-models")
+ def list_models():
+     if not config: console.print("[bold red]Config error.[/bold red]"); return
+     api_key = config.get_api_key("openrouter")
+     if not api_key: console.print("[bold red]Error:[/bold red] API key not found. Run 'lmcode setup YOUR_OPENROUTER_API_KEY'."); return
+     console.print("[yellow]Fetching models...[/yellow]")
+     try:
+         models_list = list_available_models(api_key)
+         if not models_list:
+             console.print("[red]No models found or fetch error.[/red]")
+         else:
+             console.print("\n[bold cyan]Supported Models:[/bold cyan]")
+             for model in SUPPORTED_MODELS:
+                 console.print(f"- [bold green]{model['id']}[/bold green]: {model['description']}")
+             console.print("\nUse 'lmcode --model MODEL' or 'lmcode set-default-model MODEL'.")
+     except Exception as e:
+         console.print(f"[bold red]Error listing models:[/bold red] {e}")
+         log.error("List models failed", exc_info=True)
+
+
+ # --- start_interactive_session uses OpenRouter ---
+ def start_interactive_session(model_name: str, console: Console):
+     """Start an interactive chat session with the selected OpenRouter model."""
+     if not config: console.print("[bold red]Config error.[/bold red]"); return
+
+     # --- Display welcome art ---
+     console.clear()
+     console.print(LM_CODE_ART)
+     console.print(Panel("[b]Welcome to LM Code AI Assistant![/b]", border_style="green", expand=False))
+     time.sleep(0.1)
+     # --- End welcome art ---
+
+     api_key = config.get_api_key("openrouter")
+     if not api_key:
+         console.print("\n[bold red]Error:[/bold red] OpenRouter API key not found.")
+         console.print("Please run [bold]'lmcode setup YOUR_OPENROUTER_API_KEY'[/bold] first.")
+         return
+
+     try:
+         console.print(f"\nInitializing model [bold]{model_name}[/bold]...")
+         # Pass the console object to the OpenRouterModel constructor
+         model = OpenRouterModel(api_key=api_key, console=console, model_name=model_name)
+         console.print("[green]Model initialized successfully.[/green]\n")
+
+     except Exception as e:
+         console.print(f"\n[bold red]Error initializing model '{model_name}':[/bold red] {e}")
+         log.error(f"Failed to initialize model {model_name}", exc_info=True)
+         console.print("Please check the model name, API key permissions, and network. Use 'lmcode list-models'.")
+         return
+
+     # --- Session start message ---
+     console.print("Type '/help' for commands, '/exit' or Ctrl+C to quit.")
+
+     while True:
+         try:
+             user_input = console.input("[bold green]You:[/bold green] ")
+
+             if user_input.lower() == '/exit': break
+             elif user_input.lower() == '/help': show_help(); continue
+
+             # Display the initial "thinking" status; generate() handles intermediate ones
+             response_text = model.generate(user_input)
+
+             if response_text is None and user_input.startswith('/'): console.print(f"[yellow]Unknown command:[/yellow] {user_input}"); continue
+             elif response_text is None: console.print("[red]Received an empty response from the model.[/red]"); log.warning("generate() returned None unexpectedly."); continue
+
+             console.print("[bold green]Assistant:[/bold green]")
+             console.print(Markdown(response_text), highlight=True)
+
+         except KeyboardInterrupt:
+             console.print("\n[yellow]Session interrupted. Exiting.[/yellow]")
+             break
+         except Exception as e:
+             console.print(f"\n[bold red]An error occurred during the session:[/bold red] {e}")
+             log.error("Error during interactive loop", exc_info=True)
+
+
+ def show_help():
+     """Show help information for interactive mode."""
+     tool_list_formatted = ""
+     if AVAILABLE_TOOLS:
+         # Indent the bullet points
+         tool_list_formatted = "\n".join([f"  • [white]`{name}`[/white]" for name in sorted(AVAILABLE_TOOLS.keys())])
+     else:
+         tool_list_formatted = "  (No tools available)"
+
+     # Use direct rich markup and ensure newlines are preserved
+     help_text = f""" [bold]Help[/bold]
+
+  [cyan]Interactive Commands:[/cyan]
+   /exit
+   /help
+
+  [cyan]CLI Commands:[/cyan]
+   lmcode setup KEY
+   lmcode list-models
+   lmcode set-default-model NAME
+   lmcode --model NAME
+
+  [cyan]Workflow Hint:[/cyan] Analyze -> Plan -> Execute -> Verify -> Summarize
+
+  [cyan]Available Tools:[/cyan]
+ {tool_list_formatted}
+  """
+     # Print directly to a Panel without a Markdown wrapper
+     console.print(Panel(help_text, title="Help", border_style="green", expand=False))
+
+
+ if __name__ == "__main__":
+     cli()
@@ -0,0 +1,6 @@
+ """
+ Model interfaces for different LLM providers.
+ """
+
+ # Import model classes for easy access
+ from .openrouter import OpenRouterModel, list_available_models