axe-cli 1.7.6__py3-none-any.whl → 1.8.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
axe_cli/README.md ADDED
@@ -0,0 +1,304 @@
1
+ # Axe CLI Technical Reference
2
+
3
+ This document covers the technical details of Axe CLI, including configuration, session management, architecture, and advanced features.
4
+
5
+ ## Sessions and Context Management
6
+
7
+ axe automatically saves your conversation history, allowing you to continue previous work at any time.
8
+
9
+ ### Session resuming
10
+
11
+ Each time you start axe, a new session is created. There are several ways to continue a previous conversation:
12
+
13
+ **Continue the most recent session:**
14
+
15
+ Use the `--continue` flag to continue the most recent session in the current working directory:
16
+
17
+ ```bash
18
+ axe --continue
19
+ ```
20
+
21
+ **Switch to a specific session:**
22
+
23
+ Use the `--session` flag to switch to a session with a specific ID:
24
+
25
+ ```bash
26
+ axe --session abc123
27
+ ```
28
+
29
+ **Switch sessions during runtime:**
30
+
31
+ Enter `/sessions` (or `/resume`) to view all sessions in the current working directory, and use arrow keys to select the session you want to switch to:
32
+
33
+ ```
34
+ /sessions
35
+ ```
36
+
37
+ The list shows each session's title and last update time, helping you find the conversation you want to continue.
38
+
39
+ ### Startup replay
40
+
41
+ When you continue an existing session, axe replays the previous conversation history, displaying earlier messages and AI responses so you can quickly regain context.
42
+
43
+ ### Clear and compact
44
+
45
+ As the conversation progresses, the context grows longer. axe will automatically compress the context when needed to ensure the conversation can continue.
46
+
47
+ You can also manually manage the context using slash commands:
48
+
49
+ **Clear context:**
50
+
51
+ Enter `/clear` to clear all context in the current session and start a fresh conversation:
52
+
53
+ ```
54
+ /clear
55
+ ```
56
+
57
+ After clearing, the AI will forget all previous conversation content. You usually don't need to use this command; for new tasks, starting a new session is a better choice.
58
+
59
+ **Compact context:**
60
+
61
+ Enter `/compact` to have the AI summarize the current conversation and replace the original context with the summary:
62
+
63
+ ```
64
+ /compact
65
+ ```
66
+
67
+ Compacting preserves key information while reducing token consumption. This is useful when the conversation is long but you still want to retain some context.
68
+
69
+ ## Configuration
70
+
71
+ axe uses configuration files to manage API providers, models, services, and runtime parameters, supporting both TOML and JSON formats.
72
+
73
+ ### Config file location
74
+
75
+ The default configuration file is located at `~/.axe/config.toml`. On first run, if the configuration file doesn't exist, axe will automatically create a default configuration file.
76
+
77
+ You can specify a different configuration file (TOML or JSON format) with the `--config-file` flag:
78
+
79
+ ```bash
80
+ axe --config-file /path/to/config.toml
81
+ ```
82
+
83
+ When calling axe programmatically, you can also pass the complete configuration content directly via the `--config` flag:
84
+
85
+ ```bash
86
+ axe --config '{"default_model": "claude-sonnet-4", "providers": {...}, "models": {...}}'
87
+ ```
88
+
89
+ ### Config items
90
+
91
+ The configuration file contains the following top-level configuration items:
92
+
93
+ | Item | Type | Description |
94
+ |------|------|-------------|
95
+ | default_model | string | Default model name, must be a model defined in models |
96
+ | default_thinking | boolean | Whether to enable thinking mode by default (defaults to false) |
97
+ | providers | table | API provider configuration |
98
+ | models | table | Model configuration |
99
+ | loop_control | table | Agent loop control parameters |
100
+ | services | table | External service configuration (search, fetch) |
101
+ | mcp | table | MCP client configuration |
102
+
103
+ **Complete configuration example:**
104
+
105
+ ```toml
106
+ default_model = "claude-sonnet-4"
107
+ default_thinking = false
108
+
109
+ [providers.anthropic]
110
+ type = "anthropic"
111
+ base_url = "https://api.anthropic.com/v1"
112
+ api_key = "sk-ant-xxx"
113
+
114
+ [models.claude-sonnet-4]
115
+ provider = "anthropic"
116
+ model = "claude-sonnet-4-20250514"
117
+ max_context_size = 200000
118
+
119
+ [loop_control]
120
+ max_steps_per_turn = 100
121
+ max_retries_per_step = 3
122
+ max_ralph_iterations = 0
123
+ reserved_context_size = 50000
124
+
125
+ [services.search]
126
+ base_url = "https://api.example.com/search"
127
+ api_key = "sk-xxx"
128
+
129
+ [services.fetch]
130
+ base_url = "https://api.example.com/fetch"
131
+ api_key = "sk-xxx"
132
+
133
+ [mcp.client]
134
+ tool_call_timeout_ms = 60000
135
+ ```
136
+
137
+ ### providers
138
+
139
+ `providers` defines API provider connection information. Each provider is keyed by a unique name.
140
+
141
+ | Field | Type | Required | Description |
142
+ |-------|------|----------|-------------|
143
+ | type | string | Yes | Provider type (e.g., anthropic, openai) |
144
+ | base_url | string | Yes | API base URL |
145
+ | api_key | string | Yes | API key |
146
+ | env | table | No | Environment variables to set before creating provider instance |
147
+ | custom_headers | table | No | Custom HTTP headers to attach to requests |
148
+
149
+ **Example:**
150
+
151
+ ```toml
152
+ [providers.anthropic]
153
+ type = "anthropic"
154
+ base_url = "https://api.anthropic.com/v1"
155
+ api_key = "sk-ant-xxx"
156
+ custom_headers = { "X-Custom-Header" = "value" }
157
+ ```
158
+
159
+ ### models
160
+
161
+ `models` defines the available models. Each model is keyed by a unique name.
162
+
163
+ | Field | Type | Required | Description |
164
+ |-------|------|----------|-------------|
165
+ | provider | string | Yes | Provider name to use, must be defined in providers |
166
+ | model | string | Yes | Model identifier (model name used in API) |
167
+ | max_context_size | integer | Yes | Maximum context length (in tokens) |
168
+ | capabilities | array | No | Model capability list |
169
+
170
+ **Example:**
171
+
172
+ ```toml
173
+ [models.claude-sonnet-4]
174
+ provider = "anthropic"
175
+ model = "claude-sonnet-4-20250514"
176
+ max_context_size = 200000
177
+ capabilities = ["thinking", "image_in", "video_in"]
178
+ ```
179
+
180
+ ### loop_control
181
+
182
+ `loop_control` controls agent execution loop behavior.
183
+
184
+ | Field | Type | Default | Description |
185
+ |-------|------|---------|-------------|
186
+ | max_steps_per_turn | integer | 100 | Maximum steps per turn |
187
+ | max_retries_per_step | integer | 3 | Maximum retries per step |
188
+ | max_ralph_iterations | integer | 0 | Extra iterations after each user message; 0 disables; -1 is unlimited |
189
+ | reserved_context_size | integer | 50000 | Reserved token count for LLM response generation; auto-compaction triggers when context_tokens + reserved_context_size >= max_context_size |
190
+
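The auto-compaction trigger described for `reserved_context_size` can be sketched as a simple check (the function name here is hypothetical, not part of the axe-cli API):

```python
def should_compact(context_tokens: int, max_context_size: int,
                   reserved_context_size: int = 50_000) -> bool:
    # Compaction triggers when the current context plus the reserved
    # response budget would no longer fit in the model's context window.
    return context_tokens + reserved_context_size >= max_context_size

# With the default reserved_context_size and a 200K-token model,
# compaction kicks in once the conversation reaches 150K tokens.
print(should_compact(150_000, 200_000))  # True
print(should_compact(100_000, 200_000))  # False
```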
191
+ ### services
192
+
193
+ `services` configures external services used by axe.
194
+
195
+ **search service:**
196
+
197
+ Configures the web search service. When enabled, the SearchWeb tool becomes available.
198
+
199
+ | Field | Type | Required | Description |
200
+ |-------|------|----------|-------------|
201
+ | base_url | string | Yes | Search service API URL |
202
+ | api_key | string | Yes | API key |
203
+ | custom_headers | table | No | Custom HTTP headers to attach to requests |
204
+
205
+ **fetch service:**
206
+
207
+ Configures the web fetch service. When enabled, the FetchURL tool prefers this service for fetching webpage content.
208
+
209
+ | Field | Type | Required | Description |
210
+ |-------|------|----------|-------------|
211
+ | base_url | string | Yes | Fetch service API URL |
212
+ | api_key | string | Yes | API key |
213
+ | custom_headers | table | No | Custom HTTP headers to attach to requests |
214
+
215
+ ### mcp
216
+
217
+ `mcp` configures MCP client behavior.
218
+
219
+ | Field | Type | Default | Description |
220
+ |-------|------|---------|-------------|
221
+ | client.tool_call_timeout_ms | integer | 60000 | MCP tool call timeout (milliseconds) |
222
+
223
+ ## Architecture
224
+
225
+ ```
226
+ ┌──────────────────────────────────────────────────────────────┐
227
+ │ YOUR CODEBASE │
228
+ │ 100K lines across 500 files │
229
+ └───────────────────────┬──────────────────────────────────────┘
230
+
231
+
232
+ ┌──────────────────────────────────────────────────────────────┐
233
+ │ AXE-DIG ENGINE │
234
+ │ 5-layer analysis + semantic embeddings │
235
+ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌────────┐ │
236
+ │ │ AST │→│ Calls │→│ CFG │→│ DFG │→│ PDG │ │
237
+ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ └────────┘ │
238
+ │ │
239
+ │ In-memory daemon: 100ms queries instead of 30s CLI spawns │
240
+ └───────────────────────┬──────────────────────────────────────┘
241
+
242
+
243
+ ┌──────────────────────────────────────────────────────────────┐
244
+ │ AXE AGENT │
245
+ │ • Understands code semantically (not just text) │
246
+ │ • Extracts minimal context (95% token savings) │
247
+ │ • Executes tools (file ops, shell, multi-agent) │
248
+ │ • Interactive shell UI with Ctrl+X toggle │
249
+ └──────────────────────────────────────────────────────────────┘
250
+ ```
251
+
252
+ **The difference:**
253
+ - **Other tools**: Dump 100K lines → Claude figures it out → Burn tokens
254
+ - **axe**: Extract 5K tokens of pure signal → Surgical edits → Save money
255
+
256
+ ## Advanced features
257
+
258
+ ### Multi-agent workflows
259
+ Spawn subagents for parallel tasks:
260
+
261
+ ```bash
262
+ # Main agent delegates to specialists
263
+ Task "refactor auth module" --agent refactor-specialist
264
+ Task "update tests" --agent test-specialist
265
+ Task "update docs" --agent docs-specialist
266
+ ```
267
+
268
+ ### Skills system
269
+ Reusable workflows and domain expertise:
270
+
271
+ ```bash
272
+ # Available skills auto-detected from project
273
+ /skill:docker-deploy
274
+ /skill:api-design
275
+ /skill:performance-optimization
276
+ ```
277
+
278
+ ### Context management
279
+ axe maintains conversation history and can checkpoint/restore:
280
+
281
+ ```bash
282
+ # Save current context
283
+ /checkpoint "before-refactor"
284
+
285
+ # Restore if things go wrong
286
+ /restore "before-refactor"
287
+ ```
288
+
289
+ ## MCP Integration
290
+
291
+ For AI tools integration, axe supports Model Context Protocol (MCP).
292
+
293
+ **Add to your MCP-compatible tool's configuration:**
294
+
295
+ ```json
296
+ {
297
+ "mcpServers": {
298
+ "axe-dig": {
299
+ "command": "dig-mcp",
300
+ "args": ["--project", "/path/to/your/project"]
301
+ }
302
+ }
303
+ }
304
+ ```
@@ -0,0 +1,160 @@
1
+ # Agents and Subagents
2
+
3
+ An agent defines the AI's behavior, including system prompts, available tools, and subagents. You can use built-in agents or create custom agents.
4
+
5
+ ## Lethal Efficiency with Dynamic Subagents
6
+
7
+ axe isn't limited to one main agent. You can create subagents and tasks for *anything* you want.
8
+
9
+ Need a dedicated security researcher? A ruthlessly precise code reviewer? A creative copywriter? axe can create and deploy specialized subagents based on your exact requirements. These subagents help you complete tasks better, faster, and more efficiently—operating with lethal precision to divide and conquer complex workflows.
10
+
11
+ ## Built-in agents
12
+
13
+ axe provides two built-in agents. You can select one at startup with the `--agent` flag:
14
+
15
+ ```bash
16
+ axe --agent default
17
+ ```
18
+
19
+ **default**
20
+
21
+ The default agent, suitable for general use. Enabled tools:
22
+
23
+ - Task, SetTodoList, Shell, ReadFile, ReadMediaFile, Glob, Grep, WriteFile, StrReplaceFile
24
+ - CodeSearch, CodeContext, CodeStructure, CodeImpact (axe-dig tools)
25
+
26
+ **okabe**
27
+
28
+ An experimental agent for testing new prompts and tools. Adds SendDMail on top of default.
29
+
30
+ ## Custom agent files
31
+
32
+ Agents are defined in YAML format. Load a custom agent with the `--agent-file` flag:
33
+
34
+ ```bash
35
+ axe --agent-file /path/to/my-agent.yaml
36
+ ```
37
+
38
+ **Basic structure:**
39
+
40
+ ```yaml
41
+ version: 1
42
+ agent:
43
+ name: my-agent
44
+ system_prompt_path: ./system.md
45
+ tools:
46
+ - "axe_cli.tools.shell:Shell"
47
+ - "axe_cli.tools.file:ReadFile"
48
+ - "axe_cli.tools.file:WriteFile"
49
+ ```
50
+
51
+ **Inheritance and overrides:**
52
+
53
+ Use `extend` to inherit another agent's configuration and only override what you need to change:
54
+
55
+ ```yaml
56
+ version: 1
57
+ agent:
58
+ extend: default # Inherit from default agent
59
+ system_prompt_path: ./my-prompt.md # Override system prompt
60
+ exclude_tools: # Exclude certain tools
61
+ - "axe_cli.tools.web:SearchWeb"
62
+ - "axe_cli.tools.web:FetchURL"
63
+ ```
64
+
65
+ `extend: default` inherits from the built-in default agent. You can also specify a relative path to inherit from another agent file.
66
+
67
+ **Configuration fields:**
68
+
69
+ | Field | Description | Required |
70
+ |-------|-------------|----------|
71
+ | extend | Agent to inherit from, can be `default` or a relative path | No |
72
+ | name | Agent name | Yes (unless inheriting) |
73
+ | system_prompt_path | System prompt file path, relative to agent file | Yes (unless inheriting) |
74
+ | system_prompt_args | Custom arguments passed to system prompt, merged when inheriting | No |
75
+ | tools | Tool list, format is `module:ClassName` | Yes (unless inheriting) |
76
+ | exclude_tools | Tools to exclude | No |
77
+ | subagents | Subagent definitions | No |
78
+
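The `module:ClassName` tool format can be resolved with Python's standard import machinery. A minimal sketch (the `resolve_tool` helper is illustrative; the actual loader in axe-cli may differ):

```python
import importlib

def resolve_tool(spec: str):
    """Resolve a 'module:ClassName' spec to a class object."""
    module_name, _, class_name = spec.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Demonstrated with a stdlib class rather than an axe_cli tool:
cls = resolve_tool("collections:OrderedDict")
print(cls.__name__)  # OrderedDict
```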
79
+ ### System prompt built-in parameters
80
+
81
+ The system prompt file is a Markdown template that can use `${VAR}` syntax to reference variables. Built-in variables include:
82
+
83
+ | Variable | Description |
84
+ |----------|-------------|
85
+ | ${AXE_NOW} | Current time (ISO format) |
86
+ | ${AXE_WORK_DIR} | Working directory path |
87
+ | ${AXE_WORK_DIR_LS} | Working directory file list |
88
+ | ${AXE_AGENTS_MD} | AGENTS.md file content (if exists) |
89
+ | ${AXE_SKILLS} | Loaded skills list |
90
+
91
+ You can also define custom parameters via `system_prompt_args`:
92
+
93
+ ```yaml
94
+ agent:
95
+ system_prompt_args:
96
+ MY_VAR: "custom value"
97
+ ```
98
+
99
+ Then use `${MY_VAR}` in the prompt.
100
+
101
+ **System prompt example:**
102
+
103
+ ```markdown
104
+ # My Agent
105
+
106
+ You are a helpful assistant. Current time: ${AXE_NOW}.
107
+
108
+ Working directory: ${AXE_WORK_DIR}
109
+
110
+ ${MY_VAR}
111
+ ```
112
+
113
+ ### Defining subagents in agent files
114
+
115
+ Subagents can handle specific types of tasks. After defining subagents in an agent file, the main agent can launch them via the Task tool:
116
+
117
+ ```yaml
118
+ version: 1
119
+ agent:
120
+ extend: default
121
+ subagents:
122
+ coder:
123
+ path: ./coder-sub.yaml
124
+ description: "Handle coding tasks"
125
+ reviewer:
126
+ path: ./reviewer-sub.yaml
127
+ description: "Code review expert"
128
+ ```
129
+
130
+ Subagent files use the same standard agent format, typically inheriting from the main agent and excluding certain tools:
131
+
132
+ ```yaml
133
+ # coder-sub.yaml
134
+ version: 1
135
+ agent:
136
+ extend: ./agent.yaml # Inherit from main agent
137
+ system_prompt_args:
138
+ ROLE_ADDITIONAL: |
139
+ You are now running as a subagent...
140
+ exclude_tools:
141
+ - "axe_cli.tools.multiagent:Task" # Exclude Task tool to avoid nesting
142
+ ```
143
+
144
+ ### How subagents run
145
+
146
+ Subagents launched via the Task tool run in an isolated context and return results to the main agent when complete. Advantages of this approach:
147
+
148
+ - Isolated context, avoiding pollution of the main agent's conversation history
149
+ - Multiple independent tasks can be processed in parallel
150
+ - Subagents can have targeted system prompts
151
+
152
+ ### Dynamic subagent creation
153
+
154
+ CreateSubagent is an advanced tool that lets the AI define new subagent types dynamically at runtime (not enabled by default). To use it, add it to your agent file:
155
+
156
+ ```yaml
157
+ agent:
158
+ tools:
159
+ - "axe_cli.tools.multiagent:CreateSubagent"
160
+ ```
@@ -20,20 +20,28 @@ When responding to the user, you MUST use the SAME language as the user, unless
20
20
 
21
21
  # Code Intelligence Tools (Axe-Dig)
22
22
 
23
- The codebase is automatically indexed on startup using axe-dig, providing powerful semantic code search and analysis capabilities. When working with code, **YOU MUST PREFER** these tools over basic file operations:
23
+ The codebase is automatically indexed on startup using axe-dig. When searching for code, **Grep** is **PREFERRED** for exact results, but **CodeSearch** can be used alongside it to form a **POWERFUL COMBINATION** of semantic and exact search.
24
24
 
25
25
  **Recommended workflow for code tasks:**
26
- 1. **CodeSearch** - Find what you're looking for (semantic discovery)
27
- 2. **Grep** - Get exact file location and line numbers
26
+ 1. **Grep** - PREFERRED. Use this first to find code locations.
27
+ 2. **CodeSearch** - Use in combination with Grep for semantic discovery.
28
28
  3. **CodeImpact** - Understand the function and shows all callers and dependencies, for refactoring (if needed)
29
29
  4. **StrReplaceFile** - Make precise edits
30
30
 
31
31
  ## When to use Code Intelligence tools:
32
32
 
33
33
  - **CodeSearch**: Use this **HEAVILY** for natural language queries to find code by BEHAVIOR or PURPOSE (e.g., "find where subagents are created").
34
- - **Much better than Grep** for finding code based on what it does, even if you don't know the exact variable names.
34
+ - Finds code based on what it does, even if you don't know the exact variable names.
35
35
  - Returns semantic matches even without exact text matches.
36
36
  - Usage: `chop semantic search "natural language query"`
37
+
38
+ **When to use CodeSearch:**
39
+ - Finding code by behavior: "cache with TTL", "validate input", "retry logic"
40
+ - Exploring unfamiliar code: "session management", "error handling patterns"
41
+ - Before refactoring: "who implements this pattern?"
42
+
43
+
44
+ **Scores 0.65-0.80 are excellent matches. Use CodeSearch together with Grep for a powerful search strategy.**
37
45
 
38
46
  - **CodeContext**: Use when you need to understand a specific function, class, or symbol WITHOUT reading entire files.
39
47
  - **Requires a function/symbol name** (often found via CodeSearch first).
@@ -54,11 +62,47 @@ The codebase is automatically indexed on startup using axe-dig, providing powerf
54
62
  - Can be used after CodeSearch to pinpoint exact locations for StrReplaceFile.
55
63
  - Supports literal strings and complex regex patterns.
56
64
  - Returns file paths + line numbers + content.
65
+
57
66
  - **ReadFile**: Reading full file content when detailed implementation logic is needed.
58
67
  - **FileSearch**: Finding files by name patterns.
59
68
 
60
69
  **Important**: The codebase is indexed automatically. You don't need to run any indexing commands.
61
70
 
71
+ ## Semantic Search Best Practices
72
+
73
+ CodeSearch uses vector embeddings that capture code BEHAVIOR, not just syntax. Each function is embedded with:
74
+ - Function signatures and docstrings
75
+ - Call graphs (what it calls, who calls it)
76
+ - Complexity metrics (branches, loops, cyclomatic complexity)
77
+ - Data flow patterns (variable usage and transformations)
78
+ - Dependencies (imports, external modules)
79
+ - First ~10 lines of code
80
+
81
+ This means you can find code by BEHAVIOR even without exact keywords:
82
+ - ✅ "reset statistics counter" finds `reset_step_count()` (word "statistics" not in code)
83
+ - ✅ "retry with backoff" finds `_is_retryable_error()` (word "backoff" not in function name)
84
+ - ✅ "load TOML config" finds `load_config_from_string()` (0.76 score - near perfect!)
85
+ - ✅ "execute shell and capture output" finds `run_sh()` which returns `(returncode, stdout, stderr)`
86
+
87
+ **How to write effective queries:**
88
+ 1. Describe WHAT the code does, not HOW: "cache data with expiration" not "redis.setex"
89
+ 2. Be specific but natural: "retry with exponential backoff" finds retry logic better than just "retry"
90
+ 3. Use behavioral terms: "validate and sanitize input", "handle connection errors", "parse configuration"
91
+ 4. Scores 0.65+ are excellent, 0.55-0.65 are good, below 0.55 try rephrasing
92
+
93
+ **Recommended workflow:**
94
+ 1. **Grep** - PREFERRED. Use this first to locate code.
95
+ 2. **CodeSearch** - Use if Grep fails or for behavioral search. Combine with Grep for best results.
96
+ 3. **CodeContext** - Understand the function without reading entire files
97
+ 4. **CodeImpact** - Check who calls it before refactoring
98
+ 5. **StrReplaceFile** - Make the changes
99
+
100
+ **Each tool has its strength:**
101
+ - **CodeSearch**: Discovery by behavior (best for "what does X?")
102
+ - **Grep**: Exact text matching (best for "where is this string?")
103
+ - **CodeContext**: Understanding without reading files (best for "how does X work?")
104
+ - **CodeImpact**: Dependency analysis (best for "who uses X?")
105
+
62
106
  # General Guidelines for Coding
63
107
 
64
108
  When building something from scratch, you should:
axe_cli/app.py CHANGED
@@ -108,16 +108,19 @@ class AxeCLI:
108
108
 
109
109
  model: LLMModel | None = None
110
110
  provider: LLMProvider | None = None
111
+ model_config_name: str | None = None
111
112
 
112
113
  # try to use config file
113
114
  if not model_name and config.default_model:
114
115
  # no --model specified && default model is set in config
115
116
  model = config.models[config.default_model]
116
117
  provider = config.providers[model.provider]
118
+ model_config_name = config.default_model
117
119
  if model_name and model_name in config.models:
118
120
  # --model specified && model is set in config
119
121
  model = config.models[model_name]
120
122
  provider = config.providers[model.provider]
123
+ model_config_name = model_name
121
124
 
122
125
  if not model:
123
126
  model = LLMModel(provider="", model="", max_context_size=100_000)
@@ -142,7 +145,7 @@ class AxeCLI:
142
145
  logger.info("Using LLM model: {model}", model=model)
143
146
  logger.info("Thinking mode: {thinking}", thinking=thinking)
144
147
 
145
- runtime = await Runtime.create(config, llm, session, yolo, skills_dir)
148
+ runtime = await Runtime.create(config, llm, model_config_name, session, yolo, skills_dir)
146
149
 
147
150
  if agent_file is None:
148
151
  agent_file = DEFAULT_AGENT_FILE
@@ -152,17 +155,19 @@ class AxeCLI:
152
155
  await context.restore()
153
156
 
154
157
  soul = AxeSoul(agent, context=context)
155
- return AxeCLI(soul, runtime, env_overrides)
158
+ return AxeCLI(soul, runtime, env_overrides, model)
156
159
 
157
160
  def __init__(
158
161
  self,
159
162
  _soul: AxeSoul,
160
163
  _runtime: Runtime,
161
164
  _env_overrides: dict[str, str],
165
+ _model_config: LLMModel | None = None,
162
166
  ) -> None:
163
167
  self._soul = _soul
164
168
  self._runtime = _runtime
165
169
  self._env_overrides = _env_overrides
170
+ self._model_config = _model_config
166
171
 
167
172
  @property
168
173
  def soul(self) -> AxeSoul:
@@ -288,7 +293,7 @@ class AxeCLI:
288
293
  welcome_info.append(
289
294
  WelcomeInfoItem(
290
295
  name="Model",
291
- value=model_display_name(self._soul.model_name),
296
+ value=model_display_name(self._model_config or self._soul.model_name),
292
297
  level=WelcomeInfoItem.Level.INFO,
293
298
  )
294
299
  )
@@ -0,0 +1,3 @@
1
+ from __future__ import annotations
2
+
3
+ AXE_CODE_PLATFORM_ID = "axe-code"
axe_cli/config.py CHANGED
@@ -14,6 +14,14 @@ from axe_cli.share import get_share_dir
14
14
  from axe_cli.utils.logging import logger
15
15
 
16
16
 
17
+ class OAuthRef(BaseModel):
18
+ storage: Literal["keyring", "file"]
19
+ key: str
20
+
21
+ def __hash__(self):
22
+ return hash((self.storage, self.key))
23
+
24
+
17
25
  class LLMProvider(BaseModel):
18
26
  """LLM provider configuration."""
19
27
 
@@ -27,6 +35,8 @@ class LLMProvider(BaseModel):
27
35
  """Environment variables to set before creating the provider instance"""
28
36
  custom_headers: dict[str, str] | None = None
29
37
  """Custom headers to include in API requests"""
38
+ oauth: OAuthRef | None = None
39
+ """OAuth configuration"""
30
40
  reasoning_key: str | None = None
31
41
  """Key name for reasoning/thinking content in API responses (e.g., 'reasoning' for OpenRouter)."""
32
42
 
@@ -42,6 +52,8 @@ class LLMModel(BaseModel):
42
52
  """Provider name"""
43
53
  model: str
44
54
  """Model name"""
55
+ alias: str | None = None
56
+ """Model alias for nickname switch"""
45
57
  max_context_size: int
46
58
  """Maximum context size (unit: tokens)"""
47
59
  capabilities: set[ModelCapability] | None = None
axe_cli/llm.py CHANGED
@@ -45,10 +45,13 @@ class LLM:
45
45
  return self.chat_provider.model_name
46
46
 
47
47
 
48
- def model_display_name(model_name: str | None) -> str:
49
- if not model_name:
48
+ def model_display_name(model: LLMModel | str | None) -> str:
49
+ """Get display name for a model, using alias if available."""
50
+ if model is None:
50
51
  return ""
51
- return model_name
52
+ if isinstance(model, str):
53
+ return model
54
+ return model.alias or model.model
52
55
 
53
56
 
54
57
  def augment_provider_with_env_vars(provider: LLMProvider, model: LLMModel) -> dict[str, str]: