axe-cli 1.8.2__py3-none-any.whl → 1.8.3__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- axe_cli/README.md +0 -7
- axe_cli/agents/README.md +4 -0
- axe_cli-1.8.3.dist-info/METADATA +686 -0
- {axe_cli-1.8.2.dist-info → axe_cli-1.8.3.dist-info}/RECORD +6 -6
- {axe_cli-1.8.2.dist-info → axe_cli-1.8.3.dist-info}/WHEEL +1 -1
- axe_cli-1.8.2.dist-info/METADATA +0 -492
- {axe_cli-1.8.2.dist-info → axe_cli-1.8.3.dist-info}/entry_points.txt +0 -0
axe_cli/README.md
CHANGED
|
@@ -122,13 +122,6 @@ max_retries_per_step = 3
|
|
|
122
122
|
max_ralph_iterations = 0
|
|
123
123
|
reserved_context_size = 50000
|
|
124
124
|
|
|
125
|
-
[services.search]
|
|
126
|
-
base_url = "https://api.example.com/search"
|
|
127
|
-
api_key = "sk-xxx"
|
|
128
|
-
|
|
129
|
-
[services.fetch]
|
|
130
|
-
base_url = "https://api.example.com/fetch"
|
|
131
|
-
api_key = "sk-xxx"
|
|
132
125
|
|
|
133
126
|
[mcp.client]
|
|
134
127
|
tool_call_timeout_ms = 60000
|
axe_cli/agents/README.md
CHANGED
|
@@ -8,6 +8,10 @@ axe isn't limited to one main agent. You can create subagents and tasks for *any
|
|
|
8
8
|
|
|
9
9
|
Need a dedicated security researcher? A ruthlessly precise code reviewer? A creative copywriter? axe can create and deploy specialized subagents based on your exact requirements. These subagents help you complete tasks better, faster, and more efficiently—operating with lethal precision to divide and conquer complex workflows.
|
|
10
10
|
|
|
11
|
+

|
|
12
|
+
|
|
13
|
+
**Subagents enable parallel task execution:** Spawn multiple specialized agents to work on different aspects of a problem simultaneously, each with their own context and tools.
|
|
14
|
+
|
|
11
15
|
## Built-in agents
|
|
12
16
|
|
|
13
17
|
axe provides two built-in agents. You can select one at startup with the `--agent` flag:
|
|
axe_cli-1.8.3.dist-info/METADATA
ADDED
|
@@ -0,0 +1,686 @@
|
|
|
1
|
+
Metadata-Version: 2.3
|
|
2
|
+
Name: axe-cli
|
|
3
|
+
Version: 1.8.3
|
|
4
|
+
Summary: axe, yerrrr
|
|
5
|
+
Requires-Dist: agent-client-protocol==0.7.0
|
|
6
|
+
Requires-Dist: axe-dig
|
|
7
|
+
Requires-Dist: aiofiles>=24.0,<26.0
|
|
8
|
+
Requires-Dist: aiohttp==3.13.3
|
|
9
|
+
Requires-Dist: typer==0.21.1
|
|
10
|
+
Requires-Dist: kosong[contrib]==0.41.0
|
|
11
|
+
Requires-Dist: loguru>=0.6.0,<0.8
|
|
12
|
+
Requires-Dist: prompt-toolkit==3.0.52
|
|
13
|
+
Requires-Dist: pillow==12.1.0
|
|
14
|
+
Requires-Dist: pyyaml==6.0.3
|
|
15
|
+
Requires-Dist: rich==14.2.0
|
|
16
|
+
Requires-Dist: ripgrepy==2.2.0
|
|
17
|
+
Requires-Dist: streamingjson==0.0.5
|
|
18
|
+
Requires-Dist: trafilatura==2.0.0
|
|
19
|
+
Requires-Dist: lxml==6.0.2
|
|
20
|
+
Requires-Dist: tenacity==9.1.2
|
|
21
|
+
Requires-Dist: fastmcp==2.12.5
|
|
22
|
+
Requires-Dist: pydantic==2.12.5
|
|
23
|
+
Requires-Dist: httpx[socks]==0.28.1
|
|
24
|
+
Requires-Dist: pykaos==0.6.0
|
|
25
|
+
Requires-Dist: batrachian-toad==0.5.23 ; python_full_version >= '3.14'
|
|
26
|
+
Requires-Dist: tomlkit==0.14.0
|
|
27
|
+
Requires-Dist: jinja2==3.1.6
|
|
28
|
+
Requires-Dist: pyobjc-framework-cocoa>=12.1 ; sys_platform == 'darwin'
|
|
29
|
+
Requires-Dist: keyring>=25.7.0
|
|
30
|
+
Requires-Dist: tiktoken>=0.8.0
|
|
31
|
+
Requires-Python: >=3.13
|
|
32
|
+
Description-Content-Type: text/markdown
|
|
33
|
+
|
|
34
|
+
# axe: The Agent Built for Real Engineers
|
|
35
|
+
|
|
36
|
+
**What it really means to be a 10x engineer—and the tool built for that reality.**
|
|
37
|
+
|
|
38
|
+

|
|
39
|
+
|
|
40
|
+
---
|
|
41
|
+
|
|
42
|
+
## What Do You Mean by a "10x Engineer"?
|
|
43
|
+
|
|
44
|
+
The industry loves the lore of "10x engineer"—the lone genius who ships a new product in a weekend, the hacker who rewrites the entire stack in a caffeine-fueled sprint, the visionary who creates something from nothing.
|
|
45
|
+
|
|
46
|
+
**That's not what a 10x engineer actually does.**
|
|
47
|
+
|
|
48
|
+
The real 10x engineers aren't working on greenfield projects. They're not inventing new frameworks or building the next viral app. They're maintaining **behemoth codebases** where millions of users depend on their decisions every single day.
|
|
49
|
+
|
|
50
|
+
Their incentive structure is fundamentally different: **"If it's not broken, don't fix it."**
|
|
51
|
+
|
|
52
|
+
And with that constraint in mind, they ask a different question entirely:
|
|
53
|
+
|
|
54
|
+
> **"What truly matters for solving this particular problem, and how can I gain enough confidence to ship it reliably?"**
|
|
55
|
+
|
|
56
|
+
---
|
|
57
|
+
|
|
58
|
+
## The Real Engineering Challenge
|
|
59
|
+
|
|
60
|
+
**Idea creation is a human trait.** Ideas arise from impulsive feelings, obstacles we encounter, problems we want to solve. Creating something new is exciting, visceral, immediate.
|
|
61
|
+
|
|
62
|
+
**Maintaining something reliably over time requires a completely different discipline**—and it's far more important than creating the idea itself.
|
|
63
|
+
|
|
64
|
+
Consider the reality:
|
|
65
|
+
- A production codebase with **100,000+ lines** across hundreds of files
|
|
66
|
+
- **Millions of users** whose workflows depend on your system staying stable
|
|
67
|
+
- **Years of accumulated complexity**: edge cases, performance optimizations, backwards compatibility
|
|
68
|
+
- **Distributed teams** where no single person understands the entire system
|
|
69
|
+
- **The cost of breaking things** is measured in downtime, lost revenue, and user trust
|
|
70
|
+
|
|
71
|
+
In this environment, the questions that matter are:
|
|
72
|
+
- "If I change this function, what breaks?"
|
|
73
|
+
- "How does this data flow through the system?"
|
|
74
|
+
- "What are all the execution paths that touch this code?"
|
|
75
|
+
- "Where are the hidden dependencies I need to understand before refactoring?"
|
|
76
|
+
|
|
77
|
+
**This is where most coding tools fail.**
|
|
78
|
+
|
|
79
|
+
They're built for the weekend hackathon, the demo video, the "move fast and break things" mentality. They optimize for speed of creation, not confidence in maintenance.
|
|
80
|
+
|
|
81
|
+
---
|
|
82
|
+
|
|
83
|
+
## Enter axe
|
|
84
|
+
|
|
85
|
+
We built axe because we understood this problem intimately. Our team has been maintaining production systems at scale, and we needed a tool that matched the way **real engineering** actually works.
|
|
86
|
+
|
|
87
|
+
**axe is built for large codebases.** Not prototypes. Not "good enough for now" solutions.
|
|
88
|
+
|
|
89
|
+
It's built for the engineer who needs to:
|
|
90
|
+
- Understand a **call graph** before changing a function signature
|
|
91
|
+
- Trace **data flow** to debug a subtle state corruption
|
|
92
|
+
- Analyze **execution paths** to understand why a test fails in CI but not locally
|
|
93
|
+
- Perform **impact analysis** before refactoring to know exactly what depends on what
|
|
94
|
+
|
|
95
|
+
**The core insight:** To ship reliably in large codebases, you need **precise understanding**, not exhaustive reading.
|
|
96
|
+
|
|
97
|
+
|
|
98
|
+
---
|
|
99
|
+
|
|
100
|
+
## How axe Works: Precision Through Intelligence
|
|
101
|
+
|
|
102
|
+
Most coding tools take the brute-force approach: dump your entire codebase into the context window and hope the LLM figures it out.
|
|
103
|
+
|
|
104
|
+
**This is backwards.**
|
|
105
|
+
|
|
106
|
+
axe uses **axe-dig**, a 5-layer code intelligence engine that extracts **exactly what matters** for the task at hand:
|
|
107
|
+
|
|
108
|
+
```
|
|
109
|
+
┌──────────────────────────────────────────────────────────────┐
|
|
110
|
+
│ Layer 5: Program Dependence → "What affects line 42?" │
|
|
111
|
+
│ Layer 4: Data Flow → "Where does this value go?" │
|
|
112
|
+
│ Layer 3: Control Flow → "How complex is this?" │
|
|
113
|
+
│ Layer 2: Call Graph → "Who calls this function?" │
|
|
114
|
+
│ Layer 1: AST → "What functions exist?" │
|
|
115
|
+
└──────────────────────────────────────────────────────────────┘
|
|
116
|
+
```
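For intuition, here is a minimal sketch of what the bottom two layers look like for Python code, using only the standard library's `ast` module. This is purely illustrative and is not axe-dig's actual implementation; the sample source and function names are invented.

```python
import ast
from collections import defaultdict

SOURCE = """
def validate_card(card): ...
def save_transaction(tx): ...
def process_payment(card, amount):
    if validate_card(card):
        tx = {"amount": amount}
        save_transaction(tx)
        return tx
    return None
"""

tree = ast.parse(SOURCE)  # Layer 1: the AST answers "what functions exist?"

# Layer 2: a forward call graph (caller -> callees), built by walking each function body.
call_graph: dict[str, set[str]] = defaultdict(set)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        for inner in ast.walk(node):
            if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                call_graph[node.name].add(inner.func.id)

# Inverting it gives the backward call graph ("who calls this?"), the basis of impact analysis.
callers: dict[str, set[str]] = defaultdict(set)
for caller, callees in call_graph.items():
    for callee in callees:
        callers[callee].add(caller)

print(dict(call_graph))  # {'process_payment': {'validate_card', 'save_transaction'}}
print(dict(callers))     # {'validate_card': {'process_payment'}, 'save_transaction': {'process_payment'}}
```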
|
|
117
|
+
|
|
118
|
+
**This isn't about saving tokens.** It's about **technical precision.**
|
|
119
|
+
|
|
120
|
+
When you need to understand a function, axe-dig gives you:
|
|
121
|
+
- The function signature and what it does
|
|
122
|
+
- **Forward call graph**: What does this function call?
|
|
123
|
+
- **Backward call graph**: Who calls this function?
|
|
124
|
+
- **Control flow complexity**: How many execution paths exist?
|
|
125
|
+
- **Data flow**: How do values transform through this code?
|
|
126
|
+
- **Impact analysis**: What breaks if I change this?
|
|
127
|
+
|
|
128
|
+
Sometimes this means fetching **more context**, not less. When you're debugging a race condition or tracing a subtle bug through multiple layers, axe-dig will pull in the full dependency chain—because **correctness matters more than brevity**.
|
|
129
|
+
|
|
130
|
+
The goal isn't minimalism. **The goal is confidence.**
|
|
131
|
+
|
|
132
|
+
---
|
|
133
|
+
|
|
134
|
+
## A Real Example: Understanding Before Changing
|
|
135
|
+
|
|
136
|
+
Let's say you need to refactor a payment processing function in a production system.
|
|
137
|
+
|
|
138
|
+
**The wrong approach** (most tools):
|
|
139
|
+
1. Read the entire `payment.py` file (4,200 tokens)
|
|
140
|
+
2. Read related files the LLM thinks might be relevant (15,000+ tokens)
|
|
141
|
+
3. Make changes based on incomplete understanding
|
|
142
|
+
4. Hope nothing breaks
|
|
143
|
+
|
|
144
|
+
**The axe approach**:
|
|
145
|
+
|
|
146
|
+
```bash
|
|
147
|
+
# 1. Understand what this function does
|
|
148
|
+
chop context process_payment --project . --depth 2
|
|
149
|
+
|
|
150
|
+
# Result: ~175 tokens
|
|
151
|
+
# - Function signature
|
|
152
|
+
# - What it calls: validate_card, stripe.charge, db.save_transaction
|
|
153
|
+
# - Complexity: 3 decision points
|
|
154
|
+
# - Data flow: card → card_valid → charge → transaction
|
|
155
|
+
```
|
|
156
|
+
|
|
157
|
+
```bash
|
|
158
|
+
# 2. See who depends on this (impact analysis)
|
|
159
|
+
chop impact process_payment .
|
|
160
|
+
|
|
161
|
+
# Result: Shows exactly which functions call this
|
|
162
|
+
# - payment.py: update_subscription (line 134)
|
|
163
|
+
# - subscription.py: renew_subscription (line 45)
|
|
164
|
+
# - tests/test_payment.py: 8 test functions
|
|
165
|
+
```
|
|
166
|
+
|
|
167
|
+
```bash
|
|
168
|
+
# 3. Understand the full execution path
|
|
169
|
+
chop slice src/payment.py process_payment 89
|
|
170
|
+
|
|
171
|
+
# Result: Only the 6 lines that affect the return value
|
|
172
|
+
# Not the entire 180-line function
|
|
173
|
+
```
|
|
174
|
+
|
|
175
|
+
**Now you can refactor with confidence.** You know:
|
|
176
|
+
- What the function does
|
|
177
|
+
- What depends on it
|
|
178
|
+
- What execution paths exist
|
|
179
|
+
- What data flows through it
|
|
180
|
+
|
|
181
|
+
**This is what enables reliable shipping.**
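Conceptually, the `chop slice` step above is a backward traversal of a program dependence graph: start from the line whose value you care about and keep only the lines it transitively depends on. The sketch below uses a hand-built dependence map with invented line numbers, not axe-dig's real analysis.

```python
from collections import deque

# line -> lines it depends on (hypothetical edges for a process_payment function)
depends_on = {
    89: {74, 82},   # the return value depends on the charge result and the saved transaction
    82: {74, 61},
    74: {61, 55},
    61: {55},
    55: set(),
}

def backward_slice(target: int) -> set[int]:
    """Collect every line that can affect `target`, including the target itself."""
    seen, queue = {target}, deque([target])
    while queue:
        line = queue.popleft()
        for dep in depends_on.get(line, set()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(backward_slice(89)))  # [55, 61, 74, 82, 89] -- the handful of lines worth reading
```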
|
|
182
|
+
|
|
183
|
+
---
|
|
184
|
+
|
|
185
|
+
## The axe-dig Difference: Side-by-Side Comparison
|
|
186
|
+
|
|
187
|
+
To demonstrate the precision advantage, we built a minimal CLI agent implementation with basic tools (grep, edit, write, shell) and compared it against the same agent with axe-dig tools.
|
|
188
|
+
|
|
189
|
+
**Note:** These are intentionally minimal implementations to show how phenomenal the axe-dig difference is.
|
|
190
|
+
|
|
191
|
+
### Example 1: Basic Edit Operations
|
|
192
|
+
|
|
193
|
+

|
|
194
|
+
|
|
195
|
+
**Left:** Basic CLI agent with grep
|
|
196
|
+
**Right:** axe CLI with axe-dig
|
|
197
|
+
|
|
198
|
+
The difference is clear. The basic agent searches blindly, while axe-dig understands code structure and dependencies.
|
|
199
|
+
|
|
200
|
+
### Example 2: Understanding Call Flow Tracers
|
|
201
|
+
|
|
202
|
+
When asked to explain how call flow tracking works, both agents found the context—but the results were dramatically different.
|
|
203
|
+
|
|
204
|
+

|
|
205
|
+
|
|
206
|
+
**Left:** Had to read the entire file after grepping for literal strings. **44,000 tokens**.
|
|
207
|
+
**Right:** axe-dig used **17,000 tokens** while also discovering:
|
|
208
|
+
- Call graphs for the decorator used on tracer functions
|
|
209
|
+
- Thread-safe depth tracking mechanisms
|
|
210
|
+
- How functions using this decorator actually work
|
|
211
|
+
|
|
212
|
+
axe-dig didn't just use fewer tokens—it provided **better understanding** of how the code flows.
|
|
213
|
+
|
|
214
|
+
### Example 3: The Compounding Effect
|
|
215
|
+
|
|
216
|
+
The difference compounds with follow-up questions. When we asked about caller information:
|
|
217
|
+
|
|
218
|
+

|
|
219
|
+
|
|
220
|
+
**Left:** Started wrong, inferred wrong, continued wrong.
|
|
221
|
+
**Right:** Had more context and better understanding from the start, leading to precise answers.
|
|
222
|
+
|
|
223
|
+
**This is why axe doesn't just optimize for token savings—it optimizes for what the code actually does and how it flows.**
|
|
224
|
+
|
|
225
|
+
### Example 4: Active Search vs. Passive Explanation
|
|
226
|
+
|
|
227
|
+
In the mlx-lm codebase, when asked how to compute DWQ targets:
|
|
228
|
+
|
|
229
|
+

|
|
230
|
+
|
|
231
|
+
**Left:** Explained the concept generically.
|
|
232
|
+
**Right:** axe CLI actively searched the codebase and found the actual implementation.
|
|
233
|
+
|
|
234
|
+
**Precision means finding the answer in your code, not explaining theory.**
|
|
235
|
+
|
|
236
|
+
---
|
|
237
|
+
|
|
238
|
+
## Token Efficiency: A Consequence, Not the Goal
|
|
239
|
+
|
|
240
|
+
Yes, axe-dig achieves **95% token reduction** compared to reading raw files.
|
|
241
|
+
|
|
242
|
+
| Scenario | Raw Tokens | axe-dig Tokens | Savings |
|
|
243
|
+
|----------|------------|----------------|---------|
|
|
244
|
+
| Function + callees | 21,271 | 175 | **99%** |
|
|
245
|
+
| Codebase overview (26 files) | 103,901 | 11,664 | 89% |
|
|
246
|
+
| Deep call chain (7 files) | 53,474 | 2,667 | 95% |
|
|
247
|
+
|
|
248
|
+
But **this isn't why axe exists.**
|
|
249
|
+
|
|
250
|
+
Token efficiency is a **byproduct of precision**. When you extract only the information needed to make a correct decision, you naturally use fewer tokens than dumping everything.
|
|
251
|
+
|
|
252
|
+
However, axe-dig is **not a token-saving machine**. When the situation demands it—when you need to trace a complex bug through multiple layers, when you need to understand how a feature connects throughout the codebase—axe-dig will fetch **more context**, not less.
|
|
253
|
+
|
|
254
|
+
**The principle:** Fetch exactly what's needed for technical precision. Sometimes that's 175 tokens. Sometimes it's 15,000 tokens. The difference is **intentionality**.
|
|
255
|
+
|
|
256
|
+
Other tools are incentivized to burn tokens (they charge per token). axe is incentivized to **get the answer right**.
|
|
257
|
+
|
|
258
|
+
---
|
|
259
|
+
|
|
260
|
+
## Built for Local Intelligence
|
|
261
|
+
|
|
262
|
+
axe was designed with **local compute and local LLMs** in mind.
|
|
263
|
+
|
|
264
|
+
Why does this matter?
|
|
265
|
+
|
|
266
|
+
Local LLMs have different constraints than cloud APIs:
|
|
267
|
+
- **Slower prefill and decoding** (can't waste time on irrelevant context)
|
|
268
|
+
- **Smaller context windows** (need precision, not bloat)
|
|
269
|
+
- **No per-token billing** (optimization is about speed and accuracy, not cost)
|
|
270
|
+
|
|
271
|
+
This forced us to build a **precise retrieval engine** from the ground up. We couldn't rely on "dump everything and let the cloud LLM figure it out."
|
|
272
|
+
|
|
273
|
+
The result: **axe works brilliantly with both local and cloud models**, because precision benefits everyone.
|
|
274
|
+
|
|
275
|
+
### Running Locally: Real-World Performance
|
|
276
|
+
|
|
277
|
+
Here's axe running with **srswti/blackbird-she-doesnt-refuse-21b**—a 21B parameter model from our Bodega collection, running entirely locally:
|
|
278
|
+
|
|
279
|
+

|
|
280
|
+
|
|
281
|
+
**Hardware:** M1 Max, 64GB RAM
|
|
282
|
+
**Model:** Bodega Blackbird 21B (local inference)
|
|
283
|
+
**Performance:** Spawning subagents, parallel task execution, full agentic capabilities
|
|
284
|
+
|
|
285
|
+
As you can see, the capability of axe-optimized Bodega models running locally is exceptional. The precision retrieval engine means even local models can handle complex workflows efficiently—because they're not wasting compute on irrelevant context.
|
|
286
|
+
|
|
287
|
+
---
|
|
288
|
+
|
|
289
|
+
## The axe-dig Advantage: Semantic Search
|
|
290
|
+
|
|
291
|
+
Traditional search finds syntax. **axe-dig semantic search finds behavior.**
|
|
292
|
+
|
|
293
|
+
```bash
|
|
294
|
+
# Traditional grep
|
|
295
|
+
grep "cache" src/ # Finds: variable names, comments, "cache_dir"
|
|
296
|
+
|
|
297
|
+
# axe-dig semantic search
|
|
298
|
+
chop semantic search "memoize expensive computations with TTL expiration"
|
|
299
|
+
|
|
300
|
+
# Finds: get_user_profile() because:
|
|
301
|
+
# - It calls redis.get() and redis.setex() (caching pattern)
|
|
302
|
+
# - Has TTL parameter in redis.setex call
|
|
303
|
+
# - Called by functions that do expensive DB queries
|
|
304
|
+
# Even though it doesn't mention "memoize" or "TTL"
|
|
305
|
+
```
|
|
306
|
+
|
|
307
|
+
Every function gets embedded with:
|
|
308
|
+
- Signature and docstring
|
|
309
|
+
- **Forward and backward call graphs**
|
|
310
|
+
- **Complexity metrics** (branches, loops, cyclomatic complexity)
|
|
311
|
+
- **Data flow patterns** (variables used and transformed)
|
|
312
|
+
- **Dependencies** (imports, external modules)
|
|
313
|
+
- First ~10 lines of implementation
|
|
314
|
+
|
|
315
|
+
This gets encoded into **1024-dimensional embeddings**, indexed with FAISS for fast similarity search.
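As an illustration of the indexing side (the 1024-dimensional vectors are assumed to come from some code/text encoder, which is out of scope here), a cosine-similarity FAISS index is only a few lines:

```python
import faiss
import numpy as np

dim = 1024
# Stand-in embeddings: one row per function summary (signature + call graph + metrics + code).
function_vectors = np.random.rand(500, dim).astype("float32")
faiss.normalize_L2(function_vectors)  # normalize so inner product == cosine similarity

index = faiss.IndexFlatIP(dim)
index.add(function_vectors)

# Embedding of a behavioral query such as "memoize expensive computations with TTL expiration".
query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)  # top-5 most behaviorally similar functions
print(ids[0], scores[0])
```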
|
|
316
|
+
|
|
317
|
+
**Find code by what it does, not what it's named.**
|
|
318
|
+
|
|
319
|
+
---
|
|
320
|
+
|
|
321
|
+
## Daemon Architecture: 300x Faster Queries
|
|
322
|
+
|
|
323
|
+
**The old way:** Every query spawns a new process, parses the entire codebase, builds indexes, returns result, exits. ~30 seconds.
|
|
324
|
+
|
|
325
|
+
**axe-dig's daemon:** Long-running background process with indexes in RAM. ~100ms.
|
|
326
|
+
|
|
327
|
+
| Command | Daemon | CLI Spawn | Speedup |
|
|
328
|
+
|---------|--------|-----------|---------|
|
|
329
|
+
| `search` | 0.2ms | 72ms | **302x** |
|
|
330
|
+
| `extract` | 9ms | 97ms | **11x** |
|
|
331
|
+
| `impact` | 0.2ms | 1,129ms | **7,374x** |
|
|
332
|
+
| `structure` | 0.6ms | 181ms | **285x** |
|
|
333
|
+
| **Average** | **10ms** | **1,555ms** | **155x** |
|
|
334
|
+
|
|
335
|
+
**Why `impact` shows 7,374x speedup:** The CLI must rebuild the entire call graph from scratch on every invocation (~1.1 seconds). The daemon keeps the call graph in memory, so queries return in <1ms.
|
|
336
|
+
|
|
337
|
+
**Incremental updates:** When you edit one function, axe-dig doesn't re-analyze the entire codebase. Content-hash-based caching with automatic dependency tracking means **10x faster incremental updates**.
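A stripped-down sketch of the content-hash idea (illustrative only; the cache path is made up and the dependency tracking is simplified away): hash each file's bytes and re-analyze only the files whose hash changed since the previous pass.

```python
import hashlib
import json
from pathlib import Path

CACHE_FILE = Path(".dig-cache.json")  # hypothetical location for this sketch

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(root: Path) -> list[Path]:
    """Return only the Python files whose content changed since the last run."""
    old = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    new, changed = {}, []
    for path in sorted(root.rglob("*.py")):
        digest = file_hash(path)
        new[str(path)] = digest
        if old.get(str(path)) != digest:
            changed.append(path)  # only these need re-parsing and re-indexing
    CACHE_FILE.write_text(json.dumps(new))
    return changed

print(changed_files(Path("src")))
```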
|
|
338
|
+
|
|
339
|
+
---
|
|
340
|
+
|
|
341
|
+
## Real-World Workflow: Debugging a Production Bug
|
|
342
|
+
|
|
343
|
+
**Scenario:** Users sometimes get stale data even after updates.
|
|
344
|
+
|
|
345
|
+
```bash
|
|
346
|
+
# 1. Find where the error occurs
|
|
347
|
+
chop search "get_user" src/
|
|
348
|
+
|
|
349
|
+
# 2. Get the program slice (what code affects the return value?)
|
|
350
|
+
chop slice src/db.py get_user 45
|
|
351
|
+
|
|
352
|
+
# Output shows only the 6 relevant lines:
|
|
353
|
+
# 12: cached = redis.get(f"user:{user_id}")
|
|
354
|
+
# 15: if cached:
|
|
355
|
+
# 18: return json.loads(cached)
|
|
356
|
+
# 23: user = db.query("SELECT * FROM users WHERE id = ?", user_id)
|
|
357
|
+
# 34: redis.setex(f"user:{user_id}", 3600, json.dumps(user))
|
|
358
|
+
# 45: return user
|
|
359
|
+
|
|
360
|
+
# 3. Find who calls this (to see if cache is invalidated on update)
|
|
361
|
+
chop impact get_user src/
|
|
362
|
+
|
|
363
|
+
# 4. Search for update functions
|
|
364
|
+
chop semantic search "update user data in database" src/
|
|
365
|
+
# Finds update_user() but it doesn't call redis.delete()!
|
|
366
|
+
|
|
367
|
+
# 5. Check data flow in update_user
|
|
368
|
+
chop dfg src/db.py update_user
|
|
369
|
+
# Shows: updates DB but never invalidates cache
|
|
370
|
+
```
|
|
371
|
+
|
|
372
|
+
**Result:** Found the bug—cache invalidation missing in `update_user()`. 4 commands, 3 minutes.
|
|
373
|
+
|
|
374
|
+
**Before axe-dig:** Read multiple files, manually trace execution, spend 45 minutes debugging, maybe miss the issue entirely.
|
|
375
|
+
|
|
376
|
+
**With axe-dig:** Surgical precision. Confidence to ship the fix.
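The fix itself is small once the slice has exposed it: invalidate the cache key whenever the row changes. A hedged sketch using redis-py; the `db` helper and the key scheme mirror the slice above but are stand-ins, not the project's real code.

```python
import redis

r = redis.Redis()  # the same cache get_user() reads from in the slice above

def update_user(db, user_id: int, name: str) -> None:
    db.execute("UPDATE users SET name = ? WHERE id = ?", name, user_id)  # db is a stand-in helper
    # The step the data-flow check showed was missing: drop the cached copy so the
    # next get_user() call re-reads the database instead of serving stale data.
    r.delete(f"user:{user_id}")
```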
|
|
377
|
+
|
|
378
|
+
---
|
|
379
|
+
|
|
380
|
+
## Core Capabilities
|
|
381
|
+
|
|
382
|
+
### Code Intelligence (powered by axe-dig)
|
|
383
|
+
|
|
384
|
+
| Tool | What it does | Use case |
|
|
385
|
+
|------|-------------|----------|
|
|
386
|
+
| **CodeSearch** | Semantic search by behavior | "Find payment processing logic" |
|
|
387
|
+
| **CodeContext** | LLM-ready function summaries with call graphs | Understand unfamiliar code |
|
|
388
|
+
| **CodeStructure** | Navigate functions/classes in files/dirs | Explore new codebases |
|
|
389
|
+
| **CodeImpact** | Reverse call graph (who calls this?) | Safe refactoring |
|
|
390
|
+
|
|
391
|
+
### File Operations
|
|
392
|
+
- `ReadFile` / `WriteFile` / `StrReplaceFile` - Standard file I/O
|
|
393
|
+
- `Grep` - Exact file locations + line numbers (use after CodeSearch)
|
|
394
|
+
- `Glob` - Pattern matching
|
|
395
|
+
- `ReadMediaFile` - Images, PDFs, videos
|
|
396
|
+
|
|
397
|
+
### Multi-Agent Workflows
|
|
398
|
+
- `Task` - Spawn subagents for parallel work
|
|
399
|
+
- `CreateSubagent` - Custom agent specs
|
|
400
|
+
- `SetTodoList` - Track multi-step tasks
|
|
401
|
+
|
|
402
|
+
**Subagents in action:**
|
|
403
|
+
|
|
404
|
+

|
|
405
|
+
|
|
406
|
+
Spawn specialized subagents to divide and conquer complex workflows. Each subagent operates independently with its own context and tools.
|
|
407
|
+
|
|
408
|
+
---
|
|
409
|
+
|
|
410
|
+
## Quick Start
|
|
411
|
+
|
|
412
|
+
### Install
|
|
413
|
+
```bash
|
|
414
|
+
# Install axe-cli (includes axe-dig)
|
|
415
|
+
uv pip install axe-cli
|
|
416
|
+
|
|
417
|
+
# Or from source
|
|
418
|
+
git clone https://github.com/SRSWTI/axe-cli
|
|
419
|
+
cd axe-cli
|
|
420
|
+
make prepare
|
|
421
|
+
make build
|
|
422
|
+
```
|
|
423
|
+
|
|
424
|
+
### Run
|
|
425
|
+
```bash
|
|
426
|
+
cd /path/to/your/project
|
|
427
|
+
axe
|
|
428
|
+
```
|
|
429
|
+
|
|
430
|
+
On first run, axe-dig automatically indexes your codebase (30-60 seconds for typical projects). After that, queries are instant.
|
|
431
|
+
|
|
432
|
+
### Start Using
|
|
433
|
+
```bash
|
|
434
|
+
# greet axe
|
|
435
|
+
hiii
|
|
436
|
+
|
|
437
|
+
# start coding
|
|
438
|
+
hey axe, can you tell me how dwq targets are computed in mlx
|
|
439
|
+
|
|
440
|
+
# Toggle to shell mode
|
|
441
|
+
[Ctrl+X]
|
|
442
|
+
pytest tests/
|
|
443
|
+
[Ctrl+X]
|
|
444
|
+
```
|
|
445
|
+
Hit **Ctrl+X** to toggle between axe and your normal shell. No context switching. No juggling terminals.
|
|
446
|
+
|
|
447
|
+

|
|
448
|
+
|
|
449
|
+
---
|
|
450
|
+
|
|
451
|
+
## Powered by SRSWTI Inc.
|
|
452
|
+
|
|
453
|
+
**Building the world's fastest retrieval and inference engines.**
|
|
454
|
+
|
|
455
|
+
### Bodega Inference Engine
|
|
456
|
+
|
|
457
|
+
Exclusive models trained and optimized for the Bodega Inference Engine. axe includes **zero-day support** for all Bodega models, ensuring immediate access to our latest breakthroughs.
|
|
458
|
+
|
|
459
|
+
**Note:** Our models are also available on [🤗 Hugging Face](https://huggingface.co/srswti).
|
|
460
|
+
|
|
461
|
+
#### Raptor Series
|
|
462
|
+
Ultra-compact reasoning models designed for efficiency and edge deployment. **Super light**, amazing agentic coding capabilities, robust tool support, minimal memory footprint.
|
|
463
|
+
|
|
464
|
+
- [🤗 **bodega-raptor-0.9b**](https://huggingface.co/srswti/bodega-raptor-0.9b) - 900M params. Runs on base m4 air with 100+ tok/s.
|
|
465
|
+
- [🤗 **bodega-raptor-90m**](https://huggingface.co/srswti/bodega-raptor-90m) - Extreme edge variant. Sub-100M params for amazing tool calling.
|
|
466
|
+
- [🤗 **bodega-raptor-1b-reasoning-opus4.5-distill**](https://huggingface.co/srswti/bodega-raptor-1b-reasoning-opus4.5-distill) - Distilled from Claude Opus 4.5 reasoning patterns.
|
|
467
|
+
- [🤗 **bodega-raptor-8b-mxfp4**](https://huggingface.co/srswti/bodega-raptor-8b-mxfp4) - Balanced power/performance for laptops.
|
|
468
|
+
- [🤗 **bodega-raptor-15b-6bit**](https://huggingface.co/srswti/bodega-raptor-15b-6bit) - Enhanced raptor variant.
|
|
469
|
+
|
|
470
|
+
#### Flagship Models
|
|
471
|
+
Frontier intelligence, distilled and optimized.
|
|
472
|
+
|
|
473
|
+
- [🤗 **deepseek-v3.2-speciale-distilled-raptor-32b-4bit**](https://huggingface.co/srswti/deepseek-v3.2-speciale-distilled-raptor-32b-4bit) - DeepSeek V3.2 distilled to 32B with Raptor reasoning. Exceptional math/code generation in 5-7GB footprint. 120 tok/s on M1 Max.
|
|
474
|
+
- [🤗 **bodega-centenario-21b-mxfp4**](https://huggingface.co/srswti/bodega-centenario-21b-mxfp4) - Production workhorse. 21B params optimized for sustained inference workloads.
|
|
475
|
+
- [🤗 **bodega-solomon-9b**](https://huggingface.co/srswti/bodega-solomon-9b) - Multimodal and best for agentic coding.
|
|
476
|
+
|
|
477
|
+
#### Axe-Turbo Series
|
|
478
|
+
**Launched specifically for the Axe coding use case.** High-performance agentic coding models optimized for the Axe ecosystem.
|
|
479
|
+
|
|
480
|
+
- [🤗 **axe-turbo-1b**](https://huggingface.co/srswti/axe-turbo-1b) - 1B params, 150 tok/s, sub-50ms first token. Edge-first architecture.
|
|
481
|
+
- [🤗 **axe-turbo-31b**](https://huggingface.co/srswti/axe-turbo-31b) - High-capacity workloads. Exceptional agentic capabilities.
|
|
482
|
+
|
|
483
|
+
#### Specialized Models
|
|
484
|
+
Task-specific optimization.
|
|
485
|
+
|
|
486
|
+
- [🤗 **bodega-vertex-4b**](https://huggingface.co/srswti/bodega-vertex-4b) - 4B params. Optimized for structured data.
|
|
487
|
+
- [🤗 **blackbird-she-doesnt-refuse-21b**](https://huggingface.co/srswti/blackbird-she-doesnt-refuse-21b) - Uncensored 21B variant for unrestricted generation.
|
|
488
|
+
|
|
489
|
+
### Using Bodega Models
|
|
490
|
+
|
|
491
|
+
Configure Bodega in `~/.axe/config.toml`:
|
|
492
|
+
|
|
493
|
+
```toml
|
|
494
|
+
default_model = "bodega-raptor"
|
|
495
|
+
|
|
496
|
+
[providers.bodega]
|
|
497
|
+
type = "bodega"
|
|
498
|
+
base_url = "http://localhost:44468" # Local Bodega server
|
|
499
|
+
api_key = ""
|
|
500
|
+
|
|
501
|
+
[models.bodega-raptor]
|
|
502
|
+
provider = "bodega"
|
|
503
|
+
model = "srswti/bodega-raptor-8b-mxfp4"
|
|
504
|
+
max_context_size = 32768
|
|
505
|
+
capabilities = ["thinking"]
|
|
506
|
+
|
|
507
|
+
[models.bodega-turbo]
|
|
508
|
+
provider = "bodega"
|
|
509
|
+
model = "srswti/axe-turbo-31b"
|
|
510
|
+
max_context_size = 32768
|
|
511
|
+
capabilities = ["thinking"]
|
|
512
|
+
```
|
|
513
|
+
|
|
514
|
+
See [sample_config.toml](sample_config.toml) for more examples including OpenRouter, Anthropic, and OpenAI configurations.
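For reference, reading a config shaped like the example above needs nothing beyond Python's standard library (axe itself depends on `tomlkit`; this sketch just uses `tomllib` to show the structure of the data):

```python
import tomllib
from pathlib import Path

config_path = Path.home() / ".axe" / "config.toml"
with config_path.open("rb") as fh:  # tomllib requires a binary file handle
    config = tomllib.load(fh)

default = config["default_model"]                 # "bodega-raptor"
model = config["models"][default]
provider = config["providers"][model["provider"]]
print(model["model"], "via", provider["base_url"])  # srswti/bodega-raptor-8b-mxfp4 via http://localhost:44468
```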
|
|
515
|
+
|
|
516
|
+
---
|
|
517
|
+
|
|
518
|
+
## Documentation Index
|
|
519
|
+
|
|
520
|
+
We've organized the docs to make them digestible. Here's what's where:
|
|
521
|
+
|
|
522
|
+
### [Common Use Cases & Workflows](examples/README.md)
|
|
523
|
+
Learn how to use axe for implementing features, fixing bugs, understanding unfamiliar code, and automating tasks. Includes real workflow examples for debugging, refactoring, and exploration.
|
|
524
|
+
|
|
525
|
+
### [Built-in Tools](src/axe_cli/tools/README.md)
|
|
526
|
+
Complete reference for all available tools: file operations, shell commands, multi-agent tasks, and the axe-dig code intelligence tools. Every tool is designed for precision, not guesswork.
|
|
527
|
+
|
|
528
|
+
### [Agent Skills](src/axe_cli/skills/README.md)
|
|
529
|
+
How to create and use specialized skills to extend axe's capabilities. Skills are reusable workflows and domain expertise that you can invoke with `/skill:name` commands. Turn your team's best practices into executable knowledge.
|
|
530
|
+
|
|
531
|
+
### [Agents & Subagents](src/axe_cli/agents/README.md)
|
|
532
|
+
Guide to creating custom agents and spawning specialized subagents for parallel work. These subagents operate with precision to divide and conquer complex workflows.
|
|
533
|
+
|
|
534
|
+
### [Technical Reference](src/axe_cli/README.md)
|
|
535
|
+
Deep dive into configuration (providers, models, loop control), session management, architecture, and MCP integration. Everything you need to customize axe for your workflow.
|
|
536
|
+
|
|
537
|
+
### [axe-dig: Code Intelligence Engine](docs/AXE-DIG.md)
|
|
538
|
+
**The secret weapon.** Complete documentation on axe-dig's 5-layer architecture, semantic search, daemon mode, and program slicing. Learn how to extract precise context while preserving everything needed for correct edits. Includes performance benchmarks, real-world debugging workflows, and the design rationale behind every choice. **This is what makes axe different from every other coding tool.**
|
|
539
|
+
|
|
540
|
+
---
|
|
541
|
+
|
|
542
|
+
## What's Coming
|
|
543
|
+
|
|
544
|
+
Our internal team has been using features that will change the game:
|
|
545
|
+
|
|
546
|
+
### 1. Interactive Dashboard: Visualize Your Codebase
|
|
547
|
+
|
|
548
|
+
Understanding code isn't just about reading—it's about **seeing** the structure, connections, and flow.
|
|
549
|
+
|
|
550
|
+
The dashboard provides real-time visualization for:
|
|
551
|
+
|
|
552
|
+
**Code Health Analysis:**
|
|
553
|
+
- **Cyclic dependencies**: Visualize circular imports and dependency loops that make refactoring dangerous
|
|
554
|
+
- **Dead code detection**: See unreachable functions and unused modules with connection graphs
|
|
555
|
+
- **Safe refactoring zones**: Identify code that can be changed without cascading effects
|
|
556
|
+
- **Execution trace visualization**: Watch the actual flow of data through your system at runtime
|
|
557
|
+
|
|
558
|
+
**Debugging Workflows:**
|
|
559
|
+
- Trace execution paths visually from entry point to crash
|
|
560
|
+
- See which functions are called, in what order, with what values
|
|
561
|
+
- Identify bottlenecks and performance hotspots in the call graph
|
|
562
|
+
- Understand data transformations across multiple layers
|
|
563
|
+
|
|
564
|
+
The dashboard turns axe-dig's 5-layer analysis into interactive, explorable visualizations. No more drawing diagrams on whiteboards—axe generates them from your actual code.
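The cyclic-dependency view in particular maps onto a classic graph problem. As a rough illustration only (the import edges are invented and this is not the dashboard's implementation), once the dependency graph exists, finding the loops is straightforward:

```python
import networkx as nx

# Directed import graph: an edge a -> b means "module a imports module b".
imports = nx.DiGraph([
    ("payment", "models"),
    ("models", "utils"),
    ("utils", "payment"),  # the loop that makes refactoring dangerous
    ("auth", "models"),
])

for cycle in nx.simple_cycles(imports):
    print(" -> ".join(cycle + [cycle[0]]))  # payment -> models -> utils -> payment
```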
|
|
565
|
+
|
|
566
|
+
### 2. Execution Tracing
|
|
567
|
+
|
|
568
|
+
See what actually happened at runtime. No more guessing why a test failed.
|
|
569
|
+
|
|
570
|
+
```bash
|
|
571
|
+
# Trace a failing test
|
|
572
|
+
/trace pytest tests/test_payment.py::test_refund
|
|
573
|
+
|
|
574
|
+
# Shows exact values that flowed through each function:
|
|
575
|
+
# process_refund(amount=Decimal("50.00"), transaction_id="tx_123")
|
|
576
|
+
# → validate_refund(transaction=Transaction(status="completed"))
|
|
577
|
+
# → check_refund_window(created_at=datetime(2024, 1, 15))
|
|
578
|
+
# → datetime.now() - created_at = timedelta(days=45)
|
|
579
|
+
# → raised RefundWindowExpired # ← 30-day window exceeded
|
|
580
|
+
```
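Execution tracing is still a planned feature, so there is no public API to show yet. The underlying idea, recording which functions ran and with what argument values, can be sketched with the standard library's `sys.settrace`; the refund functions below are toy stand-ins, not axe's code.

```python
import sys

def trace_calls(frame, event, arg):
    if event == "call":
        print(f"-> {frame.f_code.co_name}({dict(frame.f_locals)})")
    return trace_calls  # keep tracing nested calls

def check_refund_window(days_since_purchase: int) -> bool:
    return days_since_purchase <= 30

def process_refund(amount: float, days_since_purchase: int) -> str:
    if not check_refund_window(days_since_purchase):
        raise ValueError("RefundWindowExpired")
    return f"refunded {amount}"

sys.settrace(trace_calls)
try:
    process_refund(50.00, 45)  # prints each call with the real values that flowed through it
except ValueError as exc:
    print("raised", exc)       # RefundWindowExpired -- the 30-day window was exceeded
finally:
    sys.settrace(None)
```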
|
|
581
|
+
|
|
582
|
+
### 3. Monorepo Factoring (Enterprise Feature)
|
|
583
|
+
|
|
584
|
+
**Status:** Under active development. Our team has been using this internally for weeks.
|
|
585
|
+
|
|
586
|
+
Large monorepos become unmaintainable when everything is tangled together. axe analyzes your codebase and automatically factors it into logical modules based on:
|
|
587
|
+
|
|
588
|
+
- **Dependency analysis**: Which code actually depends on what
|
|
589
|
+
- **Call graph clustering**: Functions that work together, grouped together
|
|
590
|
+
- **Data flow boundaries**: Natural separation points in your architecture
|
|
591
|
+
- **Usage patterns**: How different parts of the codebase are actually used
|
|
592
|
+
|
|
593
|
+
**The result:** Clear module boundaries, reduced coupling, easier maintenance. This has been heavily requested by enterprise customers managing multi-million-line monorepos.
|
|
594
|
+
|
|
595
|
+
**Example workflow:**
|
|
596
|
+
```bash
|
|
597
|
+
# Analyze current structure
|
|
598
|
+
/monorepo analyze .
|
|
599
|
+
|
|
600
|
+
# Shows: 47 logical modules detected across 1,200 files
|
|
601
|
+
# Suggests: Split into 5 packages with clear boundaries
|
|
602
|
+
# Impact: Reduces cross-module dependencies by 73%
|
|
603
|
+
|
|
604
|
+
# Apply factoring
|
|
605
|
+
/monorepo factor --target packages/
|
|
606
|
+
```
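Under stated assumptions (files as nodes, call/import relations as edges), the clustering step can be pictured with an off-the-shelf community-detection pass; this toy networkx sketch is not the real factoring algorithm.

```python
import networkx as nx
from networkx.algorithms import community

# Nodes are files; edges are "calls into / imports from" relations taken from the call graph.
G = nx.Graph()
G.add_edges_from([
    ("payment.py", "stripe_client.py"),
    ("payment.py", "models.py"),
    ("subscription.py", "payment.py"),
    ("auth.py", "models.py"),
    ("auth.py", "sessions.py"),
    ("admin.py", "auth.py"),
])

# Each detected community is a candidate package with minimal cross-module coupling.
for i, files in enumerate(community.greedy_modularity_communities(G), start=1):
    print(f"package {i}: {sorted(files)}")
```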
|
|
607
|
+
|
|
608
|
+
### 4. Language Migration (X → Y)
|
|
609
|
+
|
|
610
|
+
Migrating codebases between languages is notoriously error-prone. axe uses its deep understanding of code structure to enable reliable migrations:
|
|
611
|
+
|
|
612
|
+
**How it works:**
|
|
613
|
+
1. **Analyze source code**: Extract call graphs, data flow, and business logic
|
|
614
|
+
2. **Preserve semantics**: Understand what the code does, not just what it says
|
|
615
|
+
3. **Generate target code**: Translate to the new language while maintaining behavior
|
|
616
|
+
4. **Verify correctness**: Compare execution traces and test coverage
|
|
617
|
+
|
|
618
|
+
**Supported migrations:**
|
|
619
|
+
- Python → TypeScript (preserve type safety)
|
|
620
|
+
- JavaScript → Go (maintain concurrency patterns)
|
|
621
|
+
- Ruby → Rust (keep performance characteristics)
|
|
622
|
+
- Java → Kotlin (modernize while preserving architecture)
|
|
623
|
+
|
|
624
|
+
Unlike simple transpilers, axe understands your code's **intent** and translates it idiomatically to the target language.
|
|
625
|
+
|
|
626
|
+
### 5. Performance Debugging
|
|
627
|
+
|
|
628
|
+
Flame graphs and memory profiling integrated directly in the chat interface.
|
|
629
|
+
|
|
630
|
+
```bash
|
|
631
|
+
# Generate flame graph
|
|
632
|
+
/flamegraph api_server.py
|
|
633
|
+
|
|
634
|
+
# Find memory leaks
|
|
635
|
+
/memory-profile background_worker.py
|
|
636
|
+
```
|
|
637
|
+
|
|
638
|
+
### 6. Smart Test Selection
|
|
639
|
+
|
|
640
|
+
```bash
|
|
641
|
+
# Only run tests affected by your changes
|
|
642
|
+
/test-impact src/payment/processor.py
|
|
643
|
+
|
|
644
|
+
# Shows: 12 tests need to run (not all 1,847)
|
|
645
|
+
```
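Conceptually this is a reachability question over the backward call graph: which test functions can transitively reach the changed code? A minimal sketch with a hand-written graph and hypothetical names:

```python
from collections import deque

# Backward call graph: function -> the functions that call it.
called_by = {
    "process_payment": {"update_subscription", "test_charge", "test_refund"},
    "update_subscription": {"test_renewal"},
    "test_charge": set(), "test_refund": set(), "test_renewal": set(),
}

def affected_tests(changed: str) -> set[str]:
    seen, queue = {changed}, deque([changed])
    while queue:
        fn = queue.popleft()
        for caller in called_by.get(fn, set()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return {fn for fn in seen if fn.startswith("test_")}

print(sorted(affected_tests("process_payment")))  # ['test_charge', 'test_refund', 'test_renewal']
```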
|
|
646
|
+
|
|
647
|
+
---
|
|
648
|
+
|
|
649
|
+
## Supported Languages
|
|
650
|
+
|
|
651
|
+
Python, TypeScript, JavaScript, Go, Rust, Java, C, C++, Ruby, PHP, C#, Kotlin, Scala, Swift, Lua, Elixir
|
|
652
|
+
|
|
653
|
+
Language auto-detected. Specify with `--lang` if needed.
|
|
654
|
+
|
|
655
|
+
---
|
|
656
|
+
|
|
657
|
+
## Comparison
|
|
658
|
+
|
|
659
|
+
| Feature | Claude Code | OpenAI Codex | axe |
|
|
660
|
+
|---------|-------------|--------------|-----|
|
|
661
|
+
| **Built for** | Weekend projects | Demos | Production codebases |
|
|
662
|
+
| **Context strategy** | Dump everything | Dump everything | Extract signal (precision-first) |
|
|
663
|
+
| **Code search** | Text/regex | Text/regex | Semantic (behavior-based) |
|
|
664
|
+
| **Call graph analysis** | ❌ | ❌ | ✅ 5-layer analysis |
|
|
665
|
+
| **Precision optimization** | ❌ (incentivized to waste) | ❌ (incentivized to waste) | ✅ Fetch what's needed for correctness |
|
|
666
|
+
| **Execution tracing** | ❌ | ❌ | ✅ Coming soon |
|
|
667
|
+
| **Flame graphs** | ❌ | ❌ | ✅ Coming soon |
|
|
668
|
+
| **Memory profiling** | ❌ | ❌ | ✅ Coming soon |
|
|
669
|
+
| **Visual debugging** | ❌ | ❌ | ✅ Coming soon |
|
|
670
|
+
| **Shell integration** | ❌ | ❌ | ✅ Ctrl+X toggle |
|
|
671
|
+
| **Session management** | Limited | Limited | ✅ Full history + replay |
|
|
672
|
+
| **Skills system** | ❌ | ❌ | ✅ Modular, extensible |
|
|
673
|
+
| **Subagents** | ❌ | ❌ | ✅ Parallel task execution |
|
|
674
|
+
| **Battle-tested** | Public beta | Public API | 6 months internal use |
|
|
675
|
+
|
|
676
|
+
---
|
|
677
|
+
|
|
678
|
+
## Community
|
|
679
|
+
|
|
680
|
+
- **Issues**: [GitHub Issues](https://github.com/SRSWTI/axe-cli/issues)
|
|
681
|
+
- **Discussions**: [GitHub Discussions](https://github.com/SRSWTI/axe-cli/discussions)
|
|
682
|
+
- **Docs**: [Full documentation](https://axe-cli.dev/docs)
|
|
683
|
+
|
|
684
|
+
## Acknowledgements
|
|
685
|
+
|
|
686
|
+
Special thanks to [MoonshotAI/kimi-cli](https://github.com/MoonshotAI/kimi-cli) for their amazing work, which inspired our tools and the Kosong provider.
|
|
{axe_cli-1.8.2.dist-info → axe_cli-1.8.3.dist-info}/RECORD
CHANGED
|
@@ -1,5 +1,5 @@
|
|
|
1
1
|
axe_cli/CHANGELOG.md,sha256=SGSnQvv2yk7kG6yVGqS8q44PLc9g4BcaCxXkFT_rMNw,16
|
|
2
|
-
axe_cli/README.md,sha256=
|
|
2
|
+
axe_cli/README.md,sha256=fLu7kWkfAbm_48VsAXXUD5yYAfMeq5VlMc-iAdZVEag,10587
|
|
3
3
|
axe_cli/__init__.py,sha256=nnr9vBuaOPg3n4pkn07f4cvBIcjtqwTtyK26CKjSFCI,206
|
|
4
4
|
axe_cli/acp/AGENTS.md,sha256=mWGID3bdGIIdlWGaLH09yN7KgQ_1mCPpqHjRUdXTjfc,4703
|
|
5
5
|
axe_cli/acp/__init__.py,sha256=pDSEbgArG80xXe482DRrs4mt9lElrmFsw3Ss51lvTNU,391
|
|
@@ -10,7 +10,7 @@ axe_cli/acp/server.py,sha256=ABCOQZGofXA2XlvwRPiNnf5BSNg-TUi5JGycWIwQKI0,14213
|
|
|
10
10
|
axe_cli/acp/session.py,sha256=92iMe_an-SWZpaIpXTv5auzVQLKzHHHx2bwYUsIxdlk,16952
|
|
11
11
|
axe_cli/acp/tools.py,sha256=1HAhkr-ywLcWMnnrYiNidSH5rSwewzb_Yc6D-vbiJgU,5833
|
|
12
12
|
axe_cli/acp/types.py,sha256=XpFjCPTAkmK-_NtFsL1C82UlQ2O88ngGmEaRp8bFgPQ,348
|
|
13
|
-
axe_cli/agents/README.md,sha256=
|
|
13
|
+
axe_cli/agents/README.md,sha256=h63I0jiFsuWdKBOvwunZS1K9a9J_bUJEn36JkDSclWg,5082
|
|
14
14
|
axe_cli/agents/default/agent.yaml,sha256=TtuTy3l5GyHvsDpnikXvpo6VuxhCmPg-_D_5lQhoQDE,980
|
|
15
15
|
axe_cli/agents/default/sub.yaml,sha256=bt4sqKHfhqr4ktUBBUidLGTYDlnHq7ZYjufA4FKC0dE,685
|
|
16
16
|
axe_cli/agents/default/system.md,sha256=lO8T0XFEDdABn4n9W-gmsgirfDuP0O3blsnuNqL8ySw,13519
|
|
@@ -152,7 +152,7 @@ axe_cli/wire/protocol.py,sha256=hzlvXrvex6kL1eqltGDedvF1CrGY_8dINMMVluF_J1c,77
|
|
|
152
152
|
axe_cli/wire/serde.py,sha256=v7MsE35R6Uy7ypynRaPG3iOdj4gkxzNprgaVmVVymBQ,742
|
|
153
153
|
axe_cli/wire/server.py,sha256=oNjJUdALTL91ygEYsP4c4lWJ57T3Z7RIbd78nWH7O94,21218
|
|
154
154
|
axe_cli/wire/types.py,sha256=O_uvsRoc5Xa7ODVcTYB9Po47cuLUErbEOhbA2qpUxOI,10597
|
|
155
|
-
axe_cli-1.8.
|
|
156
|
-
axe_cli-1.8.
|
|
157
|
-
axe_cli-1.8.
|
|
158
|
-
axe_cli-1.8.
|
|
155
|
+
axe_cli-1.8.3.dist-info/WHEEL,sha256=5DEXXimM34_d4Gx1AuF9ysMr1_maoEtGKjaILM3s4w4,80
|
|
156
|
+
axe_cli-1.8.3.dist-info/entry_points.txt,sha256=IOP2TaPtunLm5FigWdAF1KAzg7n6zN4L_JRNWbSm8Wg,41
|
|
157
|
+
axe_cli-1.8.3.dist-info/METADATA,sha256=Ma3oxusFOsvrQNngIzP8x5dyAVgEi6kzQ-N0IjdUSfo,27989
|
|
158
|
+
axe_cli-1.8.3.dist-info/RECORD,,
|
axe_cli-1.8.2.dist-info/METADATA
DELETED
|
@@ -1,492 +0,0 @@
|
|
|
1
|
-
Metadata-Version: 2.3
|
|
2
|
-
Name: axe-cli
|
|
3
|
-
Version: 1.8.2
|
|
4
|
-
Summary: axe, yerrrr
|
|
5
|
-
Requires-Dist: agent-client-protocol==0.7.0
|
|
6
|
-
Requires-Dist: axe-dig
|
|
7
|
-
Requires-Dist: aiofiles>=24.0,<26.0
|
|
8
|
-
Requires-Dist: aiohttp==3.13.3
|
|
9
|
-
Requires-Dist: typer==0.21.1
|
|
10
|
-
Requires-Dist: kosong[contrib]==0.41.0
|
|
11
|
-
Requires-Dist: loguru>=0.6.0,<0.8
|
|
12
|
-
Requires-Dist: prompt-toolkit==3.0.52
|
|
13
|
-
Requires-Dist: pillow==12.1.0
|
|
14
|
-
Requires-Dist: pyyaml==6.0.3
|
|
15
|
-
Requires-Dist: rich==14.2.0
|
|
16
|
-
Requires-Dist: ripgrepy==2.2.0
|
|
17
|
-
Requires-Dist: streamingjson==0.0.5
|
|
18
|
-
Requires-Dist: trafilatura==2.0.0
|
|
19
|
-
Requires-Dist: lxml==6.0.2
|
|
20
|
-
Requires-Dist: tenacity==9.1.2
|
|
21
|
-
Requires-Dist: fastmcp==2.12.5
|
|
22
|
-
Requires-Dist: pydantic==2.12.5
|
|
23
|
-
Requires-Dist: httpx[socks]==0.28.1
|
|
24
|
-
Requires-Dist: pykaos==0.6.0
|
|
25
|
-
Requires-Dist: batrachian-toad==0.5.23 ; python_full_version >= '3.14'
|
|
26
|
-
Requires-Dist: tomlkit==0.14.0
|
|
27
|
-
Requires-Dist: jinja2==3.1.6
|
|
28
|
-
Requires-Dist: pyobjc-framework-cocoa>=12.1 ; sys_platform == 'darwin'
|
|
29
|
-
Requires-Dist: keyring>=25.7.0
|
|
30
|
-
Requires-Dist: tiktoken>=0.8.0
|
|
31
|
-
Requires-Python: >=3.13
|
|
32
|
-
Description-Content-Type: text/markdown
|
|
33
|
-
|
|
34
|
-
# axe
|
|
35
|
-
|
|
36
|
-
**The agent built for real codebases.**
|
|
37
|
-
|
|
38
|
-
While other coding tools like claude code burn tokens on bloat to charge you more, axe gives you surgical precision. Built for large-scale projects, battle-tested internally for 6 months, and powered by the world's most advanced code retrieval engine.
|
|
39
|
-
|
|
40
|
-
---
|
|
41
|
-
|
|
42
|
-
## What is axe?
|
|
43
|
-
|
|
44
|
-
axe is an agent that runs in your terminal, helping you ship production code faster. It reads and edits code, executes shell commands, searches the web, and autonomously plans multi-step workflows—all while using **95% fewer tokens** than tools that dump your entire codebase into context.
|
|
45
|
-
|
|
46
|
-
**Built for:**
|
|
47
|
-
- **Real engineers** working on production systems with 100K+ line codebases
|
|
48
|
-
- **Precision refactoring** where you need to understand call graphs before changing a function
|
|
49
|
-
- **Debugging** that requires tracing data flow, not just reading error messages
|
|
50
|
-
- **Architecture exploration** in unfamiliar codebases where grep won't cut it
|
|
51
|
-
|
|
52
|
-
Hit **Ctrl+X** to toggle between axe and your normal shell. No context switching. No juggling terminals.
|
|
53
|
-
/Users/rohit/proejct/axe-cli/assets/axe_gif_sample_Demo_zai.gif
|
|
54
|
-
|
|
55
|
-
/Users/rohit/proejct/axe-cli/assets/axe_gif_axe_sample_toggle_shell.gif
|
|
56
|
-
---
|
|
57
|
-
|
|
58
|
-
## Why axe exists
|
|
59
|
-
|
|
60
|
-
**The problem:** Other tools dump your entire codebase into context, charging you for irrelevant noise. They're built for vibe coding—one-shot weekend projects where "good enough" is the goal.
|
|
61
|
-
|
|
62
|
-
**The reality:** Real engineering happens in 100K+ line codebases where precision matters. You need to understand execution flow, trace bugs through call graphs, and refactor without breaking half your tests. You can't afford to burn 200K tokens reading files that don't matter.
|
|
63
|
-
|
|
64
|
-
**The solution:** axe combines intelligent agents with **axe-dig**, our 5-layer code intelligence engine that extracts meaning instead of dumping text.
|
|
65
|
-
|
|
66
|
-
### The axe-dig Advantage
|
|
67
|
-
|
|
68
|
-
**Stop burning context windows. Start shipping features.**
|
|
69
|
-
|
|
70
|
-
/Users/rohit/proejct/axe-cli/assets/axe_gif_init_axe.gif
|
|
71
|
-
|
|
72
|
-
Here you can see how the call flow tracker works. Both agents found the context, but the axe-dig implementation on the right not only used roughly three times fewer tokens overall (17k vs. 44,000), it also found the call graphs for the decorator used on tracer functions and the thread-safe depth tracking, and explained them much more intuitively thanks to its overall context about how functions using this decorator work. The agent on the left had to read the whole file after merely grepping for the call-flow-tracer literal strings.
|
|
73
|
-
|
|
74
|
-
/Users/rohit/proejct/axe-cli/assets/axe_gif_explain_dig.gif
|
|
75
|
-
|
|
76
|
-
Your codebase is 100,000 lines. Claude can read ~200,000 tokens. Math says you're already in trouble.
|
|
77
|
-
|
|
78
|
-
| Approach | Tokens Used | What You Get |
|
|
79
|
-
|----------|-------------|--------------|
|
|
80
|
-
| Read raw files | 23,314 | Full code, zero context window left |
|
|
81
|
-
| Grep results | ~5,000 | File paths. No understanding. |
|
|
82
|
-
| **axe-dig** | **1,189** | Structure + call graph + complexity—everything needed to edit correctly |
|
|
83
|
-
|
|
84
|
-
**95% token savings** while preserving the information LLMs actually need to write correct code.
|
|
85
|
-
|
|
86
|
-
#### How axe-dig Works: we dig 5 levels deep.
|
|
87
|
-
|
|
88
|
-
Not every question needs full program analysis. Pick the layer that matches your task:
|
|
89
|
-
|
|
90
|
-
```
|
|
91
|
-
┌─────────────────────────────────────────────────────────────┐
|
|
92
|
-
│ Layer 5: Program Dependence → "What affects line 42?" │
|
|
93
|
-
│ Layer 4: Data Flow → "Where does this value go?" │
|
|
94
|
-
│ Layer 3: Control Flow → "How complex is this?" │
|
|
95
|
-
│ Layer 2: Call Graph → "Who calls this function?" │
|
|
96
|
-
│ Layer 1: AST → "What functions exist?" │
|
|
97
|
-
└─────────────────────────────────────────────────────────────┘
|
|
98
|
-
```
|
|
99
|
-
|
|
100
|
-
|
|
101
|
-
**Try it yourself on this codebase:**
|
|
102
|
-
|
|
103
|
-
```bash
|
|
104
|
-
# Run this first if you haven't already run the "axe" command; axe automatically creates the .dig folder with all the indexes and edges.
|
|
105
|
-
chop semantic index .
|
|
106
|
-
|
|
107
|
-
# 1. Find code that resets counters (semantic search)
|
|
108
|
-
chop semantic search "reset cumulative statistics and start fresh counter"
|
|
109
|
-
|
|
110
|
-
# Result: Finds reset_step_count() at position #2 (score: 0.632)
|
|
111
|
-
# Why this query? We're looking for state reset logic
|
|
112
|
-
# What it found: TokenCounter.reset_step_count() - even though the code
|
|
113
|
-
# doesn't mention "cumulative" or "fresh", the embedding understands
|
|
114
|
-
# it resets a TokenCount object in a statistics tracking class
|
|
115
|
-
|
|
116
|
-
# 2. Get token-efficient context
|
|
117
|
-
chop context reset_step_count --project src/axe_cli/
|
|
118
|
-
|
|
119
|
-
# Result: ~89 tokens (vs ~4,200 for reading the raw file)
|
|
120
|
-
# Shows: Function signature, what it calls, complexity metrics
|
|
121
|
-
# 98% token savings while preserving understanding
|
|
122
|
-
|
|
123
|
-
# 3. Check who uses it before refactoring
|
|
124
|
-
chop impact TokenCounter src/axe_cli/
|
|
125
|
-
|
|
126
|
-
# Result: Only called by get_global_counter() in same file
|
|
127
|
-
# Meaning: Safe to refactor - no external dependencies to break
|
|
128
|
-
```
|
|
129
|
-
|
|
130
|
-
**What this demonstrates:**
|
|
131
|
-
- Semantic search finds code by behavior, not keywords
|
|
132
|
-
- Context extraction gives you understanding at 2% of the token cost
|
|
133
|
-
- Impact analysis shows dependencies instantly (no grep, no manual tracing)
|
|
134
|
-
|
|
135
|
-
#### Semantic Search: Find Code by Behavior
|
|
136
|
-
|
|
137
|
-
Here you can see how we took a sample CLI agent with basic edit, grep, write, and shell tools on the left, and the same agent with axe-dig tools on the right; you can see how much better it performs. For demo purposes we are showing a basic, minimal implementation of CLI agents, and how even a simple agent with axe-dig tools boosts performance and reduces token usage through lethally precise search in large codebases.
|
|
138
|
-
|
|
139
|
-
/Users/rohit/proejct/axe-cli/assets/axe_gif_comparison.gif
|
|
140
|
-
|
|
141
|
-
Then you will see the exponential failure mode of LLMs: if they start wrong, they infer wrong and continue wrong. As you can see, I continued the conversation about caller information, and since the axe implementation on the right had more context and better understanding, it was even more precise and lethal with follow-up questions. So axe doesn't just optimize for token savings; it optimizes for what the code actually does and how it flows:
|
|
142
|
-
|
|
143
|
-
/Users/rohit/proejct/axe-cli/assets/axe_gif_Entire screen 2-1-2026 21-27-28.gif
|
|
144
|
-
|
|
145
|
-
|
|
146
|
-
|
|
147
|
-
Traditional search finds syntax. axe-dig semantic search finds **what code does** based on call graphs and structure.
|
|
148
|
-
|
|
149
|
-
```bash
|
|
150
|
-
# Try this on the axe-cli codebase itself:
|
|
151
|
-
chop semantic search "retry failed operations with exponential backoff"
|
|
152
|
-
|
|
153
|
-
# Result: Finds _is_retryable_error() at position #1 (score: 0.713)
|
|
154
|
-
# Why? The query doesn't mention "error" or specific function names
|
|
155
|
-
# But the embedding understands retry logic patterns:
|
|
156
|
-
# - Function checks exception types (retryable vs non-retryable)
|
|
157
|
-
# - Called by retry loops with backoff logic
|
|
158
|
-
# - Part of error handling flow in axesoul.py
|
|
159
|
-
```
|
|
160
|
-
|
|
161
|
-
**What it found:**
|
|
162
|
-
```json
|
|
163
|
-
[
|
|
164
|
-
{
|
|
165
|
-
"name": "_is_retryable_error",
|
|
166
|
-
"file": "src/axe_cli/soul/axesoul.py",
|
|
167
|
-
"score": 0.713
|
|
168
|
-
},
|
|
169
|
-
{
|
|
170
|
-
"name": "_retry_log",
|
|
171
|
-
"file": "src/axe_cli/soul/axesoul.py",
|
|
172
|
-
"score": 0.710
|
|
173
|
-
}
|
|
174
|
-
]
|
|
175
|
-
```
|
|
176
|
-
|
|
177
|
-
**Another example: Find config loading**
|
|
178
|
-
```bash
|
|
179
|
-
chop semantic search "load configuration from toml file"
|
|
180
|
-
|
|
181
|
-
# Result: load_config_from_string() at #1 (score: 0.759)
|
|
182
|
-
# Finds TOML parsing, config migration, and related tests
|
|
183
|
-
```
|
|
184
|
-
|
|
185
|
-
Every function gets embedded with:
|
|
186
|
-
- Signature + docstring
|
|
187
|
-
- What it calls + who calls it (forward & backward call graph)
|
|
188
|
-
- Complexity metrics (branches, loops, cyclomatic complexity)
|
|
189
|
-
- Data flow (which variables are used, how they transform)
|
|
190
|
-
- Dependencies (imports, external modules)
|
|
191
|
-
- First ~10 lines of code
|
|
192
|
-
|
|
193
|
-
This gets encoded into 1024-dimensional embeddings, so semantic search finds relevant code even when you use different terminology.
|
|
194
|
-
|
|
195
|
-
#### Daemon Architecture: 300x Faster
|
|
196
|
-
|
|
197
|
-
**The old way:** Every query spawns a new process, parses the entire codebase, throws away the results. ~30 seconds per query.
|
|
198
|
-
|
|
199
|
-
**axe-dig's daemon:** Long-running background process with indexes in RAM. ~100ms per query.
|
|
200
|
-
|
|
201
|
-
```bash
|
|
202
|
-
# First query auto-starts daemon (transparent)
|
|
203
|
-
axe # In your project directory
|
|
204
|
-
|
|
205
|
-
# Daemon stays running, queries use in-memory indexes
|
|
206
|
-
# 100ms, not 30s per query
|
|
207
|
-
```
|
|
208
|
-
|
|
209
|
-
/Users/rohit/proejct/axe-cli/assets/axe_gif_sample_Demo_zai.gif
|
|
210
|
-
|
|
211
|
-
|
|
212
|
-
**Incremental updates:** When you edit one function, axe-dig doesn't re-analyze the entire codebase. Content-hash-based caching with automatic dependency tracking means 10x faster incremental updates.
|
|
213
|
-
|
|
214
|
-
**What's stored:** The daemon keeps call graphs, complexity metrics, and semantic embeddings in `.dig/cache/`. A typical project generates ~10MB of indexes that load into RAM in <1 second. See [full cache structure](docs/AXE-DIG.md#cache-structure) for details.
|
|
215
|
-
|
|
216
|
-
**[Read the full axe-dig documentation →](docs/AXE-DIG.md)**
|
|
217
|
-
|
|
218
|
-
---
|
|
219
|
-
|
|
220
|
-
## Documentation Index
|
|
221
|
-
|
|
222
|
-
We've organized the docs to make them digestible. Here's what's where:
|
|
223
|
-
|
|
224
|
-
### [Common Use Cases & Workflows](examples/README.md)
|
|
225
|
-
Learn how to use axe for implementing features, fixing bugs, understanding unfamiliar code, and automating tasks. Includes real workflow examples for debugging, refactoring, and exploration. See how axe handles everything from adding pagination to investigating race conditions.
|
|
226
|
-
|
|
227
|
-
### [Built-in Tools](src/axe_cli/tools/README.md)
|
|
228
|
-
Complete reference for all available tools: file operations, shell commands, multi-agent tasks, and the axe-dig code intelligence tools. CodeSearch finds code by behavior, CodeContext extracts LLM-ready summaries with 95% token savings, CodeStructure navigates files/directories, and CodeImpact shows reverse call graphs before you refactor. Every tool is designed for precision, not guesswork.
|
|
229
|
-
|
|
230
|
-
### [Agent Skills](src/axe_cli/skills/README.md)
|
|
231
|
-
How to create and use specialized skills to extend axe's capabilities. Skills are reusable workflows and domain expertise that you can invoke with `/skill:name` commands. Includes flow skills for multi-step automated workflows and examples for code style, git commits, and project standards. Turn your team's best practices into executable knowledge.
|
|
232
|
-
|
|
233
|
-
### [Agents & Subagents](src/axe_cli/agents/README.md)
|
|
234
|
-
Guide to creating custom agents and spawning specialized subagents for parallel work. Need a dedicated security researcher? A ruthlessly precise code reviewer? A creative copywriter? axe can create and deploy specialized subagents based on your exact requirements. These subagents operate with lethal precision to divide and conquer complex workflows.
|
|
235
|
-
|
|
236
|
-
### [Technical Reference](src/axe_cli/README.md)
|
|
237
|
-
Deep dive into configuration (providers, models, loop control), session management, architecture, and MCP integration. Everything you need to customize axe for your workflow. Configure Bodega models, set up OpenRouter/Anthropic/OpenAI providers, manage sessions, and integrate with other tools via Model Context Protocol.
|
|
238
|
-
|
|
239
|
-
### [axe-dig: Code Intelligence Engine](docs/AXE-DIG.md)
|
|
240
|
-
**The secret weapon.** Complete documentation on axe-dig's 5-layer architecture, semantic search, daemon mode, and program slicing. Learn how to extract 95% fewer tokens while preserving everything needed for correct edits. Includes performance benchmarks (155x faster queries, 89% token reduction), real-world debugging workflows, and the design rationale behind every choice. This is what makes axe different from every other coding tool.
|
|
241
|
-
|
|
242
|
-
---
|
|
243
|
-
|
|
244
|
-
## Quick start
|
|
245
|
-
|
|
246
|
-
### Install
|
|
247
|
-
```bash
|
|
248
|
-
# Install axe-cli (includes axe-dig)
|
|
249
|
-
uv pip install axe-cli
|
|
250
|
-
|
|
251
|
-
# Or from source
|
|
252
|
-
git clone https://github.com/SRSWTI/axe-cli
|
|
253
|
-
cd axe-cli
|
|
254
|
-
make prepare
|
|
255
|
-
make build
|
|
256
|
-
|
|
257
|
-
or uv run axe
|
|
258
|
-
```
|
|
259
|
-
|
|
260
|
-
### Run
|
|
261
|
-
```bash
|
|
262
|
-
cd /path/to/your/project
|
|
263
|
-
axe
|
|
264
|
-
```
|
|
265
|
-
|
|
266
|
-
On first run, axe-dig automatically indexes your codebase (30-60 seconds for typical projects). After that, queries are instant.
|
|
267
|
-
|
|
268
|
-
### Start using
|
|
269
|
-
```bash
|
|
270
|
-
# Find code by behavior
|
|
271
|
-
/skill:code-search "database connection pooling"
|
|
272
|
-
|
|
273
|
-
# Understand a function without reading the whole file
|
|
274
|
-
/skill:code-context get_user_by_id
|
|
275
|
-
|
|
276
|
-
# See who calls a function before refactoring
|
|
277
|
-
/skill:code-impact authenticate_request
|
|
278
|
-
|
|
279
|
-
# Make surgical edits
|
|
280
|
-
StrReplaceFile src/auth.py "old code" "new code"
|
|
281
|
-
|
|
282
|
-
# Toggle to shell mode
|
|
283
|
-
[Ctrl+X]
|
|
284
|
-
pytest tests/
|
|
285
|
-
[Ctrl+X]
|
|
286
|
-
```
|
|
287
|
-
|
|
288
|
-
---
|
|
289
|
-
|
|
290
|
-
## Core capabilities
|
|
291
|
-
|
|
292
|
-
### Code intelligence (powered by axe-dig)
|
|
293
|
-
|
|
294
|
-
| Tool | What it does | Use case |
|
|
295
|
-
|------|-------------|----------|
|
|
296
|
-
| **CodeSearch** | Semantic search by behavior | "Find payment processing logic" |
|
|
297
|
-
| **CodeContext** | LLM-ready function summaries (95% token savings) | Understand unfamiliar code |
|
|
298
|
-
| **CodeStructure** | Navigate functions/classes in files/dirs | Explore new codebases |
|
|
299
|
-
| **CodeImpact** | Reverse call graph (who calls this?) | Safe refactoring |
|
|
300
|
-
|
|
301
|
-
### File operations
|
|
302
|
-
- `ReadFile` / `WriteFile` / `StrReplaceFile` - Standard file I/O
|
|
303
|
-
- `Grep` - Exact file locations + line numbers (use after CodeSearch)
|
|
304
|
-
- `Glob` - Pattern matching
|
|
305
|
-
- `ReadMediaFile` - Images, PDFs, videos
|
|
306
|
-
|
|
307
|
-
### Multi-agent workflows
|
|
308
|
-
- `Task` - Spawn subagents for parallel work
|
|
309
|
-
- `CreateSubagent` - Custom agent specs
|
|
310
|
-
- `SetTodoList` - Track multi-step tasks
|
|
311
|
-
|
### Shell integration
- `Shell` - Execute commands
- **Ctrl+X** - Toggle between axe and normal shell mode

---

## Powered by SRSWTI Inc.

**Building the world's fastest retrieval and inference engines.**

### Bodega Inference Engine
Exclusive models trained and optimized for the Bodega Inference Engine. axe includes **zero-day support** for all Bodega models (of course), ensuring immediate access to our latest breakthroughs.

**Note:** Our models are also available on [🤗 Hugging Face](https://huggingface.co/srswti).

#### Raptor Series
Ultra-compact reasoning models designed for efficiency and edge deployment. **Super light**, with strong agentic coding capabilities, robust tool support, and a minimal memory footprint.

- [🤗 **bodega-raptor-0.9b**](https://huggingface.co/srswti/bodega-raptor-0.9b) - 900M params. Runs on a base M4 MacBook Air at 100+ tok/s.
- [🤗 **bodega-raptor-90m**](https://huggingface.co/srswti/bodega-raptor-90m) - Extreme edge variant. Sub-100M params tuned for tool calling.
- [🤗 **bodega-raptor-1b-reasoning-opus4.5-distill**](https://huggingface.co/srswti/bodega-raptor-1b-reasoning-opus4.5-distill) - Distilled from Claude Opus 4.5 reasoning patterns.
- [🤗 **bodega-raptor-8b-mxfp4**](https://huggingface.co/srswti/bodega-raptor-8b-mxfp4) - Balanced power/performance for laptops.
- [🤗 **bodega-raptor-15b-6bit**](https://huggingface.co/srswti/bodega-raptor-15b-6bit) - Enhanced Raptor variant.
#### Flagship Models
Frontier intelligence, distilled and optimized.

- [🤗 **deepseek-v3.2-speciale-distilled-raptor-32b-4bit**](https://huggingface.co/srswti/deepseek-v3.2-speciale-distilled-raptor-32b-4bit) - DeepSeek V3.2 distilled to 32B with Raptor reasoning. Exceptional math/code generation in a 5-7GB footprint. 120 tok/s on M1 Max.
- [🤗 **bodega-centenario-21b-mxfp4**](https://huggingface.co/srswti/bodega-centenario-21b-mxfp4) - Production workhorse. 21B params optimized for sustained inference workloads.
- [🤗 **bodega-solomon-9b**](https://huggingface.co/srswti/bodega-solomon-9b) - Multimodal and best for agentic coding.

#### Axe-Turbo Series
**Launched specifically for the Axe coding use case.** High-performance agentic coding models optimized for the Axe ecosystem.

- [🤗 **axe-turbo-1b**](https://huggingface.co/srswti/axe-turbo-1b) - 1B params, 150 tok/s, sub-50ms first token. Edge-first architecture.
- [🤗 **axe-turbo-31b**](https://huggingface.co/srswti/axe-turbo-31b) - Built for high-capacity workloads. Exceptional agentic capabilities.

#### Specialized Models
Task-specific optimization.

- [🤗 **bodega-vertex-4b**](https://huggingface.co/srswti/bodega-vertex-4b) - 4B params. Optimized for structured data.
- [🤗 **blackbird-she-doesnt-refuse-21b**](https://huggingface.co/srswti/blackbird-she-doesnt-refuse-21b) - Uncensored 21B variant for unrestricted generation.
### Using Bodega Models

Configure Bodega in `~/.axe/config.toml`:

```toml
default_model = "bodega-raptor"

[providers.bodega]
type = "bodega"
base_url = "http://localhost:44468" # Local Bodega server
api_key = ""

[models.bodega-raptor]
provider = "bodega"
model = "srswti/bodega-raptor-8b-mxfp4"
max_context_size = 32768
capabilities = ["thinking"]

[models.bodega-turbo]
provider = "bodega"
model = "srswti/axe-turbo-31b"
max_context_size = 32768
capabilities = ["thinking"]
```

See [sample_config.toml](sample_config.toml) for more examples, including OpenRouter, Anthropic, and OpenAI configurations.
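As a quick sanity check of the shape above, here is a small Python sketch (assuming Python 3.11+ for the standard-library `tomllib`) that loads the config and resolves which provider and model the `default_model` alias points to. It only illustrates the file's structure; it is not how axe itself parses configuration.

```python
import tomllib
from pathlib import Path

# Assumption: the config lives at ~/.axe/config.toml, as documented above.
config = tomllib.loads(Path("~/.axe/config.toml").expanduser().read_text())

alias = config["default_model"]                  # e.g. "bodega-raptor"
model_cfg = config["models"][alias]              # the [models.bodega-raptor] table
provider_cfg = config["providers"][model_cfg["provider"]]

print(f"{alias} -> {model_cfg['model']} via {provider_cfg['base_url']}")
```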
---

## What's coming

Our internal team has been using features that will change the game:
### 1. Execution Tracing
See what actually happened at runtime. No more guessing why a test failed.

```bash
# Trace a failing test
/trace pytest tests/test_payment.py::test_refund

# Shows exact values that flowed through each function:
# process_refund(amount=Decimal("50.00"), transaction_id="tx_123")
# → validate_refund(transaction=Transaction(status="completed"))
# → check_refund_window(created_at=datetime(2024, 1, 15))
# → datetime.now() - created_at = timedelta(days=45)
# → raised RefundWindowExpired  # ← 30-day window exceeded
```
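The kind of data a runtime trace needs can be collected with Python's standard tracing hooks. Below is a heavily simplified sketch using `sys.setprofile` to log each call with its argument values; it is not the upcoming `/trace` feature, just an illustration of the underlying mechanism, with hypothetical functions standing in for real application code.

```python
import sys

def log_calls(frame, event, arg):
    """Profile hook: print each Python function call with its argument values."""
    if event == "call":
        code = frame.f_code
        args = {name: frame.f_locals.get(name)
                for name in code.co_varnames[:code.co_argcount]}
        print(f"→ {code.co_name}({args})")
    return log_calls

# Hypothetical application code, for demonstration only.
def check_refund_window(days_since_purchase: int) -> bool:
    return days_since_purchase <= 30

def process_refund(amount: float, days_since_purchase: int) -> bool:
    return check_refund_window(days_since_purchase)

sys.setprofile(log_calls)
try:
    process_refund(50.0, 45)   # prints the call chain with argument values
finally:
    sys.setprofile(None)
```

Real tracing has to be far more selective about which frames it records, but the call-and-argument data shown in the `/trace` example above comes from exactly this kind of hook.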
### 2. Performance Debugging
Flame graphs and memory profiling integrated directly in the chat interface.

```bash
# Generate flame graph
/flamegraph api_server.py

# Find memory leaks
/memory-profile background_worker.py
```
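For reference, the raw ingredients for both views already exist in the Python standard library; the sketch below collects per-function timing with `cProfile` and the top memory allocation sites with `tracemalloc`. It shows the kind of data involved, not the `/flamegraph` or `/memory-profile` commands themselves, and `workload` is a stand-in we made up.

```python
import cProfile
import pstats
import tracemalloc

def workload() -> list[int]:
    # Stand-in for real application code.
    return [i * i for i in range(200_000)]

# CPU profile: per-function call counts and cumulative time.
profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)

# Memory profile: where allocations happened, grouped by source line.
tracemalloc.start()
data = workload()
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)
tracemalloc.stop()
```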
### 3. Visual Debugging
Interactive visualizations for understanding complex codebases:

- **Call graphs**: See the entire call chain from entry point to implementation
- **Dependency graphs**: Understand module relationships and coupling
- **AST visualizations**: Navigate code structure visually
- **Data flow diagrams**: Trace how values transform through your system

All generated on demand and viewable in your browser. No more drawing diagrams on whiteboards: axe-dig generates them from your actual code.
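A dependency graph of the kind listed above can be sketched from static imports alone. The snippet below walks a package with Python's `ast` module and emits Graphviz DOT text you can render in a browser; it is a toy illustration, not axe-dig's visualizer, and the `src` directory is an assumed example path.

```python
import ast
from pathlib import Path

def dependency_dot(package_dir: str) -> str:
    """Emit a Graphviz DOT graph of module -> imported-module edges (toy sketch)."""
    edges: set[tuple[str, str]] = set()
    for path in Path(package_dir).rglob("*.py"):
        module = path.stem
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    edges.add((module, alias.name.split(".")[0]))
            elif isinstance(node, ast.ImportFrom) and node.module:
                edges.add((module, node.module.split(".")[0]))
    lines = [f'  "{src}" -> {"\"" + dst + "\""};' for src, dst in sorted(edges)]
    return "digraph deps {\n" + "\n".join(lines) + "\n}"

if __name__ == "__main__":
    print(dependency_dot("src"))  # pipe into `dot -Tsvg` to view in a browser
```

Rendering is then a matter of piping the DOT text through Graphviz (`dot -Tsvg`) or any browser-based DOT viewer.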
### 4. Smart Test Selection
```bash
# Only run tests affected by your changes
/test-impact src/payment/processor.py

# Shows: 12 tests need to run (not all 1,847)
```
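Conceptually, test impact analysis is a reverse lookup: which test files import the module you changed? Here is a deliberately naive sketch of that lookup using direct imports only; a real implementation would follow the full dependency graph, and the paths below are hypothetical.

```python
import ast
from pathlib import Path

def imported_names(path: Path) -> set[str]:
    """Dotted module names imported by a Python file."""
    names: set[str] = set()
    for node in ast.walk(ast.parse(path.read_text(encoding="utf-8"))):
        if isinstance(node, ast.Import):
            names.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module)
    return names

def affected_tests(changed_file: str, tests_dir: str = "tests") -> list[Path]:
    """Naive selection: tests whose imports mention the changed module by name."""
    changed_module = Path(changed_file).stem  # e.g. "processor"
    return [
        test for test in sorted(Path(tests_dir).rglob("test_*.py"))
        if any(changed_module in name.split(".") for name in imported_names(test))
    ]

if __name__ == "__main__":
    for test in affected_tests("src/payment/processor.py"):
        print(test)
```

Transitive imports, fixtures, and dynamic dispatch are why a real implementation needs the full call graph rather than this one-hop lookup.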
---

## Why we built this

We're building the world's best retrieval and inference engine. We started with coding because it's the hardest problem: understanding large codebases, tracing execution, debugging logic errors, optimizing performance.

If we can nail code understanding, we can nail anything.

**This is not for vibe coders.** This is not for weekend hackathons where "it works on my machine" is good enough. This is for engineers shipping production code to real users, where bugs cost money and downtime costs more.

Other tools optimize for demo videos and per-token billing. We optimize for engineers who need to:
- Refactor 10,000 lines without breaking tests
- Debug race conditions in distributed systems
- Understand legacy codebases with zero documentation
- Ship features on deadline without cutting corners

**The bottom line:** If you're building real software in large codebases, you need precision tools. Not vibe coding toys.

Welcome to axe.
---

## Supported languages

Python, TypeScript, JavaScript, Go, Rust, Java, C, C++, Ruby, PHP, C#, Kotlin, Scala, Swift, Lua, Elixir

Language auto-detected. Specify with `--lang` if needed.
---

## Comparison

| Feature | Claude Code | OpenAI Codex | axe |
|---------|-------------|--------------|-----|
| **Built for** | Weekend projects | Demos | Production codebases |
| **Context strategy** | Dump everything | Dump everything | Extract signal (95% savings) |
| **Code search** | Text/regex | Text/regex | Semantic (behavior-based) |
| **Call graph analysis** | ❌ | ❌ | ✅ 5-layer analysis |
| **Token optimization** | ❌ (incentivized to waste) | ❌ (incentivized to waste) | ✅ Shows savings per query |
| **Execution tracing** | ❌ | ❌ | ✅ Coming soon |
| **Flame graphs** | ❌ | ❌ | ✅ Coming soon |
| **Memory profiling** | ❌ | ❌ | ✅ Coming soon |
| **Visual debugging** | ❌ | ❌ | ✅ Coming soon |
| **Shell integration** | ❌ | ❌ | ✅ Ctrl+X toggle |
| **Session management** | Limited | Limited | ✅ Full history + replay |
| **Skills system** | ❌ | ❌ | ✅ Modular, extensible |
| **Subagents** | ❌ | ❌ | ✅ Parallel task execution |
| **Battle-tested** | Public beta | Public API | 6 months internal use |
---

## Community

- **Issues**: [GitHub Issues](https://github.com/SRSWTI/axe-cli/issues)
- **Discussions**: [GitHub Discussions](https://github.com/SRSWTI/axe-cli/discussions)
- **Docs**: [Full documentation](https://axe-cli.dev/docs)

## Acknowledgements

Special thanks to [MoonshotAI/kimi-cli](https://github.com/MoonshotAI/kimi-cli) for their amazing work, which inspired our tools and the Kosong provider.

File without changes