@voria/cli 0.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (67)
  1. package/README.md +439 -0
  2. package/bin/voria +730 -0
  3. package/docs/ARCHITECTURE.md +419 -0
  4. package/docs/CHANGELOG.md +189 -0
  5. package/docs/CONTRIBUTING.md +447 -0
  6. package/docs/DESIGN_DECISIONS.md +380 -0
  7. package/docs/DEVELOPMENT.md +535 -0
  8. package/docs/EXAMPLES.md +434 -0
  9. package/docs/INSTALL.md +335 -0
  10. package/docs/IPC_PROTOCOL.md +310 -0
  11. package/docs/LLM_INTEGRATION.md +416 -0
  12. package/docs/MODULES.md +470 -0
  13. package/docs/PERFORMANCE.md +346 -0
  14. package/docs/PLUGINS.md +432 -0
  15. package/docs/QUICKSTART.md +184 -0
  16. package/docs/README.md +133 -0
  17. package/docs/ROADMAP.md +346 -0
  18. package/docs/SECURITY.md +334 -0
  19. package/docs/TROUBLESHOOTING.md +565 -0
  20. package/docs/USER_GUIDE.md +700 -0
  21. package/package.json +63 -0
  22. package/python/voria/__init__.py +8 -0
  23. package/python/voria/__pycache__/__init__.cpython-312.pyc +0 -0
  24. package/python/voria/__pycache__/engine.cpython-312.pyc +0 -0
  25. package/python/voria/core/__init__.py +1 -0
  26. package/python/voria/core/__pycache__/__init__.cpython-312.pyc +0 -0
  27. package/python/voria/core/__pycache__/setup.cpython-312.pyc +0 -0
  28. package/python/voria/core/agent/__init__.py +9 -0
  29. package/python/voria/core/agent/__pycache__/__init__.cpython-312.pyc +0 -0
  30. package/python/voria/core/agent/__pycache__/loop.cpython-312.pyc +0 -0
  31. package/python/voria/core/agent/loop.py +343 -0
  32. package/python/voria/core/executor/__init__.py +19 -0
  33. package/python/voria/core/executor/__pycache__/__init__.cpython-312.pyc +0 -0
  34. package/python/voria/core/executor/__pycache__/executor.cpython-312.pyc +0 -0
  35. package/python/voria/core/executor/executor.py +431 -0
  36. package/python/voria/core/github/__init__.py +33 -0
  37. package/python/voria/core/github/__pycache__/__init__.cpython-312.pyc +0 -0
  38. package/python/voria/core/github/__pycache__/client.cpython-312.pyc +0 -0
  39. package/python/voria/core/github/client.py +438 -0
  40. package/python/voria/core/llm/__init__.py +55 -0
  41. package/python/voria/core/llm/__pycache__/__init__.cpython-312.pyc +0 -0
  42. package/python/voria/core/llm/__pycache__/base.cpython-312.pyc +0 -0
  43. package/python/voria/core/llm/__pycache__/claude_provider.cpython-312.pyc +0 -0
  44. package/python/voria/core/llm/__pycache__/gemini_provider.cpython-312.pyc +0 -0
  45. package/python/voria/core/llm/__pycache__/modal_provider.cpython-312.pyc +0 -0
  46. package/python/voria/core/llm/__pycache__/model_discovery.cpython-312.pyc +0 -0
  47. package/python/voria/core/llm/__pycache__/openai_provider.cpython-312.pyc +0 -0
  48. package/python/voria/core/llm/base.py +152 -0
  49. package/python/voria/core/llm/claude_provider.py +188 -0
  50. package/python/voria/core/llm/gemini_provider.py +148 -0
  51. package/python/voria/core/llm/modal_provider.py +228 -0
  52. package/python/voria/core/llm/model_discovery.py +289 -0
  53. package/python/voria/core/llm/openai_provider.py +146 -0
  54. package/python/voria/core/patcher/__init__.py +9 -0
  55. package/python/voria/core/patcher/__pycache__/__init__.cpython-312.pyc +0 -0
  56. package/python/voria/core/patcher/__pycache__/patcher.cpython-312.pyc +0 -0
  57. package/python/voria/core/patcher/patcher.py +375 -0
  58. package/python/voria/core/planner/__init__.py +1 -0
  59. package/python/voria/core/setup.py +201 -0
  60. package/python/voria/core/token_manager/__init__.py +29 -0
  61. package/python/voria/core/token_manager/__pycache__/__init__.cpython-312.pyc +0 -0
  62. package/python/voria/core/token_manager/__pycache__/manager.cpython-312.pyc +0 -0
  63. package/python/voria/core/token_manager/manager.py +241 -0
  64. package/python/voria/engine.py +1185 -0
  65. package/python/voria/plugins/__init__.py +1 -0
  66. package/python/voria/plugins/python/__init__.py +1 -0
  67. package/python/voria/plugins/typescript/__init__.py +1 -0
@@ -0,0 +1,346 @@
# Performance Guide

Optimizing voria for speed and efficiency.

## Performance Targets

| Metric | Target | Notes |
|--------|--------|-------|
| CLI startup | < 2s | Before Python engine ready |
| Command response | < 100ms (p95) | After initial request |
| Simple issue fix | < 30s | Via LLM |
| Full automation | < 5min | For simple issues (1-2 iterations) |
| Token rate | < 1000/min | To avoid rate limiting |

## Optimization Strategies

### 1. Disable Unnecessary Features

```bash
# Skip tests (if you already know they pass)
voria issue 42 --skip-tests

# Dry run (don't modify files)
voria issue 42 --dry-run

# Single iteration (don't refine)
voria issue 42 --max-iterations 1
```

### 2. Use Fastest LLM Provider

**Speed Ranking** (fastest to slowest):

1. **Modal** - Sub-second responses (free!)
2. **Gemini** - 1-2 seconds
3. **OpenAI** - 2-3 seconds
4. **Claude** - 3-5 seconds

```bash
# Use the fastest provider
voria issue 42 --llm modal
```

### 3. Cache Models

voria caches model responses. To warm the cache:

```bash
# First run populates the cache (slow)
voria plan 1
# Subsequent runs reuse cached responses (faster)
voria plan 2
```
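
Response caching of this kind can be sketched as a small in-memory memoization layer. This is an illustration only: `fake_llm_call` and `plan_issue` are stand-ins, not voria's actual API.

```python
from functools import lru_cache

def fake_llm_call(prompt: str) -> str:
    # Placeholder: a real call would hit the provider's API.
    return f"plan for: {prompt}"

@lru_cache(maxsize=256)
def plan_issue(prompt: str) -> str:
    """Memoize responses keyed on the exact prompt text."""
    return fake_llm_call(prompt)
```

A repeated identical prompt is then served from memory; `plan_issue.cache_info()` reports hits and misses.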

### 4. Parallel Operations

For batch processing:

```bash
# Process multiple issues in parallel
export voria_WORKERS=4
for issue in {1..20}; do
  voria issue $issue &
done
wait
```
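
The shell loop above backgrounds every job at once; a bounded version (at most N jobs in flight) can be sketched with a thread pool. `run_issue` is a hypothetical stand-in for shelling out to the CLI:

```python
from concurrent.futures import ThreadPoolExecutor

def run_issue(issue: int) -> str:
    # Stand-in: a real version would shell out, e.g.
    # subprocess.run(["voria", "issue", str(issue)], check=True)
    return f"issue {issue} done"

def run_batch(issues, workers=4):
    """Run jobs with at most `workers` concurrent at any time."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order in its results
        return list(pool.map(run_issue, issues))
```

For example, `run_batch(range(1, 21), workers=4)` processes twenty issues four at a time.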

### 5. Limit Context Size

Large codebases slow down analysis:

```bash
# Only analyze relevant files
voria issue 42 --include "src/api.py" --include "tests/test_api.py"

# Exclude large directories
voria issue 42 --exclude "node_modules" --exclude "dist"
```

## Profiling

### Python Profiling

```bash
# Run with timing
time python3 -m voria.engine < test_command.json

# Detailed profiling
python3 -m cProfile -s cumtime -m voria.engine < test_command.json
```

### Rust Profiling

```bash
# Flamegraph (Linux)
cargo install flamegraph
cargo flamegraph -- plan 1
# View the result: flamegraph.svg
```

## Memory Usage

### Monitor Memory

```bash
# Watch memory usage
watch -n 1 'ps aux | grep voria'

# Detailed memory stats (GNU time, Linux)
/usr/bin/time -v ./target/release/voria plan 1
```
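
Peak memory of a child process can also be read programmatically. A minimal standard-library sketch (Unix only; note `ru_maxrss` is kilobytes on Linux but bytes on macOS):

```python
import resource
import subprocess

def peak_child_rss(cmd: list[str]) -> int:
    """Run cmd and return the peak resident set size across child processes."""
    subprocess.run(cmd, check=True)
    # Cumulative maximum over all children waited on so far.
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
```

For example, `peak_child_rss(["./target/release/voria", "plan", "1"])`.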

### Reduce Memory Footprint

```bash
# Smaller batch sizes
voria issue 42 --batch-size 100

# Disable caching
voria issue 42 --no-cache

# Limit context
voria issue 42 --max-files 10
```

## Network Optimization

### Connection Pooling

voria reuses HTTP connections (built-in). The pool is configured roughly like this:

```python
import httpx

# Bounded connection pool with keep-alive reuse
client = httpx.AsyncClient(limits=httpx.Limits(
    max_connections=5,            # total pool size
    max_keepalive_connections=5,  # idle connections kept open
))
```

### Rate Limiting

```bash
# Slow down token usage (avoid rate limits)
voria_TOKEN_RATE=100 voria issue 100    # max 100 tokens/sec

# Add delays between requests
voria_REQUEST_DELAY=2 voria issue 42    # 2 sec between requests
```
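
A per-second cap like `voria_TOKEN_RATE` is typically enforced with a token bucket. A minimal sketch with an injectable clock so it can be tested deterministically (the class and names are illustrative, not voria internals):

```python
import time

class TokenBucket:
    """Allow up to `rate` tokens per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float, clock=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.clock = clock or time.monotonic
        self.last = self.clock()

    def try_consume(self, n: float) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

A caller that gets `False` back sleeps briefly and retries, which smooths usage under the provider's limit.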

### DNS Caching

voria keeps connections alive via httpx, so repeated requests avoid redundant DNS lookups. Verify with:

```bash
strace -e openat ./target/release/voria plan 1 | grep hosts
```

## Benchmarking

### Run Benchmarks

```bash
# Time different providers
for llm in modal openai gemini claude; do
  echo "Testing $llm..."
  time voria plan 1 --llm $llm
done

# Compare patch generation strategies
for strategy in strict fuzzy; do
  echo "Testing $strategy..."
  time voria issue 1 --patch-strategy $strategy
done
```

### Create Benchmark Suite

```bash
#!/bin/bash
# benchmark.sh
ISSUES=(1 2 3 10 42 100)

for issue in "${ISSUES[@]}"; do
  echo "Issue #$issue"
  /usr/bin/time -f "%e seconds, %M KB" \
    ./target/release/voria plan $issue
done
```
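
An equivalent harness can be sketched in Python with `time.perf_counter` around a subprocess call (the binary path is the same assumption as in the shell version above):

```python
import subprocess
import time

def bench(cmd: list[str]) -> float:
    """Return wall-clock seconds for one run of cmd."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

def bench_issues(issues):
    """Time `voria plan` for each issue number; returns {issue: seconds}."""
    return {i: bench(["./target/release/voria", "plan", str(i)]) for i in issues}
```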

## LLM Token Optimization

### Use Smaller Models

```bash
# Smaller = faster + cheaper
voria issue 42 --llm modal                          # GLM-5.1-FP8 (fast)
voria issue 42 --llm openai --model gpt-4.5-mini    # mini (faster)
voria issue 42 --llm claude --model claude-3-haiku  # haiku (fastest)
```

### Prompt Engineering

Keep prompts concise:

```python
# ❌ Slow: large context
# (entire_codebase, issue_description, relevant_files_only are placeholders)
prompt = f"""Analyze this entire codebase:
{entire_codebase}
Fix {issue_description}
"""

# ✅ Fast: focused context
prompt = f"""Analyze these files for {issue_description}:
{relevant_files_only}
"""
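
Narrowing `relevant_files_only` can be as simple as keyword filtering. A sketch of one such heuristic (illustrative only, not what voria actually does):

```python
def select_relevant(files: dict[str, str], keywords: list[str], limit: int = 10) -> dict[str, str]:
    """Keep only files whose contents mention any keyword, up to `limit` files."""
    # Score each file by total keyword occurrences.
    scored = {
        path: sum(text.count(k) for k in keywords)
        for path, text in files.items()
    }
    # Keep the highest-scoring matches, dropping files with no hits.
    keep = sorted((p for p, s in scored.items() if s > 0),
                  key=lambda p: scored[p], reverse=True)[:limit]
    return {p: files[p] for p in keep}
```

Sending only the selected files keeps the prompt small, which directly cuts both latency and token cost.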

### Token Budget Awareness

```bash
# See token usage
voria plan 1 --verbose

# Output shows tokens consumed and cost.
# Adjust the budget if needed:
python3 -m voria.core.setup   # Increase budget
```

## Configuration Tuning

### Development Mode (Faster Builds)

```bash
# Use debug builds (compile quickly, run slower)
cargo build   # vs cargo build --release

# Skip some validations
voria_SKIP_VALIDATION=1 ./target/debug/voria plan 1
```

### Production Mode (Slower Builds, Faster Runtime)

```bash
# Use release builds
cargo build --release

# With CPU-specific optimizations
RUSTFLAGS="-C target-cpu=native" cargo build --release
```

## Bottleneck Analysis

### Common Bottlenecks

1. **LLM API latency** (usually 2-5 seconds)
   - Switch to a faster provider: `--llm modal`
   - Use a smaller model: `--model gpt-4.5-mini`

2. **File I/O** (usually < 1 second)
   - Use a faster disk (SSD vs HDD)
   - Exclude unnecessary files: `--exclude "*.log"`

3. **Test execution** (usually 10-30 seconds)
   - Skip tests: `--skip-tests`
   - Run a subset: `--test-pattern "quick_*"`

4. **Git operations** (usually < 5 seconds)
   - Use a shallow clone: `--shallow`
   - Skip fetch: `--no-fetch`

### Identify Bottleneck

```bash
# Add timing breakpoints
voria_DEBUG_TIMING=1 ./target/release/voria issue 42

# Example output:
# [TIMING] LLM plan: 3.2s
# [TIMING] Patch generation: 2.1s
# [TIMING] File apply: 0.3s
# [TIMING] Test execution: 15.2s
# [TIMING] Total: 20.8s
```
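
Per-phase timing like the output above is usually collected with a small phase timer. A sketch (the names and output format are illustrative):

```python
import time
from contextlib import contextmanager

timings: dict[str, float] = {}

@contextmanager
def phase(name: str):
    """Record the wall-clock duration of a named phase."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

def report():
    for name, secs in timings.items():
        print(f"[TIMING] {name}: {secs:.1f}s")
    print(f"[TIMING] Total: {sum(timings.values()):.1f}s")
```

Wrapping each stage (`with phase("LLM plan"): ...`, `with phase("Test execution"): ...`) and calling `report()` at the end pinpoints where the time goes.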

## Scaling

### Processing Many Issues

```bash
# Serial (slow)
for i in {1..100}; do
  voria issue $i --create-pr
done

# Parallel (faster, with GNU parallel)
seq 1 100 | parallel --max-procs 4 voria issue {} --create-pr
```

### Load Balancing

```bash
# Distribute across LLM providers
LLMS=(modal openai gemini claude)
for i in {1..100}; do
  llm=$((i % 4))   # Rotate: 0=modal, 1=openai, 2=gemini, 3=claude
  voria issue $i --llm ${LLMS[$llm]}
done
```

## Performance Tips

1. **Use release builds** in production
2. **Keep local iterations small** (`--max-iterations 2-3`)
3. **Enable caching** (on by default, but verify)
4. **Use the appropriate LLM** for the task
5. **Focus on important files** (exclude boilerplate/tests)
6. **Monitor token spending** (avoid surprises)
7. **Reuse providers** (a persistent process is faster)
8. **Batch operations** (parallel processing)

## Advanced Tuning

### Rust Build Optimization

```bash
# Profile-guided optimization: instrument, run a representative workload,
# then rebuild with the collected profile
# (llvm-profdata must match the LLVM version rustc was built with)
RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data" cargo build --release
./target/release/voria plan 1
llvm-profdata merge -o /tmp/pgo-data/merged.profdata /tmp/pgo-data
RUSTFLAGS="-Cprofile-use=/tmp/pgo-data/merged.profdata" cargo build --release

# High performance
RUSTFLAGS="-C opt-level=3 -C lto=yes -C codegen-units=1" cargo build --release
```
329
+
330
+ ### Python Optimization
331
+
332
+ ```python
333
+ # Use PyPy (faster than CPython)
334
+ pypy3 -m venv venv
335
+ source venv/bin/activate
336
+
337
+ # Compile to bytecode
338
+ python3 -m py_compile voria/
339
+ ```

---

**See Also:**
- [SECURITY.md](SECURITY.md) - Security best practices
- [TROUBLESHOOTING.md](TROUBLESHOOTING.md) - Common issues
- [ARCHITECTURE.md](ARCHITECTURE.md) - System design