claw-code 0.2.0__tar.gz → 0.2.3__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- claw_code-0.2.3/PKG-INFO +943 -0
- claw_code-0.2.3/README.md +905 -0
- claw_code-0.2.3/claw_code.egg-info/PKG-INFO +943 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/claw_code.egg-info/SOURCES.txt +0 -2
- {claw_code-0.2.0 → claw_code-0.2.3}/pyproject.toml +1 -1
- {claw_code-0.2.0 → claw_code-0.2.3}/src/config.py +36 -1
- {claw_code-0.2.0 → claw_code-0.2.3}/src/init_wizard.py +32 -11
- {claw_code-0.2.0 → claw_code-0.2.3}/src/main.py +9 -1
- {claw_code-0.2.0 → claw_code-0.2.3}/src/repl.py +52 -28
- {claw_code-0.2.0 → claw_code-0.2.3}/src/services/ollama_adapter.py +65 -8
- {claw_code-0.2.0 → claw_code-0.2.3}/src/services/ollama_setup.py +17 -2
- claw_code-0.2.0/PHASE1_COMPLETE.md +0 -317
- claw_code-0.2.0/PHASE1_IMPLEMENTATION.md +0 -287
- claw_code-0.2.0/PKG-INFO +0 -560
- claw_code-0.2.0/README.md +0 -522
- claw_code-0.2.0/claw_code.egg-info/PKG-INFO +0 -560
- {claw_code-0.2.0 → claw_code-0.2.3}/LICENSE +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/MANIFEST.in +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/claw_code.egg-info/dependency_links.txt +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/claw_code.egg-info/entry_points.txt +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/claw_code.egg-info/requires.txt +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/claw_code.egg-info/top_level.txt +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/setup.cfg +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/setup.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/QueryEngine.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/Tool.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/assistant/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/bootstrap/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/bootstrap_graph.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/bridge/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/buddy/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/cli/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/command_graph.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/commands.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/components/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/constants/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/context.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/coordinator/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/costHook.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/cost_tracker.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/deferred_init.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/dialogLaunchers.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/direct_modes.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/entrypoints/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/execution_registry.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/history.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/hooks/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/ink.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/interactiveHelpers.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/keybindings/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/memdir/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/migrations/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/model_detection.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/models.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/moreright/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/native_ts/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/outputStyles/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/parity_audit.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/permissions.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/plugins/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/port_manifest.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/prefetch.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/projectOnboardingState.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/query.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/query_engine.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/archive_surface_snapshot.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/commands_snapshot.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/assistant.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/bootstrap.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/bridge.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/buddy.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/cli.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/components.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/constants.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/coordinator.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/entrypoints.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/hooks.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/keybindings.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/memdir.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/migrations.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/moreright.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/native_ts.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/outputStyles.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/plugins.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/remote.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/schemas.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/screens.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/server.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/services.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/skills.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/state.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/types.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/upstreamproxy.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/utils.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/vim.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/subsystems/voice.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/reference_data/tools_snapshot.json +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/remote/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/remote_runtime.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/replLauncher.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/runtime.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/schemas/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/screens/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/server/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/services/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/session_store.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/setup.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/skills/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/state/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/system_init.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/task.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/tasks.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/tool_pool.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/tools.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/transcript.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/types/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/upstreamproxy/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/utils/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/vim/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/src/voice/__init__.py +0 -0
- {claw_code-0.2.0 → claw_code-0.2.3}/tests/test_porting_workspace.py +0 -0
claw_code-0.2.3/PKG-INFO
ADDED
@@ -0,0 +1,943 @@
Metadata-Version: 2.4
Name: claw-code
Version: 0.2.3
Summary: Local Claude Code alternative powered by Ollama - zero API costs
Author-email: Claw Code Contributors <instructkr@github.com>
License: Apache-2.0
Project-URL: Homepage, https://github.com/instructkr/claw-code
Project-URL: Documentation, https://github.com/instructkr/claw-code#readme
Project-URL: Repository, https://github.com/instructkr/claw-code.git
Project-URL: Issues, https://github.com/instructkr/claw-code/issues
Keywords: ai,claude,ollama,code-generation,local-llm
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Code Generators
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests<3.0,>=2.28.0
Requires-Dist: psutil<6.0,>=5.9.0
Requires-Dist: prompt-toolkit<4.0,>=3.0.0
Requires-Dist: rich<14.0,>=12.0.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0; extra == "dev"
Requires-Dist: black>=23.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: mypy>=1.0; extra == "dev"
Dynamic: license-file

# 🚀 Claw Code — Local Claude Code Powered by Ollama

**Free. Offline. No API Keys. Works on Your Laptop.**

Experience Claude Code locally and offline, powered by open models like Qwen and Phi through Ollama. Zero API costs, zero data leakage, pure local execution.

**Current Version: 0.2.2** | Status: ✅ Production Ready

---

## 📖 Table of Contents

- [Quick Start](#quick-start)
- [Why Claw Code?](#why-claw-code)
- [Features](#features)
- [Hardware Requirements](#hardware-requirements)
- [Deep Architecture](#deep-architecture)
- [Installation & Setup](#installation--setup)
- [Usage Guide](#usage-guide)
- [Configuration](#configuration)
- [API Reference](#api-reference)
- [Troubleshooting](#troubleshooting)
- [What's New in v0.2.2](#whats-new-in-v022)
- [Roadmap](#roadmap)

---

## Quick Start

### Install

```bash
pip install claw-code
```

### Seamless Setup (Auto-Detects Everything)

```bash
claw-code

# First time: Automatic wizard runs
# ✓ Detects your PC RAM
# ✓ Checks Ollama installation
# ✓ Downloads perfect model for your system
# ✓ Creates ~/.claude.json
# ✓ Launches REPL

# Every time after: Skips wizard, launches REPL immediately
```

### Start Coding

```bash
claw> Write a Python function to merge sorted arrays
# Streams response real-time from local model...

claw> Refactor it to use less memory
# Continues the conversation with context awareness

claw> /exit
# Session auto-saved to resume later (Phase 3)
```

---

## Why Claw Code?

| Feature | Claw Code | Claude API | ChatGPT |
|---------|-----------|------------|---------|
| **Cost** | ✅ Free | ❌ $0.003/1K tokens | ❌ $20/month |
| **Runs Offline** | ✅ 100% local | ❌ Requires internet | ❌ Cloud only |
| **Data Privacy** | ✅ On your machine | ⚠️ Anthropic stores | ❌ OpenAI stores |
| **Works on Laptop** | ✅ 8GB+ RAM | ❌ Requires account | ❌ Requires account |
| **Commands/Tools** | ✅ Full support | ✅ Full support | ❌ Limited |
| **Multi-Turn** | ✅ Stateful sessions | ✅ Stateful sessions | ✅ Stateful sessions |

---

## Hardware Requirements

Choose your model based on available RAM:

```
≤ 8GB VRAM   → phi4-mini (3.8B)     [M1 MacBook Air, budget laptops]
8-16GB VRAM  → qwen2.5-coder:7b ⭐  [Most users, recommended]
16GB+ VRAM   → qwen2.5-coder:14b    [Complex tasks, power users]
```

All models run locally with zero internet after download.
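The tiering above can be sketched in a few lines with psutil (already a declared dependency). This is an illustrative helper, not the project's actual detection code; it uses total system RAM as a rough proxy for VRAM, and the function names are hypothetical:

```python
def tier_for(gib: float) -> str:
    """Map memory in GiB (a rough VRAM proxy) to a model tier."""
    if gib <= 8:
        return "phi4-mini"           # budget laptops
    if gib <= 16:
        return "qwen2.5-coder:7b"    # recommended default
    return "qwen2.5-coder:14b"       # complex tasks, power users


def recommend_model() -> str:
    """Read total RAM via psutil and pick a tier."""
    import psutil  # declared dependency: psutil<6.0,>=5.9.0
    return tier_for(psutil.virtual_memory().total / (1024 ** 3))
```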

---

## Features

### ✅ Core Features (Implemented)

- **Interactive REPL** — Familiar prompt interface with history
- **Streaming Output** — Real-time token display as the model generates
- **Multi-Turn Conversations** — Full context awareness across turns
- **Slash Commands** — `/help`, `/session`, `/clear`, `/exit`, etc.
- **Hardware Auto-Detection** — Uses psutil to pick the best model
- **Configuration Persistence** — First-run setup creates `~/.claude.json`
- **Seamless Setup** — Wizard auto-runs the first time, skips thereafter
- **Windows Support** — Unicode handling for the Windows console
- **Local Execution** — 100% offline, no API keys needed
- **Cost Tracking** — Approximate token counting (an estimate, not a billed cost)
- **Demo Mode** — Graceful fallback when Ollama is unavailable

### 🔄 In Development (Phase 3-5)

- **Session Persistence** — Save/resume conversations
- **Tool Execution** — Shell commands, file operations
- **Permission System** — Approve/deny tool access
- **VSCode Integration** — Extension for editor automation
- **Web GUI** — Browser-based interface
- **Plugin System** — Extend with custom tools
- **Advanced Routing** — Semantic command matching
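Since local models are free to run, the cost tracker only reports approximate token counts. A common heuristic is roughly four characters per token for English text and code; a hypothetical helper (not the actual `cost_tracker` implementation) might be:

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English and code."""
    return max(1, len(text) // 4)
```

The estimate is only used for session statistics, so the error margin of this heuristic is acceptable.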

---

## API Reference

### Python API

#### Query Engine
```python
from src.query_engine import QueryEnginePort

# Get engine instance
engine = QueryEnginePort.from_workspace()

# Stream response
for token in engine.stream_message("Your prompt"):
    print(token, end="", flush=True)

# Add to history
engine.add_message("assistant", "Previous response...")

# Get formatted context
context = engine.get_context()
```

#### Configuration
```python
from src.config import load_config, write_model_to_config

# Load config
config = load_config()
print(config.model)  # "qwen2.5-coder:7b"

# Update model
write_model_to_config("phi4-mini")
```

### HTTP API (Ollama)

Used internally, but you can call it directly:

```bash
# List models
curl http://localhost:11434/api/tags

# Generate (streaming)
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder:7b",
  "prompt": "def hello():",
  "stream": true
}'

# Pull model
curl http://localhost:11434/api/pull -d '{
  "name": "qwen2.5-coder:7b"
}'
```

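With `"stream": true`, Ollama returns one JSON object per line, each carrying a `response` fragment and a `done` flag. A minimal Python client along those lines (a sketch, assuming the standard NDJSON shape; not the project's own adapter code):

```python
import json


def parse_stream_line(line: bytes) -> tuple:
    """Decode one NDJSON line from /api/generate into (token, done)."""
    obj = json.loads(line)
    return obj.get("response", ""), obj.get("done", False)


def stream_generate(prompt: str, model: str = "qwen2.5-coder:7b"):
    """Yield tokens from a local Ollama server (requires Ollama running)."""
    import requests  # declared dependency: requests<3.0,>=2.28.0
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
        timeout=120,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        token, done = parse_stream_line(line)
        yield token
        if done:
            break
```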

---

## Deep Architecture

### System Design Overview

```
┌───────────────────────────────────────────────────────┐
│            User Terminal (REPL Interface)             │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐   │
│  │    Prompt    │ │   Commands   │ │ Completions  │   │
│  │   Handling   │ │    Parser    │ │    Engine    │   │
│  └──────┬───────┘ └──────┬───────┘ └──────┬───────┘   │
└─────────┼────────────────┼────────────────┼───────────┘
          │                │                │
          └────────────────┼────────────────┘
                           │
            ┌──────────────▼──────────────┐
            │       Command Router        │
            │    (Parsing & Dispatch)     │
            └──────────────┬──────────────┘
                           │
         ┌─────────────────┼─────────────────┐
         │                 │                 │
         ▼                 ▼                 ▼
 ┌──────────────┐  ┌──────────────┐  ┌──────────────┐
 │    Slash     │  │    Query     │  │     Tool     │
 │   Commands   │  │    Engine    │  │    System    │
 │   Handler    │  │   (AI/LLM)   │  │    (Perms)   │
 └──────────────┘  └──────┬───────┘  └──────────────┘
                          │
           ┌──────────────▼──────────────┐
           │      Request Formatter      │
           │     (Prompt + Context)      │
           └──────────────┬──────────────┘
                          │
           ┌──────────────▼──────────────┐
           │         HTTP Client         │
           │    (Ollama Integration)     │
           └──────────────┬──────────────┘
                          │
        ┌─────────────────┼─────────────────┐
        ▼                 ▼                 ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│    Local     │  │    Config    │  │   Session    │
│    Ollama    │  │   Manager    │  │    Store     │
│  (localhost  │  │   ~/.claude  │  │    ~/.claw   │
│    :11434)   │  │    .json     │  │   Sessions/  │
└──────────────┘  └──────────────┘  └──────────────┘
```

### Layer Breakdown

#### 1. **Presentation Layer** (`src/repl.py`)
- **Interactive REPL** with prompt-toolkit
- **Rich console** output with proper Windows Unicode handling
- **Tab completion** for commands
- **Streaming output** for real-time responses
- **Status bar** showing model, tokens, session info

**Key Functions:**
```python
run_repl()          # Main REPL loop
print_banner()      # Startup display
handle_input()      # Parse user input
stream_response()   # Stream model output
handle_commands()   # Route /slash commands
```

#### 2. **Command Router** (`src/command_graph.py` + handlers)
- **Regexp-based routing** to match user input to handlers
- **Priority-based dispatch** (specific commands first)
- **Fallback to query engine** for natural language
- **Error handling** with graceful fallbacks

**Routing Priority:**
```
1. Exact match   (/help, /exit, /model)
2. Prefix match  (/ses... → /session)
3. Fuzzy match   (help → /help)
4. Query engine  (everything else → AI)
```

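The four-tier priority above can be sketched as an ordered dispatcher. This is illustrative only; the command set and function name are hypothetical, not the actual `command_graph.py` API:

```python
COMMANDS = {"/help", "/exit", "/model", "/session", "/clear"}  # illustrative set


def route(user_input: str) -> str:
    """Resolve input to a command name, or fall through to the query engine."""
    text = user_input.strip()
    # 1. Exact match
    if text in COMMANDS:
        return text
    # 2. Prefix match (unambiguous abbreviations like "/ses")
    if text.startswith("/"):
        hits = [c for c in COMMANDS if c.startswith(text)]
        if len(hits) == 1:
            return hits[0]
    # 3. Fuzzy match (a bare word that names a command)
    if "/" + text in COMMANDS:
        return "/" + text
    # 4. Everything else goes to the AI
    return "query_engine"
```

For example, `route("/ses")` resolves to `/session` via the prefix tier, while `route("Write a function")` falls through to the query engine.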
#### 3. **Query Engine** (`src/query_engine.py`)
- **Multi-turn conversation** with context awareness
- **Prompt formatting** with role-based context
- **Token counting** for budget tracking
- **Response streaming** from the Ollama HTTP API

**Core Methods:**
```python
stream_message()   # Send prompt, stream response
add_message()      # Add to conversation history
get_context()      # Build prompt with history
format_prompt()    # System + user prompt
```

#### 4. **Ollama Integration** (`src/services/ollama_setup.py`)
- **Auto-discovery** of a running Ollama instance
- **Model listing** from the Ollama API
- **Auto-start daemon** (Windows-safe subprocess handling)
- **HTTP client** with proper error handling

**Connection Pattern:**
```
# Ollama API endpoints used
/api/tags         # List available models
/api/generate     # Send prompt (streaming)
/api/show/:name   # Get model metadata
/api/pull         # Download model
```

#### 5. **Configuration Manager** (`src/config.py`)
- **Smart config loading** from `~/.claude.json`
- **Field filtering** (compatibility with GitHub Copilot settings)
- **Defaults fallback** for first-time users
- **Config persistence** after the setup wizard

**Config Structure:**
```json
{
  "provider": "ollama",
  "ollama_base_url": "http://localhost:11434",
  "model": "qwen2.5-coder:7b",
  "max_tokens": 4000,
  "temperature": 0.7,
  "auto_detect_vram": true
}
```

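Loading with field filtering and a defaults fallback can be sketched like this (illustrative only; the real `load_config` in `src/config.py` may differ):

```python
import json
from pathlib import Path

DEFAULTS = {
    "provider": "ollama",
    "ollama_base_url": "http://localhost:11434",
    "model": "qwen2.5-coder:7b",
    "max_tokens": 4000,
    "temperature": 0.7,
    "auto_detect_vram": True,
}


def load_config(path: Path = Path.home() / ".claude.json") -> dict:
    """Merge known fields from ~/.claude.json over defaults; ignore the rest."""
    cfg = dict(DEFAULTS)
    if path.exists():
        raw = json.loads(path.read_text(encoding="utf-8"))
        # Field filtering: unknown keys (e.g. Copilot settings) are dropped
        cfg.update({k: v for k, v in raw.items() if k in DEFAULTS})
    return cfg
```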
#### 6. **Seamless Setup System** (`src/init_wizard.py` + `main.py`)
- **First-time wizard** runs automatically if config is missing
- **Smart skipping** if Ollama + models are already installed
- **Hardware detection** via psutil
- **Model recommendation** based on available VRAM

**Smart Behavior:**
```
# First run (no ~/.claude.json):
claw-code
→ Detects hardware
→ Starts Ollama (if needed)
→ Pulls best model
→ Saves config
→ Launches REPL

# Subsequent runs:
claw-code
→ Checks ~/.claude.json exists
→ Skips wizard entirely
→ Launches REPL immediately
```

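The gate between the two paths is simply the presence of the config file. A sketch of that check (the `run_init_wizard`/`run_repl` names below are hypothetical, not the actual entry-point API):

```python
from pathlib import Path


def needs_wizard(config_path: Path = Path.home() / ".claude.json") -> bool:
    """A first run is detected by the absence of the config file."""
    return not config_path.exists()


# Launch flow sketch:
# if needs_wizard():
#     run_init_wizard()  # detect hardware, pull model, write ~/.claude.json
# run_repl()
```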
#### 7. **Session Management** (Phase 3)
- **Session storage** in `~/.claw/sessions/`
- **Conversation history** serialization
- **Token tracking** per session
- **Resume capability** for multi-day work

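Since this layer is still Phase 3, the shape below is only a sketch of what the serialization could look like (one JSON file per session under `~/.claw/sessions/`; all names are illustrative):

```python
import json
import uuid
from pathlib import Path


def save_session(messages: list, base: Path = Path.home() / ".claw" / "sessions") -> Path:
    """Serialize one conversation as JSON under ~/.claw/sessions/<uuid>.json."""
    base.mkdir(parents=True, exist_ok=True)
    path = base / f"{uuid.uuid4()}.json"
    path.write_text(json.dumps({"messages": messages}, indent=2), encoding="utf-8")
    return path


def load_session(path: Path) -> list:
    """Restore the message list for a resumed session."""
    return json.loads(path.read_text(encoding="utf-8"))["messages"]
```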

---

### Core Data Flow

#### User Input → REPL Response

```
User Types:
  "Write a Python fibonacci function"

        ↓

Input Parser (repl.py):
  Detects not a /command

        ↓

Command Router (command_graph.py):
  No exact match
  → Falls through to QueryEngine

        ↓

Query Engine (query_engine.py):
  1. Format prompt with system context
  2. Add to conversation history
  3. Calculate tokens
  4. Call Ollama HTTP API

        ↓

Ollama HTTP Client (requests):
  POST http://localhost:11434/api/generate
  Body: {model: "qwen2.5-coder:7b", prompt: "...", stream: true}

        ↓

Local Model (qwen2.5-coder:7b):
  Generates response token-by-token
  Streams back via HTTP

        ↓

Response Streamer (repl.py):
  1. Receive tokens from HTTP stream
  2. Decode tokens
  3. Print to console in real-time
  4. Track token count

        ↓

User Sees:
  "def fibonacci(n):\n    if n <= 1:\n        return n\n    ..."
  With typing animation effect
```

---

### File Structure

```
claw-code/
├── src/                        # Python implementation
│   ├── main.py                 # Entry point, `claw-code` command
│   ├── repl.py                 # Interactive REPL loop
│   ├── config.py               # Config loading & persistence
│   ├── query_engine.py         # LLM query interface
│   ├── command_graph.py        # Command routing logic
│   ├── init_wizard.py          # First-time setup wizard
│   ├── session_store.py        # Session load/save
│   │
│   ├── services/
│   │   ├── ollama_setup.py     # Ollama detection & auto-start
│   │   └── cost_tracker.py     # Token counting
│   │
│   ├── utils/
│   │   ├── prompt_formatter.py # Prompt template formatting
│   │   └── validators.py       # Input validation
│   │
│   └── types/                  # Type definitions
│
├── rust/                       # Rust implementation (emerging)
│   ├── crates/
│   │   ├── api/                # HTTP server
│   │   ├── runtime/            # VM/executor
│   │   └── commands/           # Integrated commands
│   └── Cargo.toml
│
├── tests/
│   └── test_parity_audit.py    # Python ↔ Rust parity tests
│
├── pyproject.toml              # Package metadata (v0.2.2)
└── README.md                   # This file
```

---

### Key Classes & Interfaces

#### `ClaudeConfig` (src/config.py)
```python
@dataclass(frozen=True)
class ClaudeConfig:
    provider: str           # "ollama" or "anthropic"
    ollama_base_url: str    # "http://localhost:11434"
    model: str              # "qwen2.5-coder:7b"
    max_tokens: int         # 4000
    temperature: float      # 0.7
    auto_detect_vram: bool  # True
```

#### `QueryEnginePort` (src/query_engine.py)
```python
class QueryEnginePort:
    def stream_message(self, prompt: str) -> Iterator[str]: ...
    def add_message(self, role: str, content: str) -> None: ...
    def get_context(self) -> str: ...
    def format_prompt(self, user_input: str) -> str: ...

    @classmethod
    def from_workspace(cls) -> "QueryEnginePort": ...
```

#### `ReplState` (src/repl.py)
```python
@dataclass
class ReplState:
    model: str          # Current model
    engine: Optional[QueryEnginePort]
    session_id: str     # Current session UUID
    input_tokens: int   # Total input tokens
    output_tokens: int  # Total output tokens
```

---

### Extensibility Points

1. **Add New Commands** → Register in the `SLASH_COMMANDS` dict
2. **Add Tools** → Implement the `Tool` interface + add to the registry
3. **Custom Models** → Point `ollama_base_url` to a different server
4. **Plugins** → Create a `src/plugins/` directory (Phase 5)

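Point 1 might look like this in practice (a sketch; the actual `SLASH_COMMANDS` structure and handler signature in the codebase may differ, and `/tokens` is a hypothetical command):

```python
SLASH_COMMANDS = {}  # name -> handler; illustrative registry


def slash_command(name: str):
    """Decorator that registers a /command handler in the registry."""
    def register(fn):
        SLASH_COMMANDS[name] = fn
        return fn
    return register


@slash_command("/tokens")
def cmd_tokens(state: dict) -> str:
    # Hypothetical command: report tokens used this session
    return f"in={state['input_tokens']} out={state['output_tokens']}"
```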
---

## Installation & Setup

### Prerequisites

- **Python 3.9+** (3.12 recommended)
- **Ollama** ([download here](https://ollama.ai)) — required once, auto-detected
- **5-10 GB** free disk space (for the model download)

### Step 1: Install Python Package

```bash
pip install claw-code
```

### Step 2: Run (Wizard Runs Automatically on First Run)

```bash
claw-code
```

On first run:
1. ✅ Detects Ollama installation
2. ✅ Checks your VRAM using psutil
3. ✅ Recommends an appropriate model
4. ✅ Downloads the model (~2-5 minutes)
5. ✅ Verifies everything works
6. ✅ Creates the `~/.claude.json` config
7. ✅ Launches the REPL immediately

On subsequent runs:
- Skips the wizard entirely
- Launches the REPL instantly

### Step 3: Start Using

```bash
claw> your prompt here
```

---

## Usage Guide

### Interactive REPL

The main interface follows a familiar prompt pattern:

```bash
claw> Write a function to parse CSV files
# Model streams response in real-time

claw> Add error handling for missing values
# Context preserved across turns

claw> /session
# Shows current session stats

claw> /exit
# Option to save session
```

### Core Commands

| Command | Purpose | Example |
|---------|---------|---------|
| `/help` | Show all commands | `claw> /help` |
| `/model` | Show current model & config | `claw> /model` |
| `/session` | Show token usage & stats | `claw> /session` |
| `/clear` | Clear conversation history | `claw> /clear` |
| `/exit` | Exit REPL (save option) | `claw> /exit` |

### Examples

#### Code Generation
```
claw> Write async Python code to fetch weather from API

[Returns structured async/await code with error handling]
```

#### Code Review
```
claw> Review this code for security issues:
      def login(user, password):
          conn = sqlite3.connect("users.db")
          results = conn.execute("SELECT * FROM users WHERE name='" + user + "'")
          return results[0][1] == password

[AI identifies SQL injection vulnerability and suggests fix]
```

#### Learning
```
claw> Explain closures in JavaScript with examples

[Returns clear explanation with practical examples]

claw> Show me 3 real-world uses in modern frameworks

[Provides React, Vue, Angular examples]
```

---

## Configuration

### Config File Location: `~/.claude.json`

Created automatically on first run. Edit manually to customize:

```json
{
  "provider": "ollama",
  "ollama_base_url": "http://localhost:11434",
  "model": "qwen2.5-coder:7b",
  "max_tokens": 4000,
  "temperature": 0.7,
  "auto_detect_vram": true
}
```

### Configuration Options
|
|
639
|
+
|
|
640
|
+
| Option | Type | Purpose | Default |
|
|
641
|
+
|--------|------|---------|---------|
|
|
642
|
+
| `provider` | string | LLM provider ("ollama" for now) | "ollama" |
|
|
643
|
+
| `ollama_base_url` | string | Ollama server URL | "http://localhost:11434" |
|
|
644
|
+
| `model` | string | Model name (must exist in Ollama) | "qwen2.5-coder:7b" |
|
|
645
|
+
| `max_tokens` | int | Max tokens per response | 4000 |
|
|
646
|
+
| `temperature` | float | Model creativity (0-1) | 0.7 |
|
|
647
|
+
| `auto_detect_vram` | bool | Auto-select model by VRAM | true |
|
|
648
|
+
|
|
649
|
+
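Loading the file amounts to reading JSON and dropping any keys the tool does not recognize, since `~/.claude.json` may also hold settings written by other tools. A minimal sketch, assuming field names that mirror the table above (the real logic lives in `src/config.py`):

```python
import json
from dataclasses import dataclass, fields
from pathlib import Path

@dataclass
class ClaudeConfig:
    provider: str = "ollama"
    ollama_base_url: str = "http://localhost:11434"
    model: str = "qwen2.5-coder:7b"
    max_tokens: int = 4000
    temperature: float = 0.7
    auto_detect_vram: bool = True

def load_config(path: Path = Path.home() / ".claude.json") -> ClaudeConfig:
    if not path.exists():
        return ClaudeConfig()  # fall back to defaults on first run
    raw = json.loads(path.read_text(encoding="utf-8"))
    known = {f.name for f in fields(ClaudeConfig)}
    # Drop unknown keys (e.g. settings from other tools) so the
    # dataclass constructor does not raise TypeError.
    return ClaudeConfig(**{k: v for k, v in raw.items() if k in known})
```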
### Common Configurations

#### Low-End Machine (≤8GB RAM)
```json
{
  "model": "phi4-mini",
  "max_tokens": 2048,
  "temperature": 0.6
}
```

#### Power User (16GB+ RAM)
```json
{
  "model": "qwen2.5-coder:14b",
  "max_tokens": 8000,
  "temperature": 0.7
}
```

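With `auto_detect_vram` enabled, claw-code picks a model tier from detected memory (via `psutil` in the real code). A rough sketch of that kind of selection logic; the thresholds here are hypothetical, chosen only to mirror the two configurations above:

```python
def pick_model(available_gb: float) -> str:
    """Pick a model tier from available memory in GB (illustrative cutoffs)."""
    if available_gb >= 16:
        return "qwen2.5-coder:14b"
    if available_gb >= 8:
        return "qwen2.5-coder:7b"
    return "phi4-mini"

# In claw-code, available_gb would come from hardware detection, e.g.
# psutil.virtual_memory().total / 1e9 on a CPU-only machine.
```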
#### Remote Ollama Server
```json
{
  "ollama_base_url": "http://192.168.1.100:11434"
}
```

---

## Troubleshooting

### Installation Issues

#### "Python not found"
Ensure Python 3.9+ is installed and on your PATH:
```bash
python --version
```

#### "pip not found"
On some systems, use `python -m pip`:
```bash
python -m pip install claw-code
```

### Ollama Issues

#### "ollama: command not found"
Download Ollama from https://ollama.ai and verify the installation:
```bash
ollama --version
```

#### "Connection refused" during startup
Ollama must be running. In another terminal:
```bash
ollama serve
# Or let claw-code auto-start it (if configured)
```

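To confirm the server is actually reachable before digging further, you can query Ollama's `/api/tags` endpoint directly (it lists locally installed models). A small stdlib-only sketch:

```python
import json
import urllib.error
import urllib.request

def ollama_reachable(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            json.load(resp)  # valid JSON means a real HTTP endpoint answered
            return True
    except (urllib.error.URLError, OSError, ValueError):
        return False
```

If this returns `False`, the problem is on the Ollama side (not running, wrong port, or firewalled), not in claw-code.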
#### "Model not found" error
Download the model first:
```bash
ollama pull qwen2.5-coder:7b
# Then run claw-code again
```

### Performance Issues

#### "Slow responses" or "First response takes 1-2 min"
This is normal on first run: the model is being loaded into VRAM. Subsequent responses are faster.

**To improve:**
- Use a smaller model: `phi4-mini` instead of `qwen2.5-coder:14b`
- Ensure Ollama is the only heavy process running
- Check disk speed (the model is loaded from disk into RAM)

#### "Out of memory" errors
The model is too large for the available VRAM. Switch to a smaller one:
```json
{
  "model": "phi4-mini"
}
```

#### "Not enough disk space"
Models require 3-14GB of free space. Check what's available:
```bash
df -h                                 # On Unix/Mac
wmic logicaldisk get name,freespace   # On Windows
```

### Windows-Specific Issues

#### "UnicodeEncodeError" in output
This usually indicates a Windows console encoding issue. Upgrade to the latest version:
```bash
pip install --upgrade claw-code
```

#### "No Windows console found"
When piping input/output, prompt-toolkit requires a real console:
```bash
claw-code             # ✅ Works (interactive)
echo "" | claw-code   # ❌ Fails (piped input)
```

### Configuration Issues

#### Changes to ~/.claude.json not taking effect
The config is loaded at startup, so changes require a restart:
```bash
claw-code             # Old config
# (Ctrl+C to exit)
# Edit ~/.claude.json
claw-code             # New config loaded
```

#### "Invalid config" error
Reset to defaults:
```bash
rm ~/.claude.json
claw-code             # Recreates the config
```

---

## What's New in v0.2.2

### ✨ New Features
- **Seamless Setup** — Auto-running wizard now skips if already configured
- **Windows Console Fix** — Proper Unicode handling with ASCII fallback
- **Smart Config Filtering** — Ignores GitHub Copilot settings in ~/.claude.json
- **Better Error Messages** — Clear feedback on common issues

### 🐛 Bugs Fixed
- Missing return value from `load_config()`
- Extra GitHub Copilot settings breaking the ClaudeConfig dataclass
- Windows Unicode encoding error when printing the banner
- prompt-toolkit output handling on Windows

### 📦 Dependencies
- `requests` — HTTP client for Ollama
- `psutil` — Hardware detection for model auto-select
- `prompt-toolkit` — Interactive REPL
- `rich` — Beautiful console formatting

### 🔄 Deployment Status
- ✅ Published to PyPI as `claw-code==0.2.2`
- ✅ All tests passing on Windows, macOS, and Linux
- ✅ Clean install verified (no missing dependencies)
- ✅ UI/REPL verified to launch correctly

---

## Contributing & Development

### Getting Started (Development)

1. Clone the repository:
```bash
git clone https://github.com/instructkr/claw-code
cd claw-code
```

2. Create a virtual environment:
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

3. Install in editable mode:
```bash
pip install -e ".[dev]"
```

4. Run the tests:
```bash
pytest tests/ -v
```

5. Check code quality:
```bash
black src/ tests/
ruff check src/ tests/
mypy src/
```

### Project Structure for Contributors

Key files to understand:
- `src/main.py` — Entry point, argument parsing
- `src/repl.py` — REPL loop, command handling
- `src/QueryEngine.py` — LLM integration
- `src/config.py` — Configuration management
- `src/init_wizard.py` — Seamless setup logic
- `src/services/ollama_setup.py` — Ollama detection/startup

### Adding a New Command

1. Register it in `src/repl.py`:
```python
SLASH_COMMANDS = {
    # ... existing commands ...
    "/mycommand": "Description of /mycommand",
}
```

2. Add a handler in `run_repl()`:
```python
if user_input.startswith("/mycommand"):
    handle_mycommand(...)
```

3. Implement the handler:
```python
def handle_mycommand(args):
    # Your logic here
    print("Command executed")
```

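Put together, the three steps amount to a small dispatch pattern. A self-contained sketch (the names mirror the steps above; the real loop in `src/repl.py` does considerably more):

```python
SLASH_COMMANDS = {
    "/help": "Show all commands",
    "/mycommand": "Description of /mycommand",
}

def handle_mycommand(args: str) -> str:
    # Placeholder logic; a real handler would act on `args`.
    return f"mycommand ran with: {args!r}"

def dispatch(user_input: str) -> str:
    """Route one line of REPL input to the matching handler."""
    if user_input.startswith("/mycommand"):
        return handle_mycommand(user_input[len("/mycommand"):].strip())
    if user_input.startswith("/help"):
        return "\n".join(f"{cmd}  {desc}" for cmd, desc in SLASH_COMMANDS.items())
    return "unknown command"
```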
---

## Roadmap

### Phase 3: Session Management (🔄 Next)
- Save conversations to disk
- Resume sessions across restarts
- Session browsing and export
- Token usage analytics

### Phase 4: Advanced Features
- Tool/command execution support
- File system operations
- Shell command integration
- Permission system

### Phase 5: GUI & Extensions
- Web-based UI
- VSCode extension
- Plugin system
- Custom tool support

### Phase 6: Enterprise Features
- Multi-user support
- Team sessions
- Usage quotas
- Audit logging

---

## Comparison Matrix

### Claw Code vs Alternatives

| Feature | Claw Code | Claude API | ChatGPT | Copilot |
|---------|-----------|-----------|---------|---------|
| **Cost** | Free | $0.003/1K tokens | $20/month | $10/month |
| **Offline** | ✅ 100% | ❌ Cloud | ❌ Cloud | ❌ Cloud |
| **Privacy** | ✅ Local | ⚠️ Stored | ❌ Stored | ⚠️ Stored |
| **Setup** | ✅ 1 command | ❌ API key | ❌ Account | ❌ Account |
| **Customizable** | ✅ Open source | ❌ Closed | ❌ Closed | ❌ Closed |
| **Available Models** | ✅ Qwen, Phi, etc. | ❌ Claude only | ❌ GPT only | ❌ GPT only |
| **Speed** | ⚡ Hardware-dependent | ~5-10s | ~5-10s | ~5-10s |
| **Conversation** | ✅ Stateful | ✅ Stateful | ✅ Stateful | ✅ Stateful |

---

## Acknowledgments

Built on the shoulders of giants:
- **Ollama** — Local LLM inference
- **prompt-toolkit** — Interactive CLI
- **rich** — Beautiful console output
- **Qwen** / **Phi** — Open models powering inference

---

## License

Apache License 2.0 — See [LICENSE](LICENSE) for details.

Free for personal and commercial use.

---

## Support & Community

- **GitHub Issues** — Report bugs or request features
- **Discussions** — Ask questions or share ideas
- **Feedback** — Help shape the roadmap

---

**Made with ❤️ by Claw Code contributors.**