claw-code 0.2.0__tar.gz → 0.2.2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (120)
  1. {claw_code-0.2.0 → claw_code-0.2.2}/PKG-INFO +1 -1
  2. {claw_code-0.2.0 → claw_code-0.2.2}/claw_code.egg-info/PKG-INFO +1 -1
  3. {claw_code-0.2.0 → claw_code-0.2.2}/claw_code.egg-info/SOURCES.txt +0 -2
  4. {claw_code-0.2.0 → claw_code-0.2.2}/pyproject.toml +1 -1
  5. {claw_code-0.2.0 → claw_code-0.2.2}/src/config.py +30 -0
  6. {claw_code-0.2.0 → claw_code-0.2.2}/src/init_wizard.py +32 -11
  7. {claw_code-0.2.0 → claw_code-0.2.2}/src/main.py +9 -1
  8. {claw_code-0.2.0 → claw_code-0.2.2}/src/services/ollama_adapter.py +65 -8
  9. {claw_code-0.2.0 → claw_code-0.2.2}/src/services/ollama_setup.py +17 -2
  10. claw_code-0.2.0/PHASE1_COMPLETE.md +0 -317
  11. claw_code-0.2.0/PHASE1_IMPLEMENTATION.md +0 -287
  12. {claw_code-0.2.0 → claw_code-0.2.2}/LICENSE +0 -0
  13. {claw_code-0.2.0 → claw_code-0.2.2}/MANIFEST.in +0 -0
  14. {claw_code-0.2.0 → claw_code-0.2.2}/README.md +0 -0
  15. {claw_code-0.2.0 → claw_code-0.2.2}/claw_code.egg-info/dependency_links.txt +0 -0
  16. {claw_code-0.2.0 → claw_code-0.2.2}/claw_code.egg-info/entry_points.txt +0 -0
  17. {claw_code-0.2.0 → claw_code-0.2.2}/claw_code.egg-info/requires.txt +0 -0
  18. {claw_code-0.2.0 → claw_code-0.2.2}/claw_code.egg-info/top_level.txt +0 -0
  19. {claw_code-0.2.0 → claw_code-0.2.2}/setup.cfg +0 -0
  20. {claw_code-0.2.0 → claw_code-0.2.2}/setup.py +0 -0
  21. {claw_code-0.2.0 → claw_code-0.2.2}/src/QueryEngine.py +0 -0
  22. {claw_code-0.2.0 → claw_code-0.2.2}/src/Tool.py +0 -0
  23. {claw_code-0.2.0 → claw_code-0.2.2}/src/__init__.py +0 -0
  24. {claw_code-0.2.0 → claw_code-0.2.2}/src/assistant/__init__.py +0 -0
  25. {claw_code-0.2.0 → claw_code-0.2.2}/src/bootstrap/__init__.py +0 -0
  26. {claw_code-0.2.0 → claw_code-0.2.2}/src/bootstrap_graph.py +0 -0
  27. {claw_code-0.2.0 → claw_code-0.2.2}/src/bridge/__init__.py +0 -0
  28. {claw_code-0.2.0 → claw_code-0.2.2}/src/buddy/__init__.py +0 -0
  29. {claw_code-0.2.0 → claw_code-0.2.2}/src/cli/__init__.py +0 -0
  30. {claw_code-0.2.0 → claw_code-0.2.2}/src/command_graph.py +0 -0
  31. {claw_code-0.2.0 → claw_code-0.2.2}/src/commands.py +0 -0
  32. {claw_code-0.2.0 → claw_code-0.2.2}/src/components/__init__.py +0 -0
  33. {claw_code-0.2.0 → claw_code-0.2.2}/src/constants/__init__.py +0 -0
  34. {claw_code-0.2.0 → claw_code-0.2.2}/src/context.py +0 -0
  35. {claw_code-0.2.0 → claw_code-0.2.2}/src/coordinator/__init__.py +0 -0
  36. {claw_code-0.2.0 → claw_code-0.2.2}/src/costHook.py +0 -0
  37. {claw_code-0.2.0 → claw_code-0.2.2}/src/cost_tracker.py +0 -0
  38. {claw_code-0.2.0 → claw_code-0.2.2}/src/deferred_init.py +0 -0
  39. {claw_code-0.2.0 → claw_code-0.2.2}/src/dialogLaunchers.py +0 -0
  40. {claw_code-0.2.0 → claw_code-0.2.2}/src/direct_modes.py +0 -0
  41. {claw_code-0.2.0 → claw_code-0.2.2}/src/entrypoints/__init__.py +0 -0
  42. {claw_code-0.2.0 → claw_code-0.2.2}/src/execution_registry.py +0 -0
  43. {claw_code-0.2.0 → claw_code-0.2.2}/src/history.py +0 -0
  44. {claw_code-0.2.0 → claw_code-0.2.2}/src/hooks/__init__.py +0 -0
  45. {claw_code-0.2.0 → claw_code-0.2.2}/src/ink.py +0 -0
  46. {claw_code-0.2.0 → claw_code-0.2.2}/src/interactiveHelpers.py +0 -0
  47. {claw_code-0.2.0 → claw_code-0.2.2}/src/keybindings/__init__.py +0 -0
  48. {claw_code-0.2.0 → claw_code-0.2.2}/src/memdir/__init__.py +0 -0
  49. {claw_code-0.2.0 → claw_code-0.2.2}/src/migrations/__init__.py +0 -0
  50. {claw_code-0.2.0 → claw_code-0.2.2}/src/model_detection.py +0 -0
  51. {claw_code-0.2.0 → claw_code-0.2.2}/src/models.py +0 -0
  52. {claw_code-0.2.0 → claw_code-0.2.2}/src/moreright/__init__.py +0 -0
  53. {claw_code-0.2.0 → claw_code-0.2.2}/src/native_ts/__init__.py +0 -0
  54. {claw_code-0.2.0 → claw_code-0.2.2}/src/outputStyles/__init__.py +0 -0
  55. {claw_code-0.2.0 → claw_code-0.2.2}/src/parity_audit.py +0 -0
  56. {claw_code-0.2.0 → claw_code-0.2.2}/src/permissions.py +0 -0
  57. {claw_code-0.2.0 → claw_code-0.2.2}/src/plugins/__init__.py +0 -0
  58. {claw_code-0.2.0 → claw_code-0.2.2}/src/port_manifest.py +0 -0
  59. {claw_code-0.2.0 → claw_code-0.2.2}/src/prefetch.py +0 -0
  60. {claw_code-0.2.0 → claw_code-0.2.2}/src/projectOnboardingState.py +0 -0
  61. {claw_code-0.2.0 → claw_code-0.2.2}/src/query.py +0 -0
  62. {claw_code-0.2.0 → claw_code-0.2.2}/src/query_engine.py +0 -0
  63. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/__init__.py +0 -0
  64. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/archive_surface_snapshot.json +0 -0
  65. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/commands_snapshot.json +0 -0
  66. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/assistant.json +0 -0
  67. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/bootstrap.json +0 -0
  68. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/bridge.json +0 -0
  69. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/buddy.json +0 -0
  70. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/cli.json +0 -0
  71. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/components.json +0 -0
  72. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/constants.json +0 -0
  73. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/coordinator.json +0 -0
  74. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/entrypoints.json +0 -0
  75. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/hooks.json +0 -0
  76. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/keybindings.json +0 -0
  77. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/memdir.json +0 -0
  78. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/migrations.json +0 -0
  79. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/moreright.json +0 -0
  80. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/native_ts.json +0 -0
  81. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/outputStyles.json +0 -0
  82. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/plugins.json +0 -0
  83. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/remote.json +0 -0
  84. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/schemas.json +0 -0
  85. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/screens.json +0 -0
  86. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/server.json +0 -0
  87. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/services.json +0 -0
  88. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/skills.json +0 -0
  89. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/state.json +0 -0
  90. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/types.json +0 -0
  91. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/upstreamproxy.json +0 -0
  92. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/utils.json +0 -0
  93. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/vim.json +0 -0
  94. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/subsystems/voice.json +0 -0
  95. {claw_code-0.2.0 → claw_code-0.2.2}/src/reference_data/tools_snapshot.json +0 -0
  96. {claw_code-0.2.0 → claw_code-0.2.2}/src/remote/__init__.py +0 -0
  97. {claw_code-0.2.0 → claw_code-0.2.2}/src/remote_runtime.py +0 -0
  98. {claw_code-0.2.0 → claw_code-0.2.2}/src/repl.py +0 -0
  99. {claw_code-0.2.0 → claw_code-0.2.2}/src/replLauncher.py +0 -0
  100. {claw_code-0.2.0 → claw_code-0.2.2}/src/runtime.py +0 -0
  101. {claw_code-0.2.0 → claw_code-0.2.2}/src/schemas/__init__.py +0 -0
  102. {claw_code-0.2.0 → claw_code-0.2.2}/src/screens/__init__.py +0 -0
  103. {claw_code-0.2.0 → claw_code-0.2.2}/src/server/__init__.py +0 -0
  104. {claw_code-0.2.0 → claw_code-0.2.2}/src/services/__init__.py +0 -0
  105. {claw_code-0.2.0 → claw_code-0.2.2}/src/session_store.py +0 -0
  106. {claw_code-0.2.0 → claw_code-0.2.2}/src/setup.py +0 -0
  107. {claw_code-0.2.0 → claw_code-0.2.2}/src/skills/__init__.py +0 -0
  108. {claw_code-0.2.0 → claw_code-0.2.2}/src/state/__init__.py +0 -0
  109. {claw_code-0.2.0 → claw_code-0.2.2}/src/system_init.py +0 -0
  110. {claw_code-0.2.0 → claw_code-0.2.2}/src/task.py +0 -0
  111. {claw_code-0.2.0 → claw_code-0.2.2}/src/tasks.py +0 -0
  112. {claw_code-0.2.0 → claw_code-0.2.2}/src/tool_pool.py +0 -0
  113. {claw_code-0.2.0 → claw_code-0.2.2}/src/tools.py +0 -0
  114. {claw_code-0.2.0 → claw_code-0.2.2}/src/transcript.py +0 -0
  115. {claw_code-0.2.0 → claw_code-0.2.2}/src/types/__init__.py +0 -0
  116. {claw_code-0.2.0 → claw_code-0.2.2}/src/upstreamproxy/__init__.py +0 -0
  117. {claw_code-0.2.0 → claw_code-0.2.2}/src/utils/__init__.py +0 -0
  118. {claw_code-0.2.0 → claw_code-0.2.2}/src/vim/__init__.py +0 -0
  119. {claw_code-0.2.0 → claw_code-0.2.2}/src/voice/__init__.py +0 -0
  120. {claw_code-0.2.0 → claw_code-0.2.2}/tests/test_porting_workspace.py +0 -0
--- claw_code-0.2.0/PKG-INFO
+++ claw_code-0.2.2/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: claw-code
-Version: 0.2.0
+Version: 0.2.2
 Summary: Local Claude Code alternative powered by Ollama - zero API costs
 Author-email: Claw Code Contributors <instructkr@github.com>
 License: Apache-2.0
--- claw_code-0.2.0/claw_code.egg-info/PKG-INFO
+++ claw_code-0.2.2/claw_code.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: claw-code
-Version: 0.2.0
+Version: 0.2.2
 Summary: Local Claude Code alternative powered by Ollama - zero API costs
 Author-email: Claw Code Contributors <instructkr@github.com>
 License: Apache-2.0
--- claw_code-0.2.0/claw_code.egg-info/SOURCES.txt
+++ claw_code-0.2.2/claw_code.egg-info/SOURCES.txt
@@ -1,7 +1,5 @@
 LICENSE
 MANIFEST.in
-PHASE1_COMPLETE.md
-PHASE1_IMPLEMENTATION.md
 README.md
 pyproject.toml
 setup.py
--- claw_code-0.2.0/pyproject.toml
+++ claw_code-0.2.2/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "claw-code"
-version = "0.2.0"
+version = "0.2.2"
 description = "Local Claude Code alternative powered by Ollama - zero API costs"
 readme = "README.md"
 requires-python = ">=3.9"
--- claw_code-0.2.0/src/config.py
+++ claw_code-0.2.2/src/config.py
@@ -48,6 +48,36 @@ def load_config() -> ClaudeConfig:
     except (json.JSONDecodeError, IOError) as e:
         logger.warning(f"Failed to load config from {config_path}: {e}, using defaults")
 
+
+def write_model_to_config(model: str) -> None:
+    """Write the selected model name to ~/.claude.json"""
+    config_path = Path.home() / ".claude.json"
+
+    try:
+        # Load existing config
+        if config_path.exists():
+            with open(config_path) as f:
+                config = json.load(f)
+        else:
+            config = {
+                "provider": "ollama",
+                "ollama_base_url": "http://localhost:11434",
+                "max_tokens": 2048,
+                "temperature": 0.7,
+                "auto_detect_vram": True,
+            }
+
+        # Update model
+        config["model"] = model
+
+        # Write back
+        with open(config_path, "w") as f:
+            json.dump(config, f, indent=2)
+
+        logger.info(f"Updated config with model: {model}")
+    except Exception as e:
+        logger.warning(f"Failed to write model to config: {e}")
+
     return ClaudeConfig(
         provider=defaults.get("provider", "ollama"),
         ollama_base_url=defaults.get("ollama_base_url", "http://localhost:11434"),
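Editor's note: the added `write_model_to_config` is a plain read-modify-write over `~/.claude.json`. A minimal standalone sketch of the same pattern, exercised against a throwaway path rather than the real config (the `write_model` name here is hypothetical, not part of the package):

```python
import json
import tempfile
from pathlib import Path

def write_model(config_path: Path, model: str) -> dict:
    """Read-modify-write a JSON config, creating minimal defaults if absent."""
    if config_path.exists():
        config = json.loads(config_path.read_text())
    else:
        config = {"provider": "ollama"}  # package defaults elided for brevity
    config["model"] = model
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Example against a temporary file, not the real ~/.claude.json
with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "claude.json"
    write_model(path, "qwen2.5-coder:7b")
    assert json.loads(path.read_text())["model"] == "qwen2.5-coder:7b"
```

One caveat: as rendered, the hunk places the new function between `load_config`'s exception handler and its `return ClaudeConfig(...)`, which is worth double-checking against the published source.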
--- claw_code-0.2.0/src/init_wizard.py
+++ claw_code-0.2.2/src/init_wizard.py
@@ -98,8 +98,8 @@ def step_5_pull_model(recommended_model: str, available_models: list[str] | None
     """Step 5: Pull model if needed"""
     print("📋 Step 5: Model setup...")
 
-    # Check if recommended is already available
-    if available_models and recommended_model in available_models:
+    # Check if recommended is already available (handles :latest suffix)
+    if available_models and any(recommended_model in m for m in available_models):
         print(f"✓ {recommended_model} already installed\n")
         return recommended_model
 
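Editor's note: the new membership test is a substring match, which tolerates the tag suffixes `ollama list` reports (for example `:latest`). A quick illustration with sample model names:

```python
available = ["phi4-mini:latest", "qwen2.5-coder:7b"]

# The old exact check misses the tagged entry:
assert "phi4-mini" not in available

# The new substring check matches it:
assert any("phi4-mini" in m for m in available)
```

Substring matching can also over-match (a request for `phi4-mini` would match a hypothetical `phi4-mini-custom:latest` entry); splitting on `:` and comparing the base name would be stricter.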
@@ -190,12 +190,30 @@ def print_next_steps():
 
 
 def run_init_wizard() -> bool:
-    """Run the complete initialization wizard"""
+    """Run the complete initialization wizard with smart skipping"""
+    from .services.ollama_setup import check_ollama_installed
+
+    # Check if already fully set up
+    try:
+        available = get_available_models()
+        if available and len(available) > 0:
+            # At least one model is already installed
+            print("✓ Ollama and model already installed. Skipping wizard.\n")
+            return True
+    except Exception:
+        pass
+
     print_banner()
 
-    # Step 1: Check Ollama
-    if not step_1_check_ollama():
+    # Step 1: Check Ollama installation
+    if not check_ollama_installed():
+        print("📋 Step 1: Checking Ollama installation...")
+        print("✗ Ollama not installed")
+        print("\n→ Download from: https://ollama.ai")
+        print("→ Then run: claw-code init\n")
         return False
+    else:
+        print("📋 Step 1: Ollama found ✓\n")
 
     # Step 2: Detect system
     vram_gb = step_2_detect_system()
@@ -206,14 +224,17 @@ def run_init_wizard() -> bool:
     # Step 4: Check available models
     available = step_4_check_models()
 
-    # Step 5: Pull model if needed
-    model = step_5_pull_model(recommended, available)
-
-    # Step 6: Check if Ollama is running
+    # Step 5: Check if Ollama is running (before pull, so server is available)
     ollama_running = step_6_start_ollama()
 
-    # Step 7: Create config
-    step_7_create_config()
+    # Step 6: Pull model if needed
+    model = step_5_pull_model(recommended, available)
+
+    # Step 7: Create config and write model name
+    config_path = step_7_create_config()
+    if model:
+        from .config import write_model_to_config
+        write_model_to_config(model)
 
     # Step 8: Validate
     ready = step_8_validate()
--- claw_code-0.2.0/src/main.py
+++ claw_code-0.2.2/src/main.py
@@ -133,8 +133,16 @@ def main(argv: list[str] | None = None) -> int:
         print(f"✗ Failed to load session: {e}")
         return 1
 
-    # Default to REPL if no command
+    # Default: smart flow for REPL (init wizard if first-time, else REPL)
    if not args.command:
+        from pathlib import Path
+        config_path = Path.home() / ".claude.json"
+
+        # First time: run wizard automatically
+        if not config_path.exists():
+            run_init_wizard()
+
+        # Then launch REPL (whether first-time or returning user)
         run_repl()
         return 0
 
--- claw_code-0.2.0/src/services/ollama_adapter.py
+++ claw_code-0.2.2/src/services/ollama_adapter.py
@@ -7,6 +7,7 @@ from __future__ import annotations
 
 import json
 import logging
+import subprocess
 from dataclasses import dataclass
 from typing import Generator
 
@@ -88,15 +89,71 @@ class OllamaAdapter:
 
     @staticmethod
     def get_available_vram_gb() -> float:
-        """Get available system VRAM in gigabytes"""
-        if not PSUTIL_AVAILABLE:
-            logger.warning("psutil not available; assuming 8GB VRAM")
-            return 8.0
+        """Get available GPU VRAM in gigabytes, with GPU detection fallback chain"""
+        # Try NVIDIA GPU via nvidia-smi
         try:
-            return psutil.virtual_memory().total / (1024 ** 3)
-        except Exception as e:
-            logger.warning(f"Failed to detect VRAM: {e}; assuming 8GB")
-            return 8.0
+            result = subprocess.run(
+                ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader,nounits"],
+                capture_output=True, text=True, timeout=5, check=True
+            )
+            if result.stdout.strip():
+                vram_mib = float(result.stdout.strip().split('\n')[0])
+                vram_gb = vram_mib / 1024
+                logger.info(f"Detected {vram_gb:.1f}GB NVIDIA GPU VRAM")
+                return vram_gb
+        except (FileNotFoundError, subprocess.TimeoutExpired, ValueError):
+            pass
+
+        # Try Apple Silicon / macOS via system_profiler
+        try:
+            result = subprocess.run(
+                ["system_profiler", "SPHardwareDataType"],
+                capture_output=True, text=True, timeout=5, check=True
+            )
+            for line in result.stdout.split('\n'):
+                if 'Memory:' in line:
+                    # Parse "Memory: 16 GB" format
+                    parts = line.split(':')
+                    if len(parts) > 1:
+                        mem_str = parts[1].strip().split()[0]
+                        vram_gb = float(mem_str)
+                        logger.info(f"Detected {vram_gb:.1f}GB Apple Silicon unified memory")
+                        return vram_gb
+        except (FileNotFoundError, subprocess.TimeoutExpired, ValueError):
+            pass
+
+        # Try AMD GPU via rocm-smi
+        try:
+            result = subprocess.run(
+                ["rocm-smi", "--showmeminfo", "vram", "--csv"],
+                capture_output=True, text=True, timeout=5, check=True
+            )
+            if result.stdout.strip():
+                lines = result.stdout.strip().split('\n')
+                if len(lines) > 1:
+                    try:
+                        vram_mb = float(lines[1].split(',')[0].strip())
+                        vram_gb = vram_mb / 1024
+                        logger.info(f"Detected {vram_gb:.1f}GB AMD GPU VRAM")
+                        return vram_gb
+                    except (IndexError, ValueError):
+                        pass
+        except (FileNotFoundError, subprocess.TimeoutExpired):
+            pass
+
+        # Fallback: 50% of system RAM (GPU rarely gets full RAM allocation)
+        if PSUTIL_AVAILABLE:
+            try:
+                system_ram = psutil.virtual_memory().total / (1024 ** 3)
+                fallback_vram = system_ram * 0.5
+                logger.info(f"No GPU detected; using 50% of RAM: {fallback_vram:.1f}GB")
+                return fallback_vram
+            except Exception as e:
+                logger.warning(f"Failed to detect system RAM: {e}; assuming 8GB")
+                return 8.0
+
+        logger.warning("Could not detect VRAM; assuming 8GB")
+        return 8.0
 
     @classmethod
     def recommend_model(cls) -> str:
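Editor's note: the new detection chain tries nvidia-smi, then `system_profiler` on macOS, then rocm-smi, then falls back to half of system RAM via psutil. The two text-parsing steps can be sketched in isolation against sample tool output (the function names here are illustrative, not from the package):

```python
def parse_nvidia_smi(stdout: str) -> float:
    """`nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits`
    prints one MiB value per GPU; use the first and convert to GiB."""
    vram_mib = float(stdout.strip().split("\n")[0])
    return vram_mib / 1024

def parse_system_profiler(stdout: str) -> float:
    """Find the 'Memory: 16 GB' line in SPHardwareDataType output."""
    for line in stdout.split("\n"):
        if "Memory:" in line:
            return float(line.split(":")[1].strip().split()[0])
    raise ValueError("no Memory line found")

assert parse_nvidia_smi("24576\n") == 24.0            # a 24 GiB card
assert parse_system_profiler("Memory: 16 GB\n") == 16.0
```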
--- claw_code-0.2.0/src/services/ollama_setup.py
+++ claw_code-0.2.2/src/services/ollama_setup.py
@@ -9,6 +9,7 @@ import json
 import logging
 import subprocess
 import sys
+import time
 from pathlib import Path
 
 from .ollama_adapter import OllamaAdapter, MODEL_TIERS
@@ -52,14 +53,28 @@ def check_ollama_installed() -> bool:
 
 
 def pull_model(model: str) -> bool:
-    """Pull the specified model using Ollama CLI"""
+    """Pull the specified model using Ollama CLI with auto-start of ollama serve"""
     if not check_ollama_installed():
         logger.error("Ollama CLI not found. Install from https://ollama.ai")
         return False
 
     try:
+        # Try to ensure ollama serve is running in background
+        try:
+            logger.info("Starting ollama serve in background...")
+            subprocess.Popen(
+                ["ollama", "serve"],
+                stdout=subprocess.DEVNULL,
+                stderr=subprocess.DEVNULL,
+                start_new_session=True if sys.platform != "win32" else False
+            )
+            time.sleep(2)  # Give it a moment to start
+        except Exception as e:
+            logger.warning(f"Could not auto-start ollama serve: {e}")
+
         logger.info(f"Pulling {model}...")
-        subprocess.run(["ollama", "pull", model], check=True)
+        # Don't capture output so Ollama's progress bar displays to user
+        subprocess.run(["ollama", "pull", model], check=True, text=True)
         logger.info(f"✓ Successfully pulled {model}")
         return True
     except subprocess.CalledProcessError as e:
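Editor's note: the fixed `time.sleep(2)` after spawning `ollama serve` is a best-effort pause; if the server is slow to bind, the subsequent pull can still race it. A more robust (hypothetical) alternative is to poll the endpoint until it responds, with a deadline:

```python
import time
import urllib.request
import urllib.error

def wait_for_server(url: str, timeout: float = 15.0, interval: float = 0.5) -> bool:
    """Poll url until it answers or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except (urllib.error.URLError, OSError):
            time.sleep(interval)
    return False

# e.g. after subprocess.Popen(["ollama", "serve"], ...):
#     wait_for_server("http://localhost:11434")
```

Note that `urlopen` treats any HTTP error status as a failure here; for Ollama's root endpoint, which answers 200 when up, that is the desired behavior.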
--- claw_code-0.2.0/PHASE1_COMPLETE.md
+++ /dev/null
@@ -1,317 +0,0 @@
-# Phase 1 Complete: Ollama as the Backbone
-
-**Status:** ✅ **COMPLETE** — April 10, 2026
-**Duration:** Weeks 1–2 (Python only)
-**Goal:** Replace Anthropic API with local Ollama, zero API costs
-
----
-
-## 🎯 Milestone Achieved
-
-**Claw Code now works end-to-end with Ollama:**
-```bash
-python -m src turn-loop "write a Python quicksort"
-# ✓ Auto-detects VRAM
-# ✓ Selects best model tier (phi4-mini / qwen2.5-coder:7b / qwen2.5-coder:14b)
-# ✓ Queries local Ollama
-# ✓ No API keys required
-# ✓ Zero costs
-```
-
----
-
-## 📋 Work Completed
-
-### 1. ✅ API Client → Ollama Adapter Integration
-
-**File:** `src/services/ollama_adapter.py` (already done)
-
-**Features:**
-- Auto-detect VRAM and select model tier
-- Non-streaming generation (`generate()`)
-- Real-time streaming (`stream_generate()`)
-- Connection verification
-- Graceful fallback error handling
-
-### 2. ✅ Dynamic Model Detection
-
-**File:** `src/model_detection.py` (NEW)
-
-**Features:**
-- `get_available_models()` — calls `ollama list` to discover installed models
-- `select_best_model()` — picks best from available (prefers qwen2.5-coder:7b)
-- `detect_best_model()` — main entry point with auto-detection
-- Falls back gracefully if no models found
-
-### 3. ✅ Configuration System
-
-**File:** `src/config.py` (NEW)
-
-**Features:**
-- `load_config()` — reads `~/.claude.json` or uses defaults
-- `ClaudeConfig` dataclass with:
-  - `provider` (ollama)
-  - `ollama_base_url` (localhost:11434)
-  - `model` (auto-detect by default)
-  - `max_tokens`, `temperature`, etc.
-- Zero Anthropic API key logic — all removed
-
-### 4. ✅ Query Engine Integration
-
-**File:** `src/query_engine.py` (MODIFIED)
-
-**Changes:**
-- Import OllamaAdapter and model detection
-- Add `ollama_client` field to QueryEnginePort
-- `from_workspace()` now:
-  - Loads config from `~/.claude.json`
-  - Auto-detects model (calls `ollama list`)
-  - Initializes OllamaAdapter
-  - Gracefully falls back if Ollama unavailable
-- `submit_message()` now:
-  - Calls Ollama for actual LLM generation
-  - Builds context from matched commands/tools
-  - Returns real model output (not just summaries)
-- `stream_submit_message()` now:
-  - Streams tokens in real-time from Ollama
-  - Yields events for UI integration
-
-### 5. ✅ Streaming Support
-
-**Files:** `src/main.py`, `src/runtime.py` (MODIFIED)
-
-**Features:**
-- New `stream_turn_loop()` method in PortRuntime
-- CLI flag `--stream` for real-time output:
-  ```bash
-  python -m src turn-loop "prompt" --stream
-  # Shows tokens as they arrive in real-time
-  ```
-- Events streamed:
-  - `message_start` — session info
-  - `command_match` / `tool_match` — routing results
-  - `message_delta` — token text
-  - `message_stop` — usage & stop reason
-
-### 6. ✅ End-to-End Test Suite
-
-**File:** `test_phase1.py` (NEW)
-
-**Tests:**
-1. Configuration loading
-2. Model detection (`ollama list`)
-3. Ollama connection
-4. QueryEngine initialization
-5. Single-turn code generation (quicksort)
-6. Multi-turn conversation
-
-**Run with:**
-```bash
-python test_phase1.py
-```
-
-**Expected output:**
-```
-🧪 CLAW CODE PHASE 1 - END-TO-END TEST SUITE
-
-TEST 1: Configuration Loading
-✓ Provider: ollama
-✓ Ollama URL: http://localhost:11434
-...
-
-TEST 5: Single Turn (Generate Python Quicksort)
-Prompt: Write a Python function that implements quicksort algorithm
-
-Querying Ollama...
-
-Response (512 chars):
-def quicksort(arr):
-    if len(arr) <= 1:
-        return arr
-    pivot = arr[len(arr) // 2]
-...
-
-🎉 ALL TESTS PASSED! Phase 1 is complete.
-```
-
----
-
-## 🏗️ Architecture
-
-### Flow: `python -m src turn-loop "write code"`
-
-```
-main.py
-
-PortRuntime.run_turn_loop()
-
-QueryEnginePort.from_workspace()
-├─ load_config() from ~/.claude.json
-├─ detect_best_model() via `ollama list`
-└─ OllamaAdapter(model="qwen2.5-coder:7b")
-
-submit_message(prompt)
-├─ Build context from commands/tools
-├─ ollama_client.generate(full_prompt)
-└─ Return TurnResult with model output
-
-Display response
-```
-
-### Streaming Flow: `python -m src turn-loop "write code" --stream`
-
-```
-main.py --stream flag
-
-PortRuntime.stream_turn_loop()
-
-QueryEnginePort.stream_submit_message()
-├─ ollama_client.stream_generate(prompt)
-└─ yield tokens as they arrive
-
-main.py displays tokens in real-time
-```
-
----
-
-## 📁 Files Modified/Created
-
-| File | Type | Purpose |
-|------|------|---------|
-| `src/config.py` | ✨ NEW | Configuration loading from ~/.claude.json |
-| `src/model_detection.py` | ✨ NEW | Dynamic model detection via `ollama list` |
-| `src/query_engine.py` | 📝 MOD | Ollama integration, real LLM calls |
-| `src/main.py` | 📝 MOD | Added `--stream` flag, streaming logic |
-| `src/runtime.py` | 📝 MOD | Added `stream_turn_loop()` method |
-| `test_phase1.py` | ✨ NEW | End-to-end test suite |
-
----
-
-## 🚀 Quick Start
-
-### Prerequisites
-```bash
-# 1. Install Ollama from ollama.ai
-# 2. Pull a model
-ollama pull qwen2.5-coder:7b
-# 3. Start Ollama
-ollama serve
-```
-
-### Usage
-```bash
-# Single prompt
-python -m src turn-loop "write a Python quicksort"
-
-# With streaming output
-python -m src turn-loop "write a Python quicksort" --stream
-
-# Multi-turn conversation
-python -m src turn-loop "explain quicksort" --max-turns 3
-
-# Run validation tests
-python test_phase1.py
-```
-
----
-
-## ✅ Phase 1 Checklist
-
-- ✅ Swap API client — Anthropic → Ollama
-- ✅ Support streaming (stream:true)
-- ✅ Model auto-detection (VRAM-based + `ollama list`)
-- ✅ Config layer (.claude.json, provider defaults)
-- ✅ Remove Anthropic API key logic
-- ✅ End-to-end test — `turn-loop` works with qwen2.5-coder:7b
-- ✅ Zero API keys required
-- ✅ Real model outputs (not stubs)
-- ✅ Streaming support for real-time responses
-- ✅ Model tier selection working
-
----
-
-## 🎯 Verification
-
-To verify Phase 1 is complete:
-
-```bash
-# 1. Run the test suite
-python test_phase1.py
-
-# 2. Manual test—generate quicksort
-python -m src turn-loop "write a Python function that implements quicksort algorithm"
-
-# 3. Streaming test
-python -m src turn-loop "write a test case" --stream
-
-# Expected: Real code output from local Ollama, no API costs
-```
-
----
-
-## 📊 Model Tiers (Auto-Selected)
-
-| VRAM | Model | Speed | Status |
-|------|-------|-------|--------|
-| ≤8GB | phi4-mini (3.8B) | 15-20 tok/s | ✅ Ready |
-| 8-12GB | qwen2.5-coder:7b | 25-40 tok/s | ✅ PRIMARY |
-| 10GB+ | qwen2.5-coder:14b | 10-20 tok/s | ✅ Ready |
-
----
-
-## 🔗 Configuration File
-
-**Location:** `~/.claude.json`
-**Auto-created by:** `src/services/ollama_setup.py`
-
-```json
-{
-  "provider": "ollama",
-  "ollama_base_url": "http://localhost:11434",
-  "model": "auto-detect",
-  "auto_detect_vram": true,
-  "use_api_key": false,
-  "max_tokens": 2048,
-  "temperature": 0.7
-}
-```
-
----
-
-## 🎓 What Changed
-
-### Before Phase 1
-- Only stub responses (no actual LLM calls)
-- Anthropic API client (unused)
-- No model selection logic
-- "write a Python quicksort" → summary, not code
-
-### After Phase 1
-- Real Ollama calls
-- Auto-model detection
-- Actual code generation
-- "write a Python quicksort" → **working Python code in real-time**
-- Streaming support
-- Zero API costs
-
----
-
-## 🚧 Next Steps (Phase 2+)
-
-1. **Tool integration** — Wire up actual tool execution (file ops, git, etc.)
-2. **Permission system** — Enforce tool access controls
-3. **Session persistence** — Save/resume multi-turn conversations
-4. **MCP integration** — Connect Model Context Protocol tools
-5. **Rust runtime** — Performance improvements
-6. **VSCode extension** — UI wrapper
-
----
-
-## 📝 Summary
-
-**Phase 1 transforms Claw Code from a API-dependent stub into a fully-functional local coding agent powered by Ollama. Users can now generate, refactor, and debug code locally with zero API costs.**
-
-✅ **Milestone: Achieved**
-🎯 **Ready for Phase 2**
-📊 **Status: Production-Ready (Python)**
-