get-claudia 1.2.4 → 1.2.6

package/README.md CHANGED
@@ -43,39 +43,36 @@ Say hi. She'll introduce herself and set things up for you.
  
  ---
  
+ ## What's New in v1.2.5
+
+ **Fully automatic memory system** - No more manual steps after install:
+
+ - **Works after reboot** - Ollama and the memory daemon auto-start on login (macOS LaunchAgent)
+ - **Python 3.13 support** - sqlite-vec now works on all Python versions
+ - **Boot resilience** - Daemon waits for Ollama to start instead of failing silently
+ - **Better diagnostics** - Run `~/.claudia/diagnose.sh` to check all services
+
+ The memory system just works now. Install, reboot, and everything comes back up.
+
+ ---
+
  ## Already Have Claudia? Add Memory.
  
- If you installed Claudia before the memory system existed, you can upgrade:
+ If you installed Claudia before the memory system existed, just run the installer again:
  
  ```bash
- # Clone the repo (or pull latest if you have it)
- git clone https://github.com/kbanc/claudia.git
- cd claudia/memory-daemon
-
- # Run the installer
- ./scripts/install.sh
+ npx get-claudia
  ```
  
- The installer will:
+ When prompted, say **yes** to install the memory system. The installer will:
  - Set up the memory daemon at `~/.claudia/daemon/`
- - Install Ollama for local embeddings (optional but recommended)
- - Configure auto-start so the daemon runs on login
- - Show you what to add to your `.mcp.json`
-
- After installing, add this to your Claudia project's `.mcp.json`:
-
- ```json
- {
-   "mcpServers": {
-     "claudia-memory": {
-       "command": "~/.claudia/daemon/venv/bin/python",
-       "args": ["-m", "claudia_memory.mcp.server"]
-     }
-   }
- }
- ```
+ - Install Ollama for semantic search (optional but recommended)
+ - Configure auto-start so everything runs on login
+ - Pull the embedding model automatically
+ - Verify all services are working
+ - Update your `.mcp.json` automatically
  
- Restart Claude Code, and Claudia now has persistent memory.
+ Restart Claude Code in a new terminal, and Claudia now has persistent memory.
  
  ---
  
@@ -83,11 +80,12 @@ Restart Claude Code, and Claudia now has persistent memory.
  
  | Traditional AI | Claudia |
  |----------------|---------|
- | Forgets everything between sessions | **Persistent memory** — SQLite + vector search, never forgets |
+ | Forgets everything between sessions | **Persistent memory** — SQLite + vector search, survives reboots |
  | Treats conversations as isolated | **Tracks relationships** — People files, not just tasks |
  | Waits for instructions | **Proactive** — Surfaces risks before they become problems |
  | One-size-fits-all | **Personalized** — Structure generated for your work style |
  | Cloud-based, data harvested | **Local** — Runs on your machine, your context stays yours |
+ | Breaks after system updates | **Resilient** — Auto-starts on boot, retries on failure |
  
  ---
  
@@ -256,6 +254,44 @@ See what she surfaces. Then tell her about a person you work with.
  
  ---
  
+ ## Troubleshooting
+
+ **Memory tools not appearing?**
+ ```bash
+ # Check all services
+ ~/.claudia/diagnose.sh
+
+ # Common fixes:
+ # 1. Restart Claude Code in a NEW terminal (it reads .mcp.json at startup)
+ # 2. Check daemon health: curl http://localhost:3848/health
+ # 3. View logs: tail -f ~/.claudia/daemon-stderr.log
+ ```
+
+ **Ollama not running after reboot?**
+ ```bash
+ # Load the LaunchAgent
+ launchctl load ~/Library/LaunchAgents/com.ollama.serve.plist
+
+ # Or start manually
+ ollama serve
+ ```
+
+ **Vector search not working?**
+ ```bash
+ # Check if sqlite-vec is installed
+ ~/.claudia/daemon/venv/bin/python -c "import sqlite_vec; print('ok')"
+
+ # If not, install it
+ ~/.claudia/daemon/venv/bin/pip install sqlite-vec
+ ```
+
+ **Pull the embedding model**
+ ```bash
+ ollama pull all-minilm:l6-v2
+ ```
+
+ ---
+
  ## License
  
  Apache 2.0 — Use it, modify it, make it yours.
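
The troubleshooting steps added above check the daemon's `/health` endpoint on port 3848 with `curl` and grep for "healthy". As a rough illustration only, the same check can be scripted from Python; the endpoint and port come from the README, while the helper names here (`classify_health`, `check_daemon`) are hypothetical, not part of the package:

```python
from urllib import error, request

DAEMON_HEALTH_URL = "http://localhost:3848/health"  # port from the README

def classify_health(body: str) -> str:
    """Interpret a /health response body the way the installer does (grep for 'healthy')."""
    return "healthy" if "healthy" in body else "unhealthy"

def check_daemon(url: str = DAEMON_HEALTH_URL, timeout: float = 2.0) -> str:
    """Return 'healthy', 'unhealthy', or 'unreachable' for the memory daemon."""
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return classify_health(resp.read().decode())
    except (error.URLError, OSError, ValueError):
        return "unreachable"
```

A sketch like this is useful when `curl` is not available; treating any connection error as "unreachable" mirrors the README's advice to check logs rather than fail loudly.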
package/bin/index.js CHANGED
@@ -56,37 +56,55 @@ async function main() {
  const args = process.argv.slice(2);
  const arg = args[0];
  
- // Support "." for current directory
- const isCurrentDir = arg === '.';
+ // Support "." or "upgrade" for current directory
+ const isCurrentDir = arg === '.' || arg === 'upgrade';
  const targetDir = isCurrentDir ? '.' : (arg || 'claudia');
  const targetPath = isCurrentDir ? process.cwd() : join(process.cwd(), targetDir);
  const displayDir = isCurrentDir ? 'current directory' : targetDir;
  
  // Check if directory already exists and has conflicting files
+ let isUpgrade = false;
+
  if (existsSync(targetPath)) {
    const contents = readdirSync(targetPath);
    const hasConflict = contents.some(f => f === 'CLAUDE.md' || f === '.claude');
+
    if (hasConflict) {
-     console.log(`\n${colors.yellow}⚠${colors.reset} Claudia files already exist in ${displayDir}.`);
-     console.log(` Remove CLAUDE.md and .claude/ first, or choose a different location.\n`);
-     process.exit(1);
+     // Check if this is an upgrade (existing memories to migrate)
+     const hasExistingMemories = existsSync(join(targetPath, 'context', 'me.md')) ||
+       existsSync(join(targetPath, 'context', 'learnings.md')) ||
+       existsSync(join(targetPath, 'context', 'patterns.md')) ||
+       existsSync(join(targetPath, 'people'));
+
+     if (hasExistingMemories) {
+       console.log(`\n${colors.cyan}✓${colors.reset} Found existing Claudia instance with memories.`);
+       isUpgrade = true;
+       // Don't exit - proceed to memory installation
+     } else {
+       // Fresh install conflict - exit as before
+       console.log(`\n${colors.yellow}⚠${colors.reset} Claudia files already exist in ${displayDir}.`);
+       console.log(` Remove CLAUDE.md and .claude/ first, or choose a different location.\n`);
+       process.exit(1);
+     }
    }
  }
  
- // Create target directory if not current dir
- if (!isCurrentDir) {
+ // Create target directory if not current dir (only for fresh installs)
+ if (!isCurrentDir && !isUpgrade) {
    mkdirSync(targetPath, { recursive: true });
  }
  
- // Copy template files (v2 - minimal seed)
- const templatePath = join(__dirname, '..', 'template-v2');
+ // Copy template files (v2 - minimal seed) - skip for upgrades
+ if (!isUpgrade) {
+   const templatePath = join(__dirname, '..', 'template-v2');
  
- try {
-   cpSync(templatePath, targetPath, { recursive: true });
-   console.log(`${colors.green}✓${colors.reset} Installed in ${displayDir}`);
- } catch (error) {
-   console.error(`\n${colors.yellow}⚠${colors.reset} Error copying files: ${error.message}`);
-   process.exit(1);
+   try {
+     cpSync(templatePath, targetPath, { recursive: true });
+     console.log(`${colors.green}✓${colors.reset} Installed in ${displayDir}`);
+   } catch (error) {
+     console.error(`\n${colors.yellow}⚠${colors.reset} Error copying files: ${error.message}`);
+     process.exit(1);
+   }
  }
  
  // Ask about enhanced memory system
@@ -113,10 +131,14 @@ async function main() {
  
  if (existsSync(memoryDaemonPath)) {
    try {
-     // Run the install script
+     // Run the install script, passing project path for upgrades
      const result = spawn('bash', [memoryDaemonPath], {
        stdio: 'inherit',
-       shell: true
+       shell: true,
+       env: {
+         ...process.env,
+         CLAUDIA_PROJECT_PATH: isUpgrade ? targetPath : ''
+       }
      });
  
      result.on('close', (code) => {
@@ -46,46 +46,54 @@ class Database:
          conn.execute("PRAGMA synchronous = NORMAL")
          conn.execute("PRAGMA foreign_keys = ON")
  
-         # Try to load sqlite-vec extension
+         # Try to load sqlite-vec for vector search
+         # Priority: sqlite_vec Python package first (works on Python 3.13+),
+         # then fall back to native extension loading
+         loaded = False
+
+         # Method 1: Try sqlite_vec Python package (recommended, works everywhere)
          try:
-             conn.enable_load_extension(True)
-             # Try common locations for sqlite-vec
-             sqlite_vec_paths = [
-                 "vec0",  # If installed system-wide
-                 "/usr/local/lib/sqlite-vec/vec0",
-                 "/opt/homebrew/lib/sqlite-vec/vec0",
-                 str(Path.home() / ".local" / "lib" / "sqlite-vec" / "vec0"),
-             ]
-
-             loaded = False
-             for path in sqlite_vec_paths:
-                 try:
-                     conn.load_extension(path)
-                     loaded = True
-                     logger.debug(f"Loaded sqlite-vec from {path}")
-                     break
-                 except sqlite3.OperationalError:
-                     continue
-
-             if not loaded:
-                 # Try loading via sqlite_vec Python package
-                 try:
-                     import sqlite_vec
-                     sqlite_vec.load(conn)
-                     loaded = True
-                     logger.debug("Loaded sqlite-vec via Python package")
-                 except ImportError:
-                     pass
-
-             if not loaded:
-                 logger.warning(
-                     "sqlite-vec extension not found. Vector search will be unavailable. "
-                     "Install with: pip install sqlite-vec"
-                 )
-
-             conn.enable_load_extension(False)
+             import sqlite_vec
+             sqlite_vec.load(conn)
+             loaded = True
+             logger.debug("Loaded sqlite-vec via Python package")
+         except ImportError:
+             logger.debug("sqlite_vec package not installed")
          except Exception as e:
-             logger.warning(f"Could not load sqlite-vec: {e}")
+             logger.debug(f"sqlite_vec package failed: {e}")
+
+         # Method 2: Try native extension loading (for systems with pre-installed sqlite-vec)
+         if not loaded:
+             try:
+                 conn.enable_load_extension(True)
+                 sqlite_vec_paths = [
+                     "vec0",  # If installed system-wide
+                     "/usr/local/lib/sqlite-vec/vec0",
+                     "/opt/homebrew/lib/sqlite-vec/vec0",
+                     str(Path.home() / ".local" / "lib" / "sqlite-vec" / "vec0"),
+                 ]
+
+                 for path in sqlite_vec_paths:
+                     try:
+                         conn.load_extension(path)
+                         loaded = True
+                         logger.debug(f"Loaded sqlite-vec from {path}")
+                         break
+                     except sqlite3.OperationalError:
+                         continue
+
+                 conn.enable_load_extension(False)
+             except AttributeError:
+                 # Python 3.13+ may not have enable_load_extension
+                 logger.debug("enable_load_extension not available (Python 3.13+)")
+             except Exception as e:
+                 logger.debug(f"Extension loading failed: {e}")
+
+         if not loaded:
+             logger.warning(
+                 "sqlite-vec not available. Vector search will be disabled. "
+                 "Install with: pip install sqlite-vec"
+             )
  
          self._local.connection = conn
  
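The new loading order in the hunk above (pip package first, native extension as fallback) can be condensed into a standalone helper. A minimal sketch under the same assumptions; `load_sqlite_vec` is our name for illustration, not an identifier from the package:

```python
import sqlite3

def load_sqlite_vec(conn: sqlite3.Connection) -> bool:
    """Try the sqlite_vec Python package first, then a native vec0 extension."""
    # Method 1: the pip package, which works even on Python 3.13+
    try:
        import sqlite_vec  # pip install sqlite-vec
        sqlite_vec.load(conn)
        return True
    except Exception:
        pass
    # Method 2: a pre-installed native extension, if this build allows extension loading
    try:
        conn.enable_load_extension(True)
        try:
            conn.load_extension("vec0")  # system-wide install, if present
            return True
        finally:
            conn.enable_load_extension(False)
    except (AttributeError, sqlite3.OperationalError):
        # AttributeError: interpreter built without extension-loading support
        pass
    return False
```

Trying the package first matters because `Connection.enable_load_extension` may be missing or disabled on some interpreter builds, which is exactly the Python 3.13 failure mode the release notes describe.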
@@ -3,10 +3,13 @@ Embedding service for Claudia Memory System
  
  Connects to local Ollama for generating text embeddings.
  Uses all-minilm:l6-v2 model (384 dimensions) for semantic search.
+
+ Includes retry logic to wait for Ollama to start (e.g., after system boot).
  """
  
  import asyncio
  import logging
+ import time
  from typing import List, Optional
  
  import httpx
@@ -15,6 +18,10 @@ from .config import get_config
  
  logger = logging.getLogger(__name__)
  
+ # Retry configuration for waiting on Ollama
+ OLLAMA_RETRY_ATTEMPTS = 5
+ OLLAMA_RETRY_DELAY = 2  # seconds
+
  
  class EmbeddingService:
      """Generate embeddings using local Ollama"""
@@ -40,11 +47,52 @@ class EmbeddingService:
          self._sync_client = httpx.Client(timeout=30.0)
          return self._sync_client
  
+     async def _wait_for_ollama(self, max_retries: int = OLLAMA_RETRY_ATTEMPTS, delay: float = OLLAMA_RETRY_DELAY) -> bool:
+         """Wait for Ollama to be available with retries (async)"""
+         client = await self._get_client()
+         for i in range(max_retries):
+             try:
+                 response = await client.get(f"{self.host}/api/tags", timeout=5.0)
+                 if response.status_code == 200:
+                     logger.debug(f"Ollama available after {i + 1} attempt(s)")
+                     return True
+             except Exception:
+                 pass
+             if i < max_retries - 1:
+                 logger.debug(f"Waiting for Ollama (attempt {i + 1}/{max_retries})...")
+                 await asyncio.sleep(delay)
+         return False
+
+     def _wait_for_ollama_sync(self, max_retries: int = OLLAMA_RETRY_ATTEMPTS, delay: float = OLLAMA_RETRY_DELAY) -> bool:
+         """Wait for Ollama to be available with retries (sync)"""
+         client = self._get_sync_client()
+         for i in range(max_retries):
+             try:
+                 response = client.get(f"{self.host}/api/tags", timeout=5.0)
+                 if response.status_code == 200:
+                     logger.debug(f"Ollama available after {i + 1} attempt(s)")
+                     return True
+             except Exception:
+                 pass
+             if i < max_retries - 1:
+                 logger.debug(f"Waiting for Ollama (attempt {i + 1}/{max_retries})...")
+                 time.sleep(delay)
+         return False
+
      async def is_available(self) -> bool:
-         """Check if Ollama is running and model is available"""
+         """Check if Ollama is running and model is available.
+
+         Uses retry logic to wait for Ollama if it's starting up (e.g., after boot).
+         """
          if self._available is not None:
              return self._available
  
+         # Wait for Ollama to be available (with retries)
+         if not await self._wait_for_ollama():
+             logger.warning("Ollama not available after retries. Vector search disabled.")
+             self._available = False
+             return self._available
+
          try:
              client = await self._get_client()
              response = await client.get(f"{self.host}/api/tags")
@@ -65,16 +113,25 @@ class EmbeddingService:
              else:
                  self._available = False
          except Exception as e:
-             logger.warning(f"Ollama not available: {e}")
+             logger.warning(f"Ollama error: {e}")
              self._available = False
  
          return self._available
  
      def is_available_sync(self) -> bool:
-         """Synchronous check if Ollama is available"""
+         """Synchronous check if Ollama is available.
+
+         Uses retry logic to wait for Ollama if it's starting up (e.g., after boot).
+         """
          if self._available is not None:
              return self._available
  
+         # Wait for Ollama to be available (with retries)
+         if not self._wait_for_ollama_sync():
+             logger.warning("Ollama not available after retries. Vector search disabled.")
+             self._available = False
+             return self._available
+
          try:
              client = self._get_sync_client()
              response = client.get(f"{self.host}/api/tags")
@@ -88,7 +145,7 @@ class EmbeddingService:
              else:
                  self._available = False
          except Exception as e:
-             logger.warning(f"Ollama not available: {e}")
+             logger.warning(f"Ollama error: {e}")
              self._available = False
  
          return self._available
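
The two `_wait_for_ollama` helpers added above share one pattern: poll a check, sleep a fixed delay between attempts, and skip the sleep after the final attempt. A minimal generic sketch of that pattern; the `wait_for` name is ours for illustration, not the package's:

```python
import time
from typing import Callable

def wait_for(check: Callable[[], bool], attempts: int = 5, delay: float = 2.0) -> bool:
    """Retry a boolean check with a fixed delay, returning early on success."""
    for i in range(attempts):
        try:
            if check():
                return True
        except Exception:
            pass  # treat errors as "not ready yet", as the daemon does for Ollama
        if i < attempts - 1:
            time.sleep(delay)  # no sleep after the last attempt
    return False
```

Skipping the final sleep keeps the failure path from adding a pointless extra delay, which is why both daemon helpers guard the sleep with `i < max_retries - 1`.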
@@ -94,18 +94,73 @@ else
      echo -e " ${DIM}Database will be created on first use.${NC}"
  fi
  
- # Check 7: Ollama (optional)
- echo -n "7. Ollama for vectors... "
+ # Check 7: Ollama installed
+ echo -n "7. Ollama installed... "
+ if command -v ollama &> /dev/null; then
+     echo -e "${GREEN}✓ OK${NC}"
+ else
+     echo -e "${YELLOW}○ Not installed (keyword search will be used)${NC}"
+     echo -e " ${DIM}Optional: brew install ollama${NC}"
+ fi
+
+ # Check 8: Ollama running
+ echo -n "8. Ollama running... "
+ if curl -s http://localhost:11434/api/tags &>/dev/null; then
+     echo -e "${GREEN}✓ Running${NC}"
+ else
+     echo -e "${YELLOW}○ Not running${NC}"
+     if [[ "$OSTYPE" == "darwin"* ]]; then
+         if [ -f "$HOME/Library/LaunchAgents/com.ollama.serve.plist" ]; then
+             echo -e " ${DIM}LaunchAgent exists - try: launchctl load ~/Library/LaunchAgents/com.ollama.serve.plist${NC}"
+         else
+             echo -e " ${DIM}Start with: ollama serve${NC}"
+         fi
+     else
+         echo -e " ${DIM}Start with: ollama serve${NC}"
+     fi
+     ISSUES_FOUND=$((ISSUES_FOUND + 1))
+ fi
+
+ # Check 9: Ollama auto-start (macOS only)
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+     echo -n "9. Ollama auto-start... "
+     if [ -f "$HOME/Library/LaunchAgents/com.ollama.serve.plist" ]; then
+         echo -e "${GREEN}✓ LaunchAgent configured${NC}"
+     else
+         echo -e "${YELLOW}○ No LaunchAgent${NC}"
+         echo -e " ${DIM}Ollama won't start on boot. Re-run memory installer to configure.${NC}"
+     fi
+ else
+     echo -e "9. Ollama auto-start... ${DIM}(Linux - check systemd if needed)${NC}"
+ fi
+
+ # Check 10: Embedding model
+ echo -n "10. Embedding model... "
  if command -v ollama &> /dev/null; then
      if ollama list 2>/dev/null | grep -q "minilm"; then
-         echo -e "${GREEN}✓ Available with embedding model${NC}"
+         echo -e "${GREEN}✓ all-minilm model available${NC}"
      else
-         echo -e "${YELLOW}○ Installed but no embedding model${NC}"
+         echo -e "${YELLOW}○ No embedding model${NC}"
          echo -e " ${DIM}Run: ollama pull all-minilm:l6-v2${NC}"
+         ISSUES_FOUND=$((ISSUES_FOUND + 1))
      fi
  else
-     echo -e "${YELLOW}○ Not installed (keyword search will be used)${NC}"
-     echo -e " ${DIM}Optional: brew install ollama${NC}"
+     echo -e "${DIM}○ Skipped (Ollama not installed)${NC}"
+ fi
+
+ # Check 11: sqlite-vec (vector search)
+ echo -n "11. Vector search (sqlite-vec)... "
+ VENV_PYTHON="$HOME/.claudia/daemon/venv/bin/python"
+ if [ -f "$VENV_PYTHON" ]; then
+     if $VENV_PYTHON -c "import sqlite_vec; print('ok')" 2>/dev/null | grep -q "ok"; then
+         echo -e "${GREEN}✓ sqlite-vec available${NC}"
+     else
+         echo -e "${YELLOW}○ sqlite-vec not working${NC}"
+         echo -e " ${DIM}Fix: $HOME/.claudia/daemon/venv/bin/pip install sqlite-vec${NC}"
+         ISSUES_FOUND=$((ISSUES_FOUND + 1))
+     fi
+ else
+     echo -e "${RED}✗ Virtual environment missing${NC}"
  fi
  
  # Summary
@@ -94,7 +94,7 @@ random_message() {
  }
  
  # Check Python
- echo -e "${BOLD}Step 1/7: Environment Check${NC}"
+ echo -e "${BOLD}Step 1/8: Environment Check${NC}"
  echo
  if command -v python3 &> /dev/null; then
      PYTHON=$(command -v python3)
@@ -137,22 +137,88 @@ else
      fi
  fi
  
+ # Configure Ollama to auto-start on boot (macOS)
+ if [[ "$OSTYPE" == "darwin"* ]] && [ "$OLLAMA_AVAILABLE" = true ]; then
+     OLLAMA_PLIST="$HOME/Library/LaunchAgents/com.ollama.serve.plist"
+
+     if [ ! -f "$OLLAMA_PLIST" ]; then
+         # Find Ollama binary location
+         OLLAMA_BIN=$(command -v ollama)
+
+         mkdir -p "$HOME/Library/LaunchAgents"
+         cat > "$OLLAMA_PLIST" << PLIST
+ <?xml version="1.0" encoding="UTF-8"?>
+ <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+ <plist version="1.0">
+ <dict>
+     <key>Label</key>
+     <string>com.ollama.serve</string>
+     <key>ProgramArguments</key>
+     <array>
+         <string>$OLLAMA_BIN</string>
+         <string>serve</string>
+     </array>
+     <key>RunAtLoad</key>
+     <true/>
+     <key>KeepAlive</key>
+     <true/>
+     <key>StandardOutPath</key>
+     <string>/tmp/ollama.log</string>
+     <key>StandardErrorPath</key>
+     <string>/tmp/ollama.err</string>
+ </dict>
+ </plist>
+ PLIST
+         launchctl load "$OLLAMA_PLIST" 2>/dev/null || true
+         echo -e " ${GREEN}✓${NC} Ollama configured to auto-start on boot"
+     else
+         echo -e " ${GREEN}✓${NC} Ollama LaunchAgent already configured"
+     fi
+ fi
+
  echo
  echo -e "${DIM}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
  echo
  
  # Pull embedding model
- echo -e "${BOLD}Step 2/7: AI Models${NC}"
+ echo -e "${BOLD}Step 2/8: AI Models${NC}"
  echo
  if [ "$OLLAMA_AVAILABLE" = true ]; then
+     # Ensure Ollama is running before pulling model
+     if ! curl -s http://localhost:11434/api/tags &>/dev/null; then
+         echo -e " ${CYAN}◐${NC} Starting Ollama server..."
+         ollama serve &>/dev/null &
+         OLLAMA_PID=$!
+
+         # Wait for Ollama to be ready (up to 10 seconds)
+         for i in {1..10}; do
+             if curl -s http://localhost:11434/api/tags &>/dev/null; then
+                 break
+             fi
+             sleep 1
+         done
+
+         if curl -s http://localhost:11434/api/tags &>/dev/null; then
+             echo -e " ${GREEN}✓${NC} Ollama server running"
+         else
+             echo -e " ${YELLOW}!${NC} Could not start Ollama (will retry on boot)"
+         fi
+     else
+         echo -e " ${GREEN}✓${NC} Ollama server already running"
+     fi
+
+     # Pull embedding model
      if ollama list 2>/dev/null | grep -q "all-minilm"; then
          echo -e " ${GREEN}✓${NC} Embedding model ready"
      else
          echo -e " ${CYAN}◐${NC} Downloading embedding model (45MB)..."
          echo -e " ${DIM}This gives Claudia semantic understanding${NC}"
          echo
-         ollama pull all-minilm:l6-v2 2>/dev/null || echo -e " ${YELLOW}!${NC} Model pull failed, continuing"
-         echo -e " ${GREEN}✓${NC} Model downloaded"
+         if ollama pull all-minilm:l6-v2 2>/dev/null; then
+             echo -e " ${GREEN}✓${NC} Model downloaded"
+         else
+             echo -e " ${YELLOW}!${NC} Model pull failed (will retry when Ollama runs)"
+         fi
      fi
  else
      echo -e " ${YELLOW}○${NC} Skipping (Ollama not available)"
@@ -164,7 +230,7 @@ echo -e "${DIM}━━━━━━━━━━━━━━━━━━━━━
  echo
  
  # Create directories
- echo -e "${BOLD}Step 3/7: Creating Home${NC}"
+ echo -e "${BOLD}Step 3/8: Creating Home${NC}"
  echo
  mkdir -p "$DAEMON_DIR"
  mkdir -p "$MEMORY_DIR"
@@ -181,10 +247,11 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
  SOURCE_DIR="$(dirname "$SCRIPT_DIR")"
  
  # Copy daemon files
- echo -e "${BOLD}Step 4/7: Installing Core${NC}"
+ echo -e "${BOLD}Step 4/8: Installing Core${NC}"
  echo
  echo -e " ${CYAN}◐${NC} Copying memory system files..."
  cp -r "$SOURCE_DIR/claudia_memory" "$DAEMON_DIR/"
+ cp -r "$SOURCE_DIR/scripts" "$DAEMON_DIR/"
  cp "$SOURCE_DIR/pyproject.toml" "$DAEMON_DIR/"
  cp "$SOURCE_DIR/requirements.txt" "$DAEMON_DIR/"
  echo -e " ${GREEN}✓${NC} Core files installed"
@@ -199,7 +266,7 @@ echo -e "${DIM}━━━━━━━━━━━━━━━━━━━━━
  echo
  
  # Create virtual environment
- echo -e "${BOLD}Step 5/7: Python Environment${NC}"
+ echo -e "${BOLD}Step 5/8: Python Environment${NC}"
  echo
  echo -e " ${CYAN}◐${NC} Creating isolated environment..."
  $PYTHON -m venv "$VENV_DIR"
@@ -224,7 +291,7 @@ echo -e "${DIM}━━━━━━━━━━━━━━━━━━━━━
  echo
  
  # Configure auto-start
- echo -e "${BOLD}Step 6/7: Auto-Start Setup${NC}"
+ echo -e "${BOLD}Step 6/8: Auto-Start Setup${NC}"
  echo
  
  if [[ "$OSTYPE" == "darwin"* ]]; then
@@ -309,20 +376,90 @@ echo
  echo -e "${DIM}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
  echo
  
- # Verify
- echo -e "${BOLD}Step 7/7: Verification${NC}"
+ # Memory Migration (for upgrades)
+ echo -e "${BOLD}Step 7/8: Memory Migration${NC}"
+ echo
+
+ if [ -n "$CLAUDIA_PROJECT_PATH" ]; then
+     # Check if there are memories to migrate
+     if [ -d "$CLAUDIA_PROJECT_PATH/context" ] || [ -d "$CLAUDIA_PROJECT_PATH/people" ]; then
+         echo -e " ${CYAN}◐${NC} Found existing memories to migrate..."
+
+         # Wait a moment for daemon to start
+         sleep 2
+
+         # Run migration in quiet mode
+         "$VENV_DIR/bin/python" "$DAEMON_DIR/scripts/migrate_markdown.py" --quiet "$CLAUDIA_PROJECT_PATH"
+
+         if [ $? -eq 0 ]; then
+             echo -e " ${GREEN}✓${NC} Memories migrated to database"
+         else
+             echo -e " ${YELLOW}!${NC} Migration had issues (memories still in markdown)"
+             echo -e " ${DIM}You can retry manually: ~/.claudia/daemon/venv/bin/python -m claudia_memory.scripts.migrate_markdown $CLAUDIA_PROJECT_PATH${NC}"
+         fi
+     else
+         echo -e " ${DIM}No existing memories found to migrate${NC}"
+     fi
+ else
+     echo -e " ${DIM}Fresh install - no migration needed${NC}"
+ fi
+
+ echo
+ echo -e "${DIM}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
+ echo
+
+ # Verify all services
+ echo -e "${BOLD}Step 8/8: Verification${NC}"
  echo
- echo -e " ${CYAN}◐${NC} Waiting for daemon to start..."
+ echo -e " ${CYAN}◐${NC} Checking all services..."
  sleep 3
  
+ # Check 1: Ollama running
+ if curl -s http://localhost:11434/api/tags &>/dev/null; then
+     echo -e " ${GREEN}✓${NC} Ollama running"
+ else
+     echo -e " ${YELLOW}○${NC} Ollama not running (will start on next boot)"
+ fi
+
+ # Check 2: Ollama LaunchAgent (macOS)
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+     if [ -f "$HOME/Library/LaunchAgents/com.ollama.serve.plist" ]; then
+         echo -e " ${GREEN}✓${NC} Ollama auto-start configured"
+     else
+         echo -e " ${YELLOW}○${NC} Ollama auto-start not configured"
+     fi
+ fi
+
+ # Check 3: Embedding model
+ if [ "$OLLAMA_AVAILABLE" = true ] && ollama list 2>/dev/null | grep -q "minilm"; then
+     echo -e " ${GREEN}✓${NC} Embedding model ready"
+ else
+     echo -e " ${YELLOW}○${NC} Embedding model pending"
+ fi
+
+ # Check 4: sqlite-vec (vector search)
+ if "$VENV_DIR/bin/python" -c "import sqlite_vec" 2>/dev/null; then
+     echo -e " ${GREEN}✓${NC} Vector search available (sqlite-vec)"
+ else
+     echo -e " ${YELLOW}○${NC} Vector search unavailable (keyword search only)"
+ fi
+
+ # Check 5: Memory daemon health
  if curl -s "http://localhost:3848/health" 2>/dev/null | grep -q "healthy"; then
-     echo -e " ${GREEN}✓${NC} Health check passed"
+     echo -e " ${GREEN}✓${NC} Memory daemon running"
      HEALTH_OK=true
  else
-     echo -e " ${YELLOW}○${NC} Health check pending (daemon still starting)"
+     echo -e " ${YELLOW}○${NC} Memory daemon starting..."
      HEALTH_OK=false
  fi
  
+ # Check 6: Claudia LaunchAgent (macOS)
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+     if [ -f "$HOME/Library/LaunchAgents/com.claudia.memory.plist" ]; then
+         echo -e " ${GREEN}✓${NC} Claudia auto-start configured"
+     fi
+ fi
+
  echo
  echo -e "${DIM}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
  echo
@@ -346,13 +346,22 @@ def main():
          action="store_true",
          help="Enable debug logging",
      )
+     parser.add_argument(
+         "--quiet",
+         action="store_true",
+         help="Non-interactive mode (for automated migration)",
+     )
  
      args = parser.parse_args()
  
-     logging.basicConfig(
-         level=logging.DEBUG if args.debug else logging.INFO,
-         format="%(asctime)s - %(levelname)s - %(message)s",
-     )
+     if args.quiet:
+         # Suppress all logging in quiet mode
+         logging.basicConfig(level=logging.ERROR)
+     else:
+         logging.basicConfig(
+             level=logging.DEBUG if args.debug else logging.INFO,
+             format="%(asctime)s - %(levelname)s - %(message)s",
+         )
  
      # Initialize database
      db = get_db()
@@ -363,8 +372,24 @@ def main():
          if not args.path.exists():
              logger.error(f"Path not found: {args.path}")
              sys.exit(1)
+
+         if not args.quiet:
+             print(f"Migrating: {args.path}")
+
          stats = migrate_instance(args.path, args.dry_run)
-         print(f"\nMigrated: {stats}")
+
+         if args.quiet:
+             # Quiet mode - minimal output for automated migration
+             total = sum(stats.values())
+             if total > 0:
+                 print(f" - Migrated {stats['me']} items from context/me.md") if stats.get('me') else None
+                 print(f" - Migrated {stats['learnings']} items from context/learnings.md") if stats.get('learnings') else None
+                 print(f" - Migrated {stats['patterns']} items from context/patterns.md") if stats.get('patterns') else None
+                 print(f" - Migrated {stats['commitments']} items from context/commitments.md") if stats.get('commitments') else None
+                 if stats.get('people'):
+                     print(f" - Migrated {stats['people']} people with {stats.get('memories', 0)} facts")
+         else:
+             print(f"\nMigrated: {stats}")
  
      elif args.all:
          # Migrate all instances
@@ -373,11 +398,12 @@ def main():
              logger.info("No Claudia instances found")
              sys.exit(0)
  
-         print(f"Found {len(instances)} Claudia instance(s):\n")
-         for i, instance in enumerate(instances, 1):
-             print(f" {i}. {instance}")
+         if not args.quiet:
+             print(f"Found {len(instances)} Claudia instance(s):\n")
+             for i, instance in enumerate(instances, 1):
+                 print(f" {i}. {instance}")
  
-         if not args.dry_run:
+         if not args.dry_run and not args.quiet:
              confirm = input("\nMigrate all? (y/n) ")
              if confirm.lower() != "y":
                  sys.exit(0)
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "get-claudia",
-   "version": "1.2.4",
+   "version": "1.2.6",
    "description": "An AI assistant who learns how you work.",
    "keywords": [
      "claudia",