tlc-claude-code 2.10.0 → 2.10.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  # /tlc:recall - Team Memory Search
 
- Search TLC team memory using plain grep. This is intentional: no vector search, no extra services.
+ Search TLC team memory through the context engine hybrid search pipeline. Use FTS5 plus vector search when the engine is available, and fall back to grep only when it is not.
 
  ## Usage
 
@@ -10,33 +10,58 @@ Search TLC team memory using plain grep. This is intentional: no vector search,
 
  ## What This Does
 
- 1. Searches `.tlc/memory/team/` with `grep -rl "query" .tlc/memory/team/`.
- 2. Looks in both `.tlc/memory/team/decisions/` and `.tlc/memory/team/gotchas/`.
- 3. Shows the file path, date from the filename, and a matching excerpt.
- 4. Uses grep only because it works everywhere.
+ 1. Opens `.tlc/context.sqlite` through `createContextEngine()`.
+ 2. Builds a search pipeline with `createSearchPipeline` from `@tlc/context-engine/search-pipeline`.
+ 3. Runs the context leg with the engine's hybrid FTS5 plus vector search.
+ 4. Searches TLC memory under `.tlc/memory/team/`.
+ 5. Falls back to recursive grep only when the context engine or database is unavailable.
 
  ## Search Rules
 
  - Search both `decisions/` and `gotchas/`.
- - Do not use vector search.
- - Do not depend on embeddings, databases, or external services.
- - Prefer simple recursive grep so the command works across projects.
+ - Prefer context-engine hybrid search over grep.
+ - Use vector ranking when embeddings are available; FTS5-only results are acceptable when vectors are missing.
+ - Fall back to grep if `@tlc/context-engine` cannot be loaded or `.tlc/context.sqlite` does not exist yet.
 
  ## Process
 
- ### Step 1: Find matching files
+ ### Step 1: Try the context engine
+
+ ```js
+ import { createContextEngine } from '@tlc/context-engine';
+ import { createSearch } from '@tlc/context-engine/retrieval/search';
+ import { createSearchPipeline } from '@tlc/context-engine/search-pipeline';
+
+ const engine = createContextEngine({ dbPath: '.tlc/context.sqlite' });
+ const hybridSearch = createSearch({
+   embeddingStore: {
+     findSimilar(db, queryVec, options) {
+       return engine.embeddings.findSimilar(queryVec, options);
+     },
+   },
+ });
+
+ const searchAll = createSearchPipeline({
+   search: (db, query, options) => hybridSearch(db, query, options),
+ });
+
+ const results = await searchAll(engine.db, query, { limit: 10 });
+ ```
+
+ ### Step 2: If the engine is unavailable, fall back to grep
 
  ```bash
- grep -rl "query" .tlc/memory/team/
+ grep -Ril "query" .tlc/memory/team/
  ```
 
- ### Step 2: Display matches
+ ### Step 3: Display matches
 
  For each matching file, show:
 
  - File path
  - Date parsed from the filename prefix
  - A short matching excerpt from the file body
+ - Search source: `context-engine` or `grep-fallback`
 
  ## Example Output
 
@@ -57,3 +82,4 @@ Excerpt: Local timezone conversion caused reporting drift in scheduled jobs.
  - If there are no matches, say so directly.
  - Keep excerpts short and relevant.
  - Search TLC memory under `.tlc/memory/team/`, not Claude-managed memory.
+ - Close the context engine after the search when it was opened successfully.
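The "date parsed from the filename prefix" part of the display step can be sketched as a small pure function. `parseMemoryFilename` is a hypothetical helper for illustration, not an API of this package; it assumes the documented `{date}-{slug}.md` naming convention.

```javascript
// Hypothetical helper (not part of tlc-claude-code): split a team-memory
// filename like .tlc/memory/team/decisions/2026-03-28-use-utc-timestamps.md
// into the date and slug fields the display step shows.
function parseMemoryFilename(filePath) {
  const base = filePath.split('/').pop().replace(/\.md$/, '');
  const match = base.match(/^(\d{4}-\d{2}-\d{2})-(.+)$/);
  if (!match) return null; // not a dated memory file
  return { date: match[1], slug: match[2] };
}

const parsed = parseMemoryFilename(
  '.tlc/memory/team/decisions/2026-03-28-use-utc-timestamps.md'
);
console.log(parsed); // { date: '2026-03-28', slug: 'use-utc-timestamps' }
```

A helper like this would let both the context-engine and grep-fallback paths render matches the same way.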
@@ -17,31 +17,32 @@ Or without arguments:
  ## What This Does
 
  1. Captures the provided decision text, or extracts decisions from the recent conversation context if no argument is given.
- 2. Writes a markdown file to `.tlc/memory/team/decisions/{date}-{slug}.md`.
- 3. Stores TLC-owned frontmatter for later grep-based recall.
- 4. Never writes to `~/.claude/projects/.../memory/` because that is Claude memory, not TLC memory.
+ 2. Appends a JSONL event to `.tlc/events/memory.jsonl` through the context engine event log.
+ 3. Replays the event into `.tlc/context.sqlite`.
+ 4. Exports visible `.md` memory files for git visibility under `.tlc/memory/team/decisions/`.
+ 5. Never writes to `~/.claude/projects/.../memory/` because that is Claude memory, not TLC memory.
 
  ## Write Format
 
- Write the decision as a markdown file with YAML frontmatter:
+ Write the canonical record as a JSONL event first:
+
+ ```json
+ {"type":"decision","category":"decision","scope":"team","subject":"use-utc-timestamps","body":"Always use UTC timestamps in the database and in exported logs."}
+ ```
+
+ Then export the visible markdown file with YAML frontmatter:
 
  ```yaml
  ---
- provider: claude
- source: manual
  timestamp: 2026-03-28T12:34:56Z
  type: decision
  scope: team
+ category: decision
+ subject: "use-utc-timestamps"
  ---
  ```
 
- The filename must be:
-
- ```text
- .tlc/memory/team/decisions/{date}-{slug}.md
- ```
-
- Example:
+ The markdown filename must be:
 
  ```text
  .tlc/memory/team/decisions/2026-03-28-use-utc-timestamps.md
@@ -53,19 +54,43 @@ Example:
 
  - Use the argument as the decision content.
  - Generate a short slug from the decision.
- - Write the file under `.tlc/memory/team/decisions/`.
+ - Append the event with `engine.events.appendEvent('.tlc/events/memory.jsonl', event)`.
+ - Replay the log into the database.
+ - Export markdown back to `.tlc/memory/team/decisions/`.
 
  ### Without an argument
 
  - Review the recent conversation context.
  - Extract concrete decisions, conventions, or constraints worth preserving.
  - Summarize them clearly.
- - Write one decision file under `.tlc/memory/team/decisions/`.
+ - Append one JSONL event.
+ - Export one visible markdown file under `.tlc/memory/team/decisions/`.
+
+ ## Implementation Sketch
+
+ ```js
+ import { createContextEngine } from '@tlc/context-engine';
+ import { exportFactsToMarkdown } from '@tlc/context-engine/import/markdown-exporter';
+
+ const engine = createContextEngine({ dbPath: '.tlc/context.sqlite' });
+ const event = engine.events.appendEvent('.tlc/events/memory.jsonl', {
+   type: 'decision',
+   category: 'decision',
+   scope: 'team',
+   subject: slug,
+   body: decisionText,
+ });
+
+ engine.events.replayEvents([event]);
+ exportFactsToMarkdown(engine.db, '.tlc/memory/team/decisions');
+ engine.close();
+ ```
 
  ## Confirmation
 
  ```text
  Remembered: {summary}
+ Event: .tlc/events/memory.jsonl
  File: .tlc/memory/team/decisions/2026-03-28-use-utc-timestamps.md
  ```
 
@@ -74,3 +99,4 @@ File: .tlc/memory/team/decisions/2026-03-28-use-utc-timestamps.md
  - Prefer concise, durable statements over raw transcript dumps.
  - Record decisions, not general chatter.
  - Use `.tlc/memory/team/gotchas/` for gotchas, not this command.
+ - JSONL is the source of truth; markdown is the human-visible export.
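The "generate a short slug" step in the command doc can be sketched as a plain function. `slugify` here is a hypothetical illustration, not the package's actual implementation; word limit and character rules are assumptions.

```javascript
// Hypothetical sketch (not the package's code): derive a short,
// filename-safe slug from the decision text.
function slugify(text, maxWords = 5) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, '') // drop punctuation
    .trim()
    .split(/\s+/)
    .slice(0, maxWords)           // keep the slug short
    .join('-');
}

console.log(slugify('Always use UTC timestamps in the database!'));
// always-use-utc-timestamps-in
```

The output plugs directly into the documented `{date}-{slug}.md` filename pattern.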
@@ -44,6 +44,15 @@ mkdir -p "$PROJECT_DIR/.tlc/memory/team/decisions" \
  "$PROJECT_DIR/.tlc/memory/team/gotchas" \
  "$PROJECT_DIR/.tlc/memory/.local/sessions" 2>/dev/null
 
+ # Bootstrap context engine on first run by importing visible markdown memory.
+ CONTEXT_BOOTSTRAP=""
+ if node -e "import('@tlc/context-engine/bootstrap').then(()=>process.exit(0)).catch(()=>process.exit(1))" >/dev/null 2>&1; then
+   CONTEXT_BOOTSTRAP="import('@tlc/context-engine').then(async ({ createContextEngine }) => { const { createBootstrap } = await import('@tlc/context-engine/bootstrap'); const { importMarkdownMemories } = await import('@tlc/context-engine/import/markdown-importer'); const { indexProject } = await import('@tlc/context-engine/retrieval/indexer'); const bootstrap = createBootstrap({ contextEngine: createContextEngine, markdownImporter: { importMarkdownMemories }, indexer: { indexProject } }); const result = await bootstrap(process.argv[1]); if ((result.imported || 0) > 0 || (result.indexed || 0) > 0) { console.log(' Context Engine: ✅ bootstrapped (' + result.imported + ' imported, ' + result.indexed + ' indexed)'); } }).catch(() => {})"
+ elif [ -d "$PROJECT_DIR/server/node_modules/@tlc/context-engine" ]; then
+   CONTEXT_BOOTSTRAP="(async () => { const [{ createContextEngine }, { createBootstrap }, { importMarkdownMemories }, { indexProject }] = await Promise.all([import('$PROJECT_DIR/server/node_modules/@tlc/context-engine/src/index.js'), import('$PROJECT_DIR/server/node_modules/@tlc/context-engine/src/bootstrap.js'), import('$PROJECT_DIR/server/node_modules/@tlc/context-engine/src/import/markdown-importer.js'), import('$PROJECT_DIR/server/node_modules/@tlc/context-engine/src/retrieval/indexer.js')]); const bootstrap = createBootstrap({ contextEngine: createContextEngine, markdownImporter: { importMarkdownMemories }, indexer: { indexProject } }); const result = await bootstrap(process.argv[1]); if ((result.imported || 0) > 0 || (result.indexed || 0) > 0) { console.log(' Context Engine: ✅ bootstrapped (' + result.imported + ' imported, ' + result.indexed + ' indexed)'); } })().catch(() => {})"
+ fi
+ [ -n "$CONTEXT_BOOTSTRAP" ] && node -e "$CONTEXT_BOOTSTRAP" "$PROJECT_DIR" 2>/dev/null
+
  if curl -sf --max-time 1 "http://localhost:${TLC_PORT}/api/health" > /dev/null 2>&1; then
  echo " TLC Server: ✅ running (:${TLC_PORT})"
  # Drain spool
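The hook's first `node -e` check is an availability probe: attempt a dynamic import and branch on whether it succeeds. The pattern boils down to a few lines; `canImport` is a hypothetical name used only for this sketch.

```javascript
// Sketch of the availability probe the hook performs: attempt a dynamic
// import and report whether the module could be loaded, rather than
// letting a missing optional dependency break session start.
async function canImport(specifier) {
  try {
    await import(specifier);
    return true;
  } catch {
    return false;
  }
}

canImport('node:path').then((ok) => console.log('node:path loadable:', ok));
```

Dynamic `import()` works from both ESM and CommonJS, which is why the probe can run under a plain `node -e`.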
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "tlc-claude-code",
-   "version": "2.10.0",
+   "version": "2.10.1",
    "description": "TLC - Test Led Coding for Claude Code",
    "bin": {
      "tlc-claude-code": "./bin/install.js",
package/server/index.js CHANGED
@@ -23,9 +23,9 @@ const { parsePlan, parseBugs } = require('./lib/plan-parser');
  const { autoProvision, stopDatabase } = require('./lib/auto-database');
  const { GlobalConfig } = require('./lib/global-config');
  const { ProjectScanner } = require('./lib/project-scanner');
- const { createWorkspaceRouter } = require('./lib/workspace-api');
- const { createMemoryApi } = require('./lib/memory-api');
- const { createMemoryStoreAdapter } = require('./lib/memory-store-adapter');
+ const { createWorkspaceRouter } = require('./lib/workspace-api');
+ const { createMemoryApi } = require('./lib/memory-api');
+ const { createMemoryStoreAdapter } = require('./lib/memory-store-adapter');
  const {
    createUserStore,
    createAuthMiddleware,
@@ -86,12 +86,11 @@ const cors = require('cors');
  app.use(cors({ origin: true, credentials: true }));
 
  // Workspace API
- const globalConfig = new GlobalConfig();
- const projectScanner = new ProjectScanner();
- const { observeAndRemember } = require('./lib/memory-observer');
- const { createServerMemoryCapture } = require('./lib/memory-hooks');
- const { createMemoryStoreAdapter } = require('./lib/memory-store-adapter');
- const { checkOllamaHealth } = require('./lib/ollama-health');
+ const globalConfig = new GlobalConfig();
+ const projectScanner = new ProjectScanner();
+ const { observeAndRemember } = require('./lib/memory-observer');
+ const { createServerMemoryCapture } = require('./lib/memory-hooks');
+ const { checkOllamaHealth } = require('./lib/ollama-health');
 
  // Initialize memory directory structure (non-blocking)
  (async () => {
@@ -110,72 +109,103 @@ const memoryCapture = createServerMemoryCapture({
  });
 
  // Lazy-initialized memory dependencies (ESM modules loaded async)
- const memoryDeps = {
-   observeAndRemember,
-   semanticRecall: null,
-   vectorIndexer: null,
-   embeddingClient: null,
-   vectorStore: null,
- };
+ const memoryDeps = {
+   observeAndRemember,
+   contextEngine: null,
+   contextSearch: null,
+   semanticRecall: null,
+   vectorIndexer: null,
+ };
 
  // Lazy init: load ESM memory modules on first use
  let memoryInitPromise = null;
- async function initMemoryPipeline() {
-   if (memoryInitPromise) return memoryInitPromise;
-   memoryInitPromise = (async () => {
-     try {
-       const os = require('os');
-       const dbPath = path.join(os.homedir(), '.tlc', 'memory', 'vectors.db');
-
-       // Ensure directory exists
-       const dbDir = path.dirname(dbPath);
-       if (!fs.existsSync(dbDir)) {
-         fs.mkdirSync(dbDir, { recursive: true });
-       }
-
-       const { createEmbeddingClient } = await import('./lib/embedding-client.js');
-       const { createVectorStore } = await import('./lib/vector-store.js');
-       const { createVectorIndexer } = await import('./lib/vector-indexer.js');
-       const { createSemanticRecall } = await import('./lib/semantic-recall.js');
-
-       memoryDeps.embeddingClient = createEmbeddingClient();
-       memoryDeps.vectorStore = await createVectorStore({ dbPath });
-       memoryDeps.vectorIndexer = createVectorIndexer({
-         vectorStore: memoryDeps.vectorStore,
-         embeddingClient: memoryDeps.embeddingClient,
-       });
-       memoryDeps.semanticRecall = createSemanticRecall({
-         vectorStore: memoryDeps.vectorStore,
-         embeddingClient: memoryDeps.embeddingClient,
-       });
-
-       console.log('[TLC] Memory pipeline initialized (vector store at', dbPath + ')');
-     } catch (err) {
-       console.warn('[TLC] Memory pipeline unavailable:', err.message);
-       // Non-fatal: server works without vector store
-     }
-   })();
+ async function initMemoryPipeline() {
+   if (memoryInitPromise) return memoryInitPromise;
+   memoryInitPromise = (async () => {
+     try {
+       const dbPath = path.join(PROJECT_DIR, '.tlc', 'context.sqlite');
+
+       // Ensure directory exists
+       const dbDir = path.dirname(dbPath);
+       if (!fs.existsSync(dbDir)) {
+         fs.mkdirSync(dbDir, { recursive: true });
+       }
+
+       const [{ createContextEngine }, { createSearchPipeline }, { createSearch }, { indexProject }] = await Promise.all([
+         import('@tlc/context-engine'),
+         import('@tlc/context-engine/search-pipeline'),
+         import('@tlc/context-engine/retrieval/search'),
+         import('@tlc/context-engine/retrieval/indexer'),
+       ]);
+
+       const contextEngine = createContextEngine({ dbPath });
+       const search = createSearch({
+         embeddingStore: {
+           findSimilar(db, queryVec, options) {
+             return contextEngine.embeddings.findSimilar(queryVec, options);
+           },
+         },
+       });
+       const searchPipeline = createSearchPipeline({
+         search: (db, query, options = {}) => search(db, query, options),
+       });
+
+       memoryDeps.contextEngine = contextEngine;
+       memoryDeps.contextSearch = searchPipeline;
+       memoryDeps.vectorIndexer = {
+         indexAll: async (projectRoot) => indexProject(contextEngine.db, projectRoot),
+       };
+       memoryDeps.semanticRecall = {
+         recall: async (query, _context = {}, options = {}) => {
+           const scope = options.scope ? [options.scope] : undefined;
+           const results = await searchPipeline(contextEngine.db, query, {
+             limit: options.limit,
+             scopes: scope,
+             queryVec: options.queryVec ?? null,
+           });
+
+           return results.map((result) => ({
+             id: result.uri,
+             uri: result.uri,
+             text: result.body || result.title || '',
+             title: result.title || result.uri,
+             body: result.body || '',
+             score: Number.isFinite(result.score) ? result.score : 0,
+             type: result.source === 'context' ? 'context' : 'codedb',
+             source: {
+               sourceFile: result.uri,
+               provider: result.source,
+             },
+             sourceFile: result.uri,
+             permanent: false,
+           }));
+         },
+       };
+
+       console.log('[TLC] Memory pipeline initialized (context engine at', dbPath + ')');
+     } catch (err) {
+       console.warn('[TLC] Memory pipeline unavailable:', err.message);
+       // Non-fatal: server works without vector store
+     }
+   })();
    return memoryInitPromise;
  }
 
  // Start memory init in background (non-blocking)
  initMemoryPipeline();
 
- const memoryApi = createMemoryApi({
-   semanticRecall: { recall: async (...args) => {
-     await initMemoryPipeline();
-     return memoryDeps.semanticRecall ? memoryDeps.semanticRecall.recall(...args) : [];
-   }},
-   vectorIndexer: { indexAll: async (...args) => {
-     await initMemoryPipeline();
-     return memoryDeps.vectorIndexer ? memoryDeps.vectorIndexer.indexAll(...args) : { indexed: 0 };
-   }},
-   richCapture: { processChunk: async () => ({ stored: false }) },
-   embeddingClient: { embed: async (...args) => {
-     await initMemoryPipeline();
-     return memoryDeps.embeddingClient ? memoryDeps.embeddingClient.embed(...args) : [];
-   }},
-   memoryStore: (() => {
+ const memoryApi = createMemoryApi({
+   semanticRecall: { recall: async (...args) => {
+     await initMemoryPipeline();
+     return memoryDeps.semanticRecall ? memoryDeps.semanticRecall.recall(...args) : [];
+   }},
+   vectorIndexer: { indexAll: async (...args) => {
+     await initMemoryPipeline();
+     return memoryDeps.vectorIndexer ? memoryDeps.vectorIndexer.indexAll(...args) : { indexed: 0 };
+   }},
+   richCapture: { processChunk: async () => ({ stored: false }) },
+   embeddingClient: { embed: async () => [] },
+   memoryStore: (() => {
    const adapter = createMemoryStoreAdapter(PROJECT_DIR);
    return {
      listConversations: async () => ({ items: [], total: 0 }), // TODO Phase 74: rich conversation capture
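The `initMemoryPipeline` guard in this hunk follows a promise-memoization pattern: the first caller kicks off the async init, and every later caller awaits the same promise, so the engine is constructed at most once. A minimal standalone sketch of that pattern (hypothetical `once` helper, not the server's code):

```javascript
// Promise memoization: run an async initializer at most once and
// share its result with every caller, as initMemoryPipeline does.
function once(initFn) {
  let promise = null;
  return () => {
    if (!promise) promise = initFn();
    return promise;
  };
}

let calls = 0;
const init = once(async () => {
  calls += 1; // heavy setup (DB open, dynamic imports) would go here
  return 'ready';
});

Promise.all([init(), init(), init()]).then((results) => {
  console.log(calls, results[0]); // initializer ran once, all callers got the result
});
```

Caching the promise (rather than the resolved value) is what makes concurrent first calls safe: they all attach to the same in-flight initialization.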
@@ -10,6 +10,7 @@
    "dependencies": {
      "@electric-sql/pglite": "^0.3.15",
      "@electric-sql/pglite-socket": "^0.0.20",
+     "@tlc/context-engine": "file:../../tlc-context-engine",
      "better-sqlite3": "^12.6.2",
      "chokidar": "^3.5.3",
      "cookie-parser": "^1.4.7",
@@ -22,6 +23,7 @@
      "sqlite-vec": "^0.1.7-alpha.2",
      "ssh2": "^1.17.0",
      "sudo-prompt": "^9.2.1",
+     "tlc-context-engine": "^0.1.0",
      "typescript": "^5.9.3",
      "ws": "^8.14.2"
    },
@@ -33,6 +35,17 @@
      "vitest": "^4.0.18"
    }
  },
+ "../../tlc-context-engine": {
+   "version": "0.1.0",
+   "license": "MIT",
+   "devDependencies": {
+     "better-sqlite3": "^11.9.1",
+     "vitest": "^3.2.4"
+   },
+   "peerDependencies": {
+     "better-sqlite3": ">=11.0.0"
+   }
+ },
  "node_modules/@balena/dockerignore": {
    "version": "1.0.2",
    "resolved": "https://registry.npmjs.org/@balena/dockerignore/-/dockerignore-1.0.2.tgz",
@@ -1009,6 +1022,10 @@
    "dev": true,
    "license": "MIT"
  },
+ "node_modules/@tlc/context-engine": {
+   "resolved": "../../tlc-context-engine",
+   "link": true
+ },
  "node_modules/@types/chai": {
    "version": "5.2.3",
    "resolved": "https://registry.npmjs.org/@types/chai/-/chai-5.2.3.tgz",
@@ -3714,6 +3731,15 @@
    "node": ">=14.0.0"
  }
  },
+ "node_modules/tlc-context-engine": {
+   "version": "0.1.0",
+   "resolved": "https://registry.npmjs.org/tlc-context-engine/-/tlc-context-engine-0.1.0.tgz",
+   "integrity": "sha512-OhLcPBmHuAYSv3XniJPnGcyDB2RaBi8wk/TsWQAaokkX7NkYhDFWWCrVsOqHTSKNMVf1M+xx+gvUFib5qaUCXA==",
+   "license": "MIT",
+   "peerDependencies": {
+     "better-sqlite3": ">=11.0.0"
+   }
+ },
  "node_modules/to-regex-range": {
    "version": "5.0.1",
    "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
@@ -27,6 +27,7 @@
      "sqlite-vec": "^0.1.7-alpha.2",
      "ssh2": "^1.17.0",
      "sudo-prompt": "^9.2.1",
+     "tlc-context-engine": "^0.1.0",
      "typescript": "^5.9.3",
      "ws": "^8.14.2"
    },
package/server/setup.sh CHANGED
@@ -1,271 +1,271 @@
  #!/bin/bash
  #
  # TLC Server Setup Script
  # Installs Docker and other requirements for TLC dev server
  #
  # Usage: sudo ./setup.sh
  #
 
  set -e
 
  # Colors for output
  RED='\033[0;31m'
  GREEN='\033[0;32m'
  YELLOW='\033[1;33m'
  NC='\033[0m' # No Color
 
  log_info() {
    echo -e "${GREEN}[TLC]${NC} $1"
  }
 
  log_warn() {
    echo -e "${YELLOW}[TLC]${NC} $1"
  }
 
  log_error() {
    echo -e "${RED}[TLC]${NC} $1"
  }
 
  # Check if running as root
  if [ "$EUID" -ne 0 ]; then
    log_error "Please run with sudo: sudo ./setup.sh"
    exit 1
  fi
 
  # Get the actual user (not root)
  ACTUAL_USER=${SUDO_USER:-$USER}
  if [ "$ACTUAL_USER" = "root" ]; then
    log_error "Please run as a regular user with sudo, not as root directly"
    exit 1
  fi
 
  log_info "Setting up TLC server requirements for user: $ACTUAL_USER"
 
  # Detect OS
  detect_os() {
    if [ -f /etc/os-release ]; then
      . /etc/os-release
      OS=$ID
      OS_VERSION=$VERSION_ID
    elif [ "$(uname)" = "Darwin" ]; then
      OS="macos"
      OS_VERSION=$(sw_vers -productVersion)
    else
      OS="unknown"
    fi
    echo "$OS"
  }
 
  OS=$(detect_os)
  log_info "Detected OS: $OS"
 
  # Install Docker based on OS
  install_docker() {
    if command -v docker &> /dev/null; then
      log_info "Docker already installed: $(docker --version)"
      return 0
    fi
 
    log_info "Installing Docker..."
 
    case $OS in
      ubuntu|debian|pop)
        # Remove old versions
        apt-get remove -y docker docker-engine docker.io containerd runc 2>/dev/null || true
 
        # Install prerequisites
        apt-get update
        apt-get install -y \
          ca-certificates \
          curl \
          gnupg \
          lsb-release
 
        # Add Docker's official GPG key
        install -m 0755 -d /etc/apt/keyrings
        curl -fsSL https://download.docker.com/linux/$OS/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
        chmod a+r /etc/apt/keyrings/docker.gpg
 
        # Set up repository
        echo \
          "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/$OS \
          $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
 
        # Install Docker
        apt-get update
        apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
 
        log_info "Docker installed successfully"
        ;;
 
      fedora|rhel|centos)
        # Install prerequisites
        dnf -y install dnf-plugins-core
 
        # Add Docker repo
        dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
 
        # Install Docker
        dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
 
        log_info "Docker installed successfully"
        ;;
 
      arch|manjaro)
        pacman -S --noconfirm docker docker-compose
        log_info "Docker installed successfully"
        ;;
 
      macos)
        log_warn "Please install Docker Desktop from https://www.docker.com/products/docker-desktop"
        log_warn "After installation, enable WSL integration if using WSL"
        return 1
        ;;
 
      *)
        log_error "Unsupported OS: $OS"
        log_error "Please install Docker manually: https://docs.docker.com/engine/install/"
        return 1
        ;;
    esac
  }
 
  # Configure Docker for non-root access
  configure_docker_user() {
    log_info "Configuring Docker for user: $ACTUAL_USER"
 
    # Add user to docker group
    if ! getent group docker > /dev/null; then
      groupadd docker
    fi
 
    usermod -aG docker "$ACTUAL_USER"
    log_info "Added $ACTUAL_USER to docker group"
  }
 
  # Start Docker service
  start_docker() {
    log_info "Starting Docker service..."
 
    # Check if systemd is available (native Linux)
    if command -v systemctl &> /dev/null && systemctl is-system-running &> /dev/null; then
      systemctl enable docker
      systemctl start docker
    # Check if we're in WSL
    elif grep -qi microsoft /proc/version 2>/dev/null; then
      # WSL - use service command
      service docker start || true
    else
      # Try service command as fallback
      service docker start || true
    fi
 
    # Wait for Docker to be ready
    log_info "Waiting for Docker to be ready..."
    for i in {1..30}; do
      if docker info &> /dev/null; then
        log_info "Docker is ready"
        return 0
      fi
      sleep 1
    done
 
    log_warn "Docker may not be fully started. You might need to restart your terminal or run: sudo service docker start"
  }
 
  # Pull PostgreSQL image
  pull_postgres_image() {
    log_info "Pulling PostgreSQL image (this may take a moment)..."
 
    # Run as the actual user to ensure proper permissions
    su - "$ACTUAL_USER" -c "docker pull postgres:16-alpine" 2>/dev/null || \
      docker pull postgres:16-alpine
 
    log_info "PostgreSQL image ready"
  }
 
  # Install Node.js if not present
  install_nodejs() {
    if command -v node &> /dev/null; then
      NODE_VERSION=$(node --version)
      log_info "Node.js already installed: $NODE_VERSION"
      return 0
    fi
 
    log_info "Installing Node.js..."
 
    case $OS in
      ubuntu|debian|pop)
        # Install Node.js 20.x LTS
        curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
        apt-get install -y nodejs
        ;;
 
      fedora|rhel|centos)
        curl -fsSL https://rpm.nodesource.com/setup_20.x | bash -
        dnf install -y nodejs
        ;;
 
      arch|manjaro)
        pacman -S --noconfirm nodejs npm
        ;;
 
      macos)
        log_warn "Please install Node.js from https://nodejs.org/"
        return 1
        ;;
 
      *)
        log_warn "Please install Node.js manually: https://nodejs.org/"
        return 1
        ;;
    esac
 
    log_info "Node.js installed: $(node --version)"
  }
 
  # Main setup
  main() {
    echo ""
    echo " ████████╗██╗ ██████╗"
    echo " ╚══██╔══╝██║ ██╔════╝"
    echo " ██║ ██║ ██║"
    echo " ██║ ██║ ██║"
    echo " ██║ ███████╗╚██████╗"
    echo " ╚═╝ ╚══════╝ ╚═════╝"
    echo ""
    echo " TLC Server Setup"
    echo ""
 
    # Install Docker
    install_docker
 
    # Configure Docker for user
    configure_docker_user
 
    # Start Docker
    start_docker
 
    # Pull PostgreSQL image
    pull_postgres_image
 
    # Check/install Node.js
    install_nodejs
 
    echo ""
    log_info "=========================================="
    log_info "Setup complete!"
    log_info "=========================================="
    echo ""
    log_info "IMPORTANT: Log out and log back in (or restart your terminal)"
    log_info "for Docker group permissions to take effect."
    echo ""
    log_info "Then you can run TLC server with:"
    log_info " cd your-project && npx tlc-claude-code server"
    echo ""
    log_info "Or test Docker now with:"
    log_info " sudo docker run hello-world"
    echo ""
  }
 
  main "$@"
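The `detect_os` helper in setup.sh sources `/etc/os-release` and reads its `ID` and `VERSION_ID` fields. The same key=value parsing can be sketched in JavaScript; `parseOsRelease` is a hypothetical helper for illustration only, not part of the package.

```javascript
// Hypothetical sketch: parse /etc/os-release-style KEY=value text the
// way detect_os sources it, exposing the ID and VERSION_ID fields that
// the script uses as OS and OS_VERSION.
function parseOsRelease(text) {
  const fields = {};
  for (const line of text.split('\n')) {
    const eq = line.indexOf('=');
    if (eq === -1) continue; // skip blanks and comments
    const key = line.slice(0, eq).trim();
    const value = line.slice(eq + 1).trim().replace(/^"|"$/g, ''); // strip quotes
    if (key) fields[key] = value;
  }
  return { os: fields.ID, osVersion: fields.VERSION_ID };
}

const sample = 'NAME="Ubuntu"\nID=ubuntu\nVERSION_ID="22.04"\n';
console.log(parseOsRelease(sample)); // { os: 'ubuntu', osVersion: '22.04' }
```

The parsed `os` value is what the script's `case $OS in` blocks branch on (`ubuntu|debian|pop`, `fedora|rhel|centos`, and so on).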