lynkr 8.0.0 → 9.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.lynkr/telemetry.db +0 -0
- package/.lynkr/telemetry.db-shm +0 -0
- package/.lynkr/telemetry.db-wal +0 -0
- package/README.md +196 -322
- package/lynkr-skill.tar.gz +0 -0
- package/package.json +4 -3
- package/src/api/openai-router.js +64 -13
- package/src/api/providers-handler.js +171 -3
- package/src/api/router.js +9 -2
- package/src/clients/circuit-breaker.js +10 -247
- package/src/clients/codex-process.js +342 -0
- package/src/clients/codex-utils.js +143 -0
- package/src/clients/databricks.js +210 -63
- package/src/clients/resilience.js +540 -0
- package/src/clients/retry.js +22 -167
- package/src/clients/standard-tools.js +23 -0
- package/src/config/index.js +77 -0
- package/src/context/compression.js +42 -9
- package/src/context/distill.js +492 -0
- package/src/orchestrator/index.js +48 -8
- package/src/routing/complexity-analyzer.js +258 -5
- package/src/routing/index.js +12 -2
- package/src/routing/latency-tracker.js +148 -0
- package/src/routing/model-tiers.js +2 -0
- package/src/routing/quality-scorer.js +113 -0
- package/src/routing/telemetry.js +464 -0
- package/src/server.js +13 -12
- package/src/tools/code-graph.js +538 -0
- package/src/tools/code-mode.js +304 -0
- package/src/tools/index.js +4 -0
- package/src/tools/lazy-loader.js +18 -0
- package/src/tools/mcp-remote.js +7 -0
- package/src/tools/smart-selection.js +11 -0
- package/src/tools/tinyfish.js +358 -0
- package/src/tools/truncate.js +1 -0
- package/src/utils/payload.js +206 -0
- package/src/utils/perf-timer.js +80 -0
- package/.github/FUNDING.yml +0 -15
- package/.github/workflows/README.md +0 -215
- package/.github/workflows/ci.yml +0 -69
- package/.github/workflows/index.yml +0 -62
- package/.github/workflows/web-tools-tests.yml +0 -56
- package/CITATIONS.bib +0 -6
- package/DEPLOYMENT.md +0 -1001
- package/LYNKR-TUI-PLAN.md +0 -984
- package/PERFORMANCE-REPORT.md +0 -866
- package/PLAN-per-client-model-routing.md +0 -252
- package/docs/42642f749da6234f41b6b425c3bb07c9.txt +0 -1
- package/docs/BingSiteAuth.xml +0 -4
- package/docs/docs-style.css +0 -478
- package/docs/docs.html +0 -198
- package/docs/google5be250e608e6da39.html +0 -1
- package/docs/index.html +0 -577
- package/docs/index.md +0 -584
- package/docs/robots.txt +0 -4
- package/docs/sitemap.xml +0 -44
- package/docs/style.css +0 -1223
- package/docs/toon-integration-spec.md +0 -130
- package/documentation/README.md +0 -101
- package/documentation/api.md +0 -806
- package/documentation/claude-code-cli.md +0 -679
- package/documentation/codex-cli.md +0 -397
- package/documentation/contributing.md +0 -571
- package/documentation/cursor-integration.md +0 -734
- package/documentation/docker.md +0 -874
- package/documentation/embeddings.md +0 -762
- package/documentation/faq.md +0 -713
- package/documentation/features.md +0 -403
- package/documentation/headroom.md +0 -519
- package/documentation/installation.md +0 -758
- package/documentation/memory-system.md +0 -476
- package/documentation/production.md +0 -636
- package/documentation/providers.md +0 -1009
- package/documentation/routing.md +0 -476
- package/documentation/testing.md +0 -629
- package/documentation/token-optimization.md +0 -325
- package/documentation/tools.md +0 -697
- package/documentation/troubleshooting.md +0 -969
- package/final-test.js +0 -33
- package/headroom-sidecar/config.py +0 -93
- package/headroom-sidecar/requirements.txt +0 -14
- package/headroom-sidecar/server.py +0 -451
- package/monitor-agents.sh +0 -31
- package/scripts/audit-log-reader.js +0 -399
- package/scripts/compact-dictionary.js +0 -204
- package/scripts/test-deduplication.js +0 -448
- package/src/db/database.sqlite +0 -0
- package/te +0 -11622
- package/test/README.md +0 -212
- package/test/azure-openai-config.test.js +0 -213
- package/test/azure-openai-error-resilience.test.js +0 -238
- package/test/azure-openai-format-conversion.test.js +0 -354
- package/test/azure-openai-integration.test.js +0 -287
- package/test/azure-openai-routing.test.js +0 -175
- package/test/azure-openai-streaming.test.js +0 -171
- package/test/bedrock-integration.test.js +0 -457
- package/test/comprehensive-test-suite.js +0 -928
- package/test/config-validation.test.js +0 -207
- package/test/cursor-integration.test.js +0 -484
- package/test/format-conversion.test.js +0 -578
- package/test/hybrid-routing-integration.test.js +0 -269
- package/test/hybrid-routing-performance.test.js +0 -428
- package/test/llamacpp-integration.test.js +0 -882
- package/test/lmstudio-integration.test.js +0 -347
- package/test/memory/extractor.test.js +0 -398
- package/test/memory/retriever.test.js +0 -613
- package/test/memory/retriever.test.js.bak +0 -585
- package/test/memory/search.test.js +0 -537
- package/test/memory/search.test.js.bak +0 -389
- package/test/memory/store.test.js +0 -344
- package/test/memory/store.test.js.bak +0 -312
- package/test/memory/surprise.test.js +0 -300
- package/test/memory-performance.test.js +0 -472
- package/test/openai-integration.test.js +0 -683
- package/test/openrouter-error-resilience.test.js +0 -418
- package/test/passthrough-mode.test.js +0 -385
- package/test/performance-benchmark.js +0 -351
- package/test/performance-tests.js +0 -528
- package/test/routing.test.js +0 -225
- package/test/toon-compression.test.js +0 -131
- package/test/web-tools.test.js +0 -329
- package/test-agents-simple.js +0 -43
- package/test-cli-connection.sh +0 -33
- package/test-learning-unit.js +0 -126
- package/test-learning.js +0 -112
- package/test-parallel-agents.sh +0 -124
- package/test-parallel-direct.js +0 -155
- package/test-subagents.sh +0 -117
@@ -1,758 +0,0 @@
# Installation Guide

Complete installation instructions for all supported methods. Choose the option that best fits your workflow.

---

## Prerequisites

Before installing Lynkr, ensure you have:

- **Node.js 18+** (required for the global `fetch` API)
- **npm** (bundled with Node.js)
- At least one of the following:
  - **Databricks account** with Claude serving endpoint
  - **AWS account** with Bedrock access
  - **OpenRouter API key** (get from [openrouter.ai/keys](https://openrouter.ai/keys))
  - **Azure OpenAI** or **Azure Anthropic** subscription
  - **OpenAI API key** (get from [platform.openai.com/api-keys](https://platform.openai.com/api-keys))
  - **Moonshot AI API key** (get from [platform.moonshot.ai](https://platform.moonshot.ai))
  - **Ollama** installed locally (for free local models)
- Optional: **Docker** for containerized deployment or MCP sandboxing
- Optional: **Claude Code CLI** (latest release) for CLI usage

---

## Installation Methods

### Method 1: NPM Package (Recommended)

**Fastest way to get started:**

```bash
# Install globally
npm install -g lynkr

# Verify installation
lynkr --version
```

**Start the server:**
```bash
lynkr start
# Or simply:
lynkr
```

**Benefits:**
- ✅ Global `lynkr` command available everywhere
- ✅ Automatic updates via `npm update -g lynkr`
- ✅ No repository cloning required
- ✅ Works immediately after install

---

### Method 2: Quick Install Script (curl)

**One-line installation:**

```bash
curl -fsSL https://raw.githubusercontent.com/vishalveerareddy123/Lynkr/main/install.sh | bash
```

This will:
- Clone Lynkr to `~/.lynkr`
- Install dependencies
- Create a default `.env` file
- Set up the `lynkr` command

**Custom installation directory:**
```bash
curl -fsSL https://raw.githubusercontent.com/vishalveerareddy123/Lynkr/main/install.sh | bash -s -- --dir /opt/lynkr
```

---

### Method 3: Git Clone (For Development)

**Clone from source:**

```bash
# Clone repository
git clone https://github.com/vishalveerareddy123/Lynkr.git
cd Lynkr

# Install dependencies
npm install

# Create .env from example
cp .env.example .env

# Edit .env with your provider credentials
nano .env

# Start server
npm start
```

**Development mode (auto-restart on changes):**
```bash
npm run dev
```

**Benefits:**
- ✅ Full source code access
- ✅ Easy to contribute changes
- ✅ Run latest development version
- ✅ Auto-restart in dev mode

---

### Method 4: Homebrew (macOS/Linux)

**Install via Homebrew:**

```bash
# Add the Lynkr tap
brew tap vishalveerareddy123/lynkr

# Install Lynkr
brew install lynkr

# Verify installation
lynkr --version

# Start server
lynkr start
```

**Update Lynkr:**
```bash
brew upgrade lynkr
```

**Benefits:**
- ✅ Native macOS/Linux package management
- ✅ Automatic dependency resolution
- ✅ Easy updates via Homebrew
- ✅ System-wide installation

---

### Method 5: Docker (Production)

**Docker Compose (Recommended for Production):**

```bash
# Clone repository
git clone https://github.com/vishalveerareddy123/Lynkr.git
cd Lynkr

# Copy environment template
cp .env.example .env

# Edit .env with your credentials
nano .env

# Start services (Lynkr + Ollama)
docker-compose up -d

# Pull Ollama model (if using Ollama)
docker exec ollama ollama pull llama3.1:8b

# Verify it's running
curl http://localhost:8081/health/live
```

**Standalone Docker:**

```bash
# Build image
docker build -t lynkr:latest .

# Run container
docker run -d \
  --name lynkr \
  -p 8081:8081 \
  -e MODEL_PROVIDER=databricks \
  -e DATABRICKS_API_BASE=https://your-workspace.databricks.com \
  -e DATABRICKS_API_KEY=your-key \
  -v $(pwd)/data:/app/data \
  lynkr:latest
```

**Benefits:**
- ✅ Isolated environment
- ✅ Easy deployment to Kubernetes/cloud
- ✅ Bundled with Ollama (docker-compose)
- ✅ Volume persistence for data
- ✅ Production-ready configuration

See [Docker Deployment Guide](docker.md) for advanced options (GPU support, K8s, health checks).

---

## Configuration

After installation, configure Lynkr for your chosen provider:

### Creating Configuration File

**Option A: Environment Variables (Recommended for Quick Start)**
```bash
export MODEL_PROVIDER=databricks
export DATABRICKS_API_BASE=https://your-workspace.databricks.com
export DATABRICKS_API_KEY=your-key
lynkr start
```

**Option B: .env File (Recommended for Production)**
```bash
# Copy example file
cp .env.example .env

# Edit with your credentials
nano .env
```

Example `.env` file:
```env
# Core Configuration
MODEL_PROVIDER=databricks
PORT=8081
LOG_LEVEL=info
WORKSPACE_ROOT=/path/to/your/projects

# Databricks Configuration
DATABRICKS_API_BASE=https://your-workspace.cloud.databricks.com
DATABRICKS_API_KEY=dapi1234567890abcdef

# Tool Execution
TOOL_EXECUTION_MODE=server

# Memory System (optional)
MEMORY_ENABLED=true
MEMORY_RETRIEVAL_LIMIT=5
```
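
Node itself does not read `.env` files; Lynkr presumably loads the file at startup with a dotenv-style loader (an assumption about the implementation, not a statement about Lynkr's source). A minimal sketch of the usual precedence rule, where variables already exported in the shell win over entries in the file:

```javascript
// Illustrative dotenv-style loader (NOT Lynkr's actual code): parses
// KEY=value lines and, following the common dotenv convention, never
// overrides a variable that is already set in the environment.
function applyDotenv(text, env) {
  for (const line of text.split("\n")) {
    const m = line.match(/^([A-Z0-9_]+)=(.*)$/);
    if (m && !(m[1] in env)) env[m[1]] = m[2];
  }
  return env;
}

// PORT was exported in the shell, so the .env value is ignored.
const env = applyDotenv(
  "# Core Configuration\nMODEL_PROVIDER=databricks\nPORT=8081",
  { PORT: "9090" }
);
console.log(env.MODEL_PROVIDER); // "databricks"
console.log(env.PORT);           // "9090"
```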

---

## Understanding Provider Selection

Lynkr has two modes for selecting which AI provider handles your requests:

| Mode | Config | How it works | Best for |
|------|--------|--------------|----------|
| **Static** | `MODEL_PROVIDER=ollama` | All requests go to one provider | Simple setups, single provider |
| **Tier-based** | All 4 `TIER_*` vars set | Requests route by complexity score | Cost optimization, multi-provider |

**Static mode** — Set `MODEL_PROVIDER` to your provider. Every request goes there. Simple and predictable.

**Tier-based mode** — Set all 4 `TIER_*` env vars (`TIER_SIMPLE`, `TIER_MEDIUM`, `TIER_COMPLEX`, `TIER_REASONING`). Each request is scored for complexity and routed to the appropriate tier's provider. When all 4 are set, they **override** `MODEL_PROVIDER` for routing decisions.

> **Note:** If only some `TIER_*` vars are set (not all 4), tier routing is disabled and `MODEL_PROVIDER` is used instead. `MODEL_PROVIDER` is always required as a fallback default even when tiers are configured.

See [Tier-Based Routing](#tier-based-routing-cost-optimization) below for full setup, or pick a single provider from the Quick Start examples to get running immediately.
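
The all-or-nothing rule in the note can be expressed as a small decision function. This is an illustrative sketch (the function and variable names are ours, not Lynkr's):

```javascript
// Sketch of the selection rule described above: tier routing activates
// only when ALL four TIER_* vars are set; otherwise the static
// MODEL_PROVIDER is used for every request.
const TIER_VARS = ["TIER_SIMPLE", "TIER_MEDIUM", "TIER_COMPLEX", "TIER_REASONING"];

function selectProvider(env, tier) {
  const tierRoutingEnabled = TIER_VARS.every((v) => Boolean(env[v]));
  if (tierRoutingEnabled) {
    // Tier values look like "ollama:llama3.2"; the part before ":" is the provider.
    return env[`TIER_${tier}`].split(":")[0];
  }
  return env.MODEL_PROVIDER; // static mode, and the required fallback default
}

// Static mode:
console.log(selectProvider({ MODEL_PROVIDER: "ollama" }, "SIMPLE")); // "ollama"

// Partial tier config (only 1 of 4 set): tier routing stays disabled.
console.log(
  selectProvider({ MODEL_PROVIDER: "databricks", TIER_SIMPLE: "ollama:llama3.2" }, "SIMPLE")
); // "databricks"
```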

---

## Quick Start Examples

Choose your provider and follow the setup steps:

### 1. Databricks (Production)

**Best for:** Enterprise production use, Claude Sonnet 4.5, Claude Opus 4.5

```bash
# Install
npm install -g lynkr

# Configure
export MODEL_PROVIDER=databricks
export DATABRICKS_API_BASE=https://your-workspace.cloud.databricks.com
export DATABRICKS_API_KEY=dapi1234567890abcdef

# Start
lynkr start
```

**Get Databricks credentials:**
1. Log in to your Databricks workspace
2. Go to **Settings** → **User Settings**
3. Click **Generate New Token**
4. Copy the token (this is your `DATABRICKS_API_KEY`)
5. Your workspace URL looks like `https://your-workspace.cloud.databricks.com`

---

### 2. AWS Bedrock (100+ Models)

**Best for:** AWS ecosystem, multi-model flexibility, Claude + alternatives

```bash
# Install
npm install -g lynkr

# Configure
export MODEL_PROVIDER=bedrock
export AWS_BEDROCK_API_KEY=AKIAIOSFODNN7EXAMPLE
export AWS_BEDROCK_REGION=us-east-1
export AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0

# Start
lynkr start
```

**Get AWS Bedrock credentials:**
1. Log in to the AWS Console
2. Navigate to **IAM** → **Security Credentials**
3. Create a new access key
4. Enable Bedrock in your region (us-east-1, us-west-2, etc.)
5. Request model access in the Bedrock console

**Popular Bedrock models:**
- `anthropic.claude-3-5-sonnet-20241022-v2:0` - Claude 3.5 Sonnet
- `us.anthropic.claude-sonnet-4-5-20250929-v1:0` - Claude 4.5 Sonnet
- `amazon.titan-text-express-v1` - Amazon Titan
- `meta.llama3-70b-instruct-v1:0` - Llama 3
- See [BEDROCK_MODELS.md](../BEDROCK_MODELS.md) for the complete list

---

### 3. OpenRouter (Simplest Cloud)

**Best for:** Quick setup, 100+ models, cost flexibility

```bash
# Install
npm install -g lynkr

# Configure
export MODEL_PROVIDER=openrouter
export OPENROUTER_API_KEY=sk-or-v1-your-key
export OPENROUTER_MODEL=anthropic/claude-3.5-sonnet

# Start
lynkr start
```

**Get OpenRouter API key:**
1. Visit [openrouter.ai](https://openrouter.ai)
2. Sign in with GitHub, Google, or email
3. Go to [openrouter.ai/keys](https://openrouter.ai/keys)
4. Create a new API key
5. Add credits (pay-as-you-go, no subscription)

**Popular OpenRouter models:**
- `anthropic/claude-3.5-sonnet` - Claude 3.5 Sonnet
- `openai/gpt-4o` - GPT-4o
- `openai/gpt-4o-mini` - GPT-4o mini (affordable)
- `google/gemini-pro-1.5` - Gemini Pro
- `meta-llama/llama-3.1-70b-instruct` - Llama 3.1
- See [openrouter.ai/models](https://openrouter.ai/models) for the complete list

---

### 4. Ollama (100% Local, FREE)

**Best for:** Local development, privacy, offline use, no API costs

```bash
# Install Ollama first
brew install ollama  # macOS
# Or download from: https://ollama.ai/download

# Start Ollama service
ollama serve

# Pull a model (in separate terminal)
ollama pull llama3.1:8b

# Install Lynkr
npm install -g lynkr

# Configure
export MODEL_PROVIDER=ollama
export OLLAMA_MODEL=llama3.1:8b

# Start
lynkr start
```

**Recommended Ollama models for Claude Code:**
- `llama3.1:8b` - Good balance (tool calling supported)
- `llama3.2` - Latest Llama (tool calling supported)
- `qwen2.5:14b` - Strong reasoning (larger model; 7b struggles with tools)
- `mistral:7b-instruct` - Fast and capable

**Model sizes:**
- 7B models: ~4-5GB download
- 8B models: ~4.7GB download
- 14B models: ~8GB download
- 32B models: ~18GB download

---

### 5. llama.cpp (Maximum Performance)

**Best for:** Custom GGUF models, maximum control, optimized inference

```bash
# Install and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make

# Download a GGUF model
wget https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/qwen2.5-coder-7b-instruct-q4_k_m.gguf

# Start llama-server
./llama-server -m qwen2.5-coder-7b-instruct-q4_k_m.gguf --port 8080

# In separate terminal, install Lynkr
npm install -g lynkr

# Configure
export MODEL_PROVIDER=llamacpp
export LLAMACPP_ENDPOINT=http://localhost:8080
export LLAMACPP_MODEL=qwen2.5-coder-7b

# Start
lynkr start
```

**llama.cpp vs Ollama:**

| Feature | Ollama | llama.cpp |
|---------|--------|-----------|
| Setup | Easy (app) | Manual (compile) |
| Model Format | Ollama-specific | Any GGUF model |
| Performance | Good | Excellent (optimized C++) |
| GPU Support | Yes | Yes (CUDA, Metal, ROCm, Vulkan) |
| Memory Usage | Higher | Lower (quantization options) |
| API | Custom | OpenAI-compatible |
| Flexibility | Limited models | Any GGUF from HuggingFace |

---

### 6. Azure OpenAI

**Best for:** Azure integration, Microsoft ecosystem, GPT-4o, o1, o3

```bash
# Install
npm install -g lynkr

# Configure (IMPORTANT: Use full endpoint URL)
export MODEL_PROVIDER=azure-openai
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT/chat/completions?api-version=2025-01-01-preview"
export AZURE_OPENAI_API_KEY=your-azure-api-key
export AZURE_OPENAI_DEPLOYMENT=gpt-4o

# Start
lynkr start
```

**Get Azure OpenAI credentials:**
1. Log in to [Azure Portal](https://portal.azure.com)
2. Navigate to the **Azure OpenAI** service
3. Go to **Keys and Endpoint**
4. Copy **KEY 1** (this is your API key)
5. Copy the **Endpoint** URL
6. Create a deployment (gpt-4o, gpt-4o-mini, etc.)

**Supported deployments:**
- `gpt-4o` - Latest GPT-4o
- `gpt-4o-mini` - Smaller, faster GPT-4o
- `gpt-5-chat` - GPT-5 (if available in your region)
- `o1-preview` - Reasoning model
- `o3-mini` - Latest reasoning model

---

### 7. Azure Anthropic

**Best for:** Azure-hosted Claude models

```bash
# Install
npm install -g lynkr

# Configure
export MODEL_PROVIDER=azure-anthropic
export AZURE_ANTHROPIC_ENDPOINT=https://your-resource.services.ai.azure.com/anthropic/v1/messages
export AZURE_ANTHROPIC_API_KEY=your-azure-api-key

# Start
lynkr start
```

---

### 8. OpenAI (Direct)

**Best for:** Direct OpenAI API access, lowest latency to OpenAI

```bash
# Install
npm install -g lynkr

# Configure
export MODEL_PROVIDER=openai
export OPENAI_API_KEY=sk-your-openai-api-key
export OPENAI_MODEL=gpt-4o

# Start
lynkr start
```

**Get OpenAI API key:**
1. Visit [platform.openai.com](https://platform.openai.com)
2. Sign up or log in
3. Go to [API Keys](https://platform.openai.com/api-keys)
4. Create a new API key
5. Add credits to your account

**Supported models:**
- `gpt-4o` - Latest GPT-4o
- `gpt-4o-mini` - Affordable GPT-4o
- `o1-preview` - Reasoning model
- `o1-mini` - Smaller reasoning model

---

### 9. Moonshot AI / Kimi (Affordable Cloud)

**Best for:** Affordable cloud models, thinking/reasoning models

```bash
# Install
npm install -g lynkr

# Configure
export MODEL_PROVIDER=moonshot
export MOONSHOT_API_KEY=sk-your-moonshot-api-key
export MOONSHOT_MODEL=kimi-k2-turbo-preview

# Start
lynkr start
```

**Get Moonshot API key:**
1. Visit [platform.moonshot.ai](https://platform.moonshot.ai)
2. Sign up or log in
3. Create a new API key
4. Add credits to your account

**Available models:**
- `kimi-k2-turbo-preview` - Fast, efficient, tool calling support
- `kimi-k2-thinking` - Chain-of-thought reasoning model

---

### 10. LM Studio (Local with GUI)

**Best for:** Local models with graphical interface

```bash
# Download and install LM Studio from: https://lmstudio.ai

# In LM Studio:
# 1. Download a model (e.g., Qwen2.5-Coder-7B)
# 2. Start local server (port 1234)

# Install Lynkr
npm install -g lynkr

# Configure
export MODEL_PROVIDER=lmstudio
export LMSTUDIO_ENDPOINT=http://localhost:1234

# Start
lynkr start
```

---

## Tier-Based Routing (Cost Optimization)

**Use local Ollama for simple tasks, cloud for complex ones:**

```bash
# Start Ollama
ollama serve
ollama pull llama3.2

# Configure tier-based routing (set all 4 to enable)
export TIER_SIMPLE=ollama:llama3.2
export TIER_MEDIUM=openrouter:openai/gpt-4o-mini
export TIER_COMPLEX=databricks:databricks-claude-sonnet-4-5
export TIER_REASONING=databricks:databricks-claude-sonnet-4-5
export FALLBACK_ENABLED=true
export FALLBACK_PROVIDER=databricks
export DATABRICKS_API_BASE=https://your-workspace.databricks.com
export DATABRICKS_API_KEY=your-key

# Start Lynkr
lynkr start
```

**How it works:**
- Each request is scored for complexity (0-100) and mapped to a tier
- **SIMPLE (0-25)**: Ollama (free, local, fast)
- **MEDIUM (26-50)**: OpenRouter (affordable cloud)
- **COMPLEX (51-75)**: Databricks (most capable)
- **REASONING (76-100)**: Databricks (best available)
- **Provider failures**: Automatic transparent fallback to cloud
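
The score-to-tier bands above reduce to a simple lookup. This is an illustrative sketch only; the real scorer in this release (`src/routing/complexity-analyzer.js`) is more involved:

```javascript
// Mapping a 0-100 complexity score to the four tiers listed above
// (band boundaries from this guide; the function itself is illustrative).
function tierForScore(score) {
  if (score <= 25) return "SIMPLE";    // free, local, fast
  if (score <= 50) return "MEDIUM";    // affordable cloud
  if (score <= 75) return "COMPLEX";   // most capable
  return "REASONING";                  // best available
}

console.log(tierForScore(10)); // "SIMPLE"
console.log(tierForScore(42)); // "MEDIUM"
console.log(tierForScore(90)); // "REASONING"
```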

**Cost savings:**
- **65-100%** for requests routed to local models
- **40-87%** faster for simple requests
- **Privacy**: Simple queries never leave your machine

---

## Verification & Testing

### Check Server Health

```bash
# Basic health check
curl http://localhost:8081/health/live

# Expected response:
# {
#   "status": "ok",
#   "provider": "databricks",
#   "timestamp": "2026-01-11T12:00:00.000Z"
# }
```

### Check Readiness (includes provider connectivity)

```bash
curl http://localhost:8081/health/ready

# Expected response (all checks passing):
# {
#   "status": "ready",
#   "checks": {
#     "database": "ok",
#     "provider": "ok"
#   }
# }
```
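
For scripted checks (CI, container healthchecks), the same responses can be evaluated in Node 18+ with the global `fetch`. The helper names below are ours; the endpoint path and response shape are the ones shown above:

```javascript
// Evaluate a /health/ready response body (shape as documented above).
function isReady(body) {
  return (
    body?.status === "ready" &&
    Object.values(body.checks ?? {}).every((v) => v === "ok")
  );
}

// Usage against a running server (Node 18+ global fetch):
async function checkLynkr(base = "http://localhost:8081") {
  const res = await fetch(`${base}/health/ready`);
  return res.ok && isReady(await res.json());
}

// Offline example using the documented response:
console.log(isReady({ status: "ready", checks: { database: "ok", provider: "ok" } }));   // true
console.log(isReady({ status: "ready", checks: { database: "ok", provider: "down" } })); // false
```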

### Test with Claude Code CLI

```bash
# Configure Claude Code CLI
export ANTHROPIC_BASE_URL=http://localhost:8081
export ANTHROPIC_API_KEY=dummy

# Test simple query
claude "What is 2+2?"

# Should return response from your configured provider
```

---

## Environment Variables Reference

See [Provider Configuration Guide](providers.md) for the complete environment variable reference for all providers.

### Core Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `MODEL_PROVIDER` | Provider to use (`databricks`, `bedrock`, `openrouter`, `ollama`, `llamacpp`, `azure-openai`, `azure-anthropic`, `openai`, `lmstudio`, `moonshot`, `zai`, `vertex`) | `databricks` |
| `PORT` | HTTP port for proxy server | `8081` |
| `WORKSPACE_ROOT` | Workspace directory path | `process.cwd()` |
| `LOG_LEVEL` | Logging level (`error`, `warn`, `info`, `debug`) | `info` |
| `TOOL_EXECUTION_MODE` | Where tools execute (`server`, `client`) | `server` |
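
Read together, the defaults in this table collapse into a configuration object along these lines (an illustrative sketch, not Lynkr's actual `src/config/index.js`):

```javascript
// Illustrative merge of the core variables above with their documented
// defaults: each setting falls back when the env var is unset.
const config = {
  provider: process.env.MODEL_PROVIDER ?? "databricks",
  port: Number(process.env.PORT ?? 8081),
  workspaceRoot: process.env.WORKSPACE_ROOT ?? process.cwd(),
  logLevel: process.env.LOG_LEVEL ?? "info",
  toolExecutionMode: process.env.TOOL_EXECUTION_MODE ?? "server",
};

console.log(config.port); // 8081 unless PORT is set in the environment
```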

### Provider-Specific Variables

Each provider requires specific credentials and configuration. See [Provider Configuration](providers.md) for complete details.

---

## Troubleshooting

### Server Won't Start

**Issue:** `Error: MODEL_PROVIDER requires credentials`

**Solution:**
```bash
# Check your provider is configured
echo $MODEL_PROVIDER
echo $DATABRICKS_API_KEY  # or other provider key

# If empty, set them:
export MODEL_PROVIDER=databricks
export DATABRICKS_API_KEY=your-key
```

### Connection Refused

**Issue:** `ECONNREFUSED` when connecting to provider

**Solution:**
1. Check the provider URL is correct
2. Verify the API key is valid
3. Check network connectivity
4. For Ollama: ensure `ollama serve` is running

### Port Already in Use

**Issue:** `Error: listen EADDRINUSE: address already in use :::8081`

**Solution:**
```bash
# Find process using port 8081
lsof -i :8081

# Kill the process
kill -9 <PID>

# Or use a different port
export PORT=8082
lynkr start
```

### Ollama Model Not Found

**Issue:** `Error: model "llama3.1:8b" not found`

**Solution:**
```bash
# List available models
ollama list

# Pull the model
ollama pull llama3.1:8b

# Verify it's available
ollama list
```

---

## Next Steps

- **[Provider Configuration](providers.md)** - Detailed configuration for all providers
- **[Claude Code CLI Setup](claude-code-cli.md)** - Connect Claude Code CLI
- **[Cursor Integration](cursor-integration.md)** - Connect Cursor IDE
- **[Features Guide](features.md)** - Learn about advanced features
- **[Production Deployment](production.md)** - Deploy to production

---

## Getting Help

- **[Troubleshooting Guide](troubleshooting.md)** - Common issues and solutions
- **[FAQ](faq.md)** - Frequently asked questions
- **[GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** - Community Q&A
- **[GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** - Report bugs