lynkr 4.0.0 → 4.2.0

# Cursor IDE Integration Guide

Complete guide to using Cursor IDE with Lynkr for cost savings, provider flexibility, and local model support.

---

## Overview

Lynkr provides **full Cursor IDE support** through OpenAI-compatible API endpoints, enabling you to use Cursor with any provider (Databricks, Bedrock, OpenRouter, Ollama, etc.) while keeping Cursor's features intact.

### Why Use Lynkr with Cursor?

- 💰 **60-80% cost savings** vs Cursor's default GPT-4 pricing
- 🔓 **Provider choice** - Use Claude, local models, or any supported provider
- 🏠 **Self-hosted** - Full control over your AI infrastructure
- ✅ **Full compatibility** - Cursor's chat, Cmd+K edits, and @Codebase search all work (autocomplete stays on Cursor's built-in models)
- 🔒 **Privacy** - Option to run 100% locally with Ollama

---

## Quick Setup (5 Minutes)

### Step 1: Start Lynkr Server

```bash
# Navigate to Lynkr directory
cd /path/to/Lynkr

# Start with any provider (Databricks, Bedrock, OpenRouter, Ollama, etc.)
npm start

# Wait for: "Server listening at http://0.0.0.0:8081" (or your configured PORT)
```

**Note**: Lynkr runs on port **8081** by default (configured in `.env` as `PORT=8081`).

---

### Step 2: Configure Cursor

#### Detailed Configuration Steps

1. **Open Cursor Settings**
   - **Mac**: Click **Cursor** menu → **Settings** (or press `Cmd+,`)
   - **Windows/Linux**: Click **File** → **Settings** (or press `Ctrl+,`)

2. **Navigate to Models Section**
   - In the Settings sidebar, find the **Features** section
   - Click on **Models**

3. **Configure OpenAI API Settings**

   Fill in these three fields:

   **API Key:**
   ```
   sk-lynkr
   ```
   *(Cursor requires a non-empty value, but Lynkr ignores it. You can use any text, such as "dummy" or "lynkr".)*

   **Base URL:**
   ```
   http://localhost:8081/v1
   ```

   ⚠️ **Critical:**
   - Use port **8081** (or your configured PORT in `.env`)
   - **Must end with `/v1`**
   - Include the `http://` prefix
   - ✅ Correct: `http://localhost:8081/v1`
   - ❌ Wrong: `http://localhost:8081` (missing `/v1`)
   - ❌ Wrong: `localhost:8081/v1` (missing `http://`)

   **Model:**

   Choose based on your `MODEL_PROVIDER` in `.env`:
   - **Bedrock**: `claude-3.5-sonnet` or `claude-sonnet-4.5`
   - **Databricks**: `claude-sonnet-4.5`
   - **OpenRouter**: `anthropic/claude-3.5-sonnet`
   - **Ollama**: `qwen2.5-coder:latest` (or your `OLLAMA_MODEL`)
   - **Azure OpenAI**: `gpt-4o` or your deployment name
   - **OpenAI**: `gpt-4o` or your model

4. **Save Settings** (Cursor saves automatically)

#### Visual Setup Summary

```
┌─────────────────────────────────────────────────────────┐
│  Cursor Settings → Models → OpenAI API                  │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  API Key:   sk-lynkr                                    │
│             (or any non-empty value)                    │
│                                                         │
│  Base URL:  http://localhost:8081/v1                    │
│             ⚠️ Must include /v1                          │
│                                                         │
│  Model:     claude-3.5-sonnet                           │
│             (or your provider's model)                  │
│                                                         │
└─────────────────────────────────────────────────────────┘
```

---

### Step 3: Test the Integration

**Test 1: Basic Chat** (`Cmd+L` / `Ctrl+L`)
```
You: "Hello, can you see this?"
Expected: Response from your provider via Lynkr ✅
```

**Test 2: Inline Edits** (`Cmd+K` / `Ctrl+K`)
```
Select code → Press Cmd+K → "Add error handling"
Expected: Code modifications from your provider ✅
```

**Test 3: Verify Health**
```bash
curl http://localhost:8081/v1/health

# Expected response:
{
  "status": "ok",
  "provider": "bedrock",
  "openai_compatible": true,
  "cursor_compatible": true,
  "timestamp": "2026-01-11T12:00:00.000Z"
}
```
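
The same check can be scripted. A minimal sketch, assuming the response shape shown above (the actual HTTP fetch is left as a comment so the snippet stands alone):

```python
import json

def check_health(payload: dict) -> bool:
    """Return True if a /v1/health response looks Cursor-ready."""
    return (
        payload.get("status") == "ok"
        and payload.get("openai_compatible") is True
        and payload.get("cursor_compatible") is True
    )

# Sample response copied from above; in practice you would fetch it, e.g. with
# urllib.request.urlopen("http://localhost:8081/v1/health").
sample = json.loads("""{
  "status": "ok",
  "provider": "bedrock",
  "openai_compatible": true,
  "cursor_compatible": true,
  "timestamp": "2026-01-11T12:00:00.000Z"
}""")

print(check_health(sample))  # → True
```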

---

## Feature Compatibility

### What Works Without Additional Setup

| Feature | Without Embeddings | With Embeddings |
|---------|-------------------|-----------------|
| **Cmd+L chat** | ✅ Works | ✅ Works |
| **Inline autocomplete** | ✅ Works (Cursor's built-in models) | ✅ Works (Cursor's built-in models) |
| **Cmd+K edits** | ✅ Works | ✅ Works |
| **Manual @file references** | ✅ Works | ✅ Works |
| **Terminal commands** | ✅ Works | ✅ Works |
| **@Codebase semantic search** | ❌ Requires embeddings | ✅ Works |
| **Automatic context** | ❌ Requires embeddings | ✅ Works |
| **Find similar code** | ❌ Requires embeddings | ✅ Works |

### Important Notes

**Autocomplete Behavior:**
- Cursor's inline autocomplete uses Cursor's built-in models (fast, local)
- Autocomplete does NOT go through Lynkr
- Only these features use Lynkr:
  - ✅ Chat (`Cmd+L` / `Ctrl+L`)
  - ✅ Cmd+K inline edits
  - ✅ @Codebase search (with embeddings)
  - ❌ Autocomplete (uses Cursor's models)

---

## Enabling @Codebase Semantic Search

Cursor's @Codebase semantic search requires embeddings support.

### ⚡ Already Using OpenRouter?

If you configured `MODEL_PROVIDER=openrouter`, embeddings **work automatically** with the same `OPENROUTER_API_KEY` - no additional setup needed! OpenRouter handles both chat AND embeddings with one key.

### 🔧 Using a Different Provider?

If you're using Databricks, Bedrock, Ollama, or another provider for chat, add ONE of these for embeddings (ordered by privacy):

#### Option A: Ollama (100% Local - Most Private) 🔒

**Best for:** Privacy, offline work, zero cloud dependencies

```bash
# Pull embedding model
ollama pull nomic-embed-text

# Add to .env
OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
OLLAMA_EMBEDDINGS_ENDPOINT=http://localhost:11434/api/embeddings
```

**Popular models:**
- `nomic-embed-text` (768 dim, 137M params) - **Recommended**, best all-around
- `mxbai-embed-large` (1024 dim, 335M params) - Higher quality
- `all-minilm` (384 dim, 23M params) - Fastest/smallest

**Cost:** **100% FREE** 🔒
**Privacy:** All data stays on your machine

---

#### Option B: llama.cpp (100% Local - Maximum Performance) 🔒

**Best for:** Performance, GGUF models, GPU acceleration

```bash
# Download embedding model (example: nomic-embed-text GGUF)
wget https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF/resolve/main/nomic-embed-text-v1.5.Q4_K_M.gguf

# Start llama-server with embedding support enabled
./llama-server -m nomic-embed-text-v1.5.Q4_K_M.gguf --port 8080 --embedding

# Add to .env
LLAMACPP_EMBEDDINGS_ENDPOINT=http://localhost:8080/embeddings
```

**Popular models:**
- `nomic-embed-text-v1.5.Q4_K_M.gguf` - **Recommended**, 768 dim
- `all-MiniLM-L6-v2.Q4_K_M.gguf` - Smallest, fastest, 384 dim
- `bge-large-en-v1.5.Q4_K_M.gguf` - Highest quality, 1024 dim

**Cost:** **100% FREE** 🔒
**Privacy:** All data stays on your machine
**Performance:** Often faster than Ollama thanks to optimized C++

---

#### Option C: OpenRouter (Cloud - Simplest)

**Best for:** Simplicity, quality, one key for everything

```env
# Add to .env (uses the same key as chat if you're already on OpenRouter)
OPENROUTER_API_KEY=sk-or-v1-your-key
OPENROUTER_EMBEDDINGS_MODEL=openai/text-embedding-3-small
```

**Popular models:**
- `openai/text-embedding-3-small` - $0.02 per 1M tokens (80% cheaper than ada-002) **Recommended**
- `openai/text-embedding-ada-002` - $0.10 per 1M tokens (standard)
- `openai/text-embedding-3-large` - $0.13 per 1M tokens (best quality, 3072 dim)
- `voyage/voyage-code-2` - $0.12 per 1M tokens (specialized for code)

**Cost:** ~$0.01-0.10/month for typical usage
**Privacy:** Cloud-based

---

#### Option D: OpenAI (Cloud - Direct)

**Best for:** Best quality, direct OpenAI access

```env
# Add to .env
OPENAI_API_KEY=sk-your-openai-api-key
# Optionally specify model (defaults to text-embedding-ada-002)
# OPENAI_EMBEDDINGS_MODEL=text-embedding-3-small
```

**Popular models:**
- `text-embedding-3-small` - $0.02 per 1M tokens **Recommended**
- `text-embedding-ada-002` - $0.10 per 1M tokens
- `text-embedding-3-large` - $0.13 per 1M tokens (best quality)

**Cost:** ~$0.01-0.10/month for typical usage
**Privacy:** Cloud-based

---

### Embeddings Provider Override

By default, Lynkr uses the same provider as `MODEL_PROVIDER` for embeddings. To use a different provider:

```env
# Use Databricks for chat, but Ollama for embeddings (privacy + cost savings)
MODEL_PROVIDER=databricks
DATABRICKS_API_BASE=https://your-workspace.databricks.com
DATABRICKS_API_KEY=your-key

# Override embeddings provider
EMBEDDINGS_PROVIDER=ollama
OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
```

**Recommended setups:**
- **100% Local/Private**: Ollama chat + Ollama embeddings (zero cloud dependencies)
- **Hybrid**: Databricks/Bedrock chat + Ollama embeddings (private search, cloud chat)
- **Simple Cloud**: OpenRouter chat + OpenRouter embeddings (one key for both)

**After configuration, restart Lynkr** and @Codebase will work!
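
Under the hood, @Codebase-style semantic search is nearest-neighbour lookup over embedding vectors: every code chunk gets a vector, and the query's vector is compared against them by cosine similarity. A toy illustration of the principle (not Lynkr's or Cursor's actual implementation; the vectors and file names are made up, and real embeddings have 384-3072 dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embeddings for three indexed code chunks
chunks = {
    "auth.py":  [0.9, 0.1, 0.0],
    "db.py":    [0.1, 0.9, 0.1],
    "utils.py": [0.3, 0.3, 0.3],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "where is login handled?"

# Rank chunks by similarity to the query, most similar first
ranked = sorted(chunks, key=lambda name: cosine(query, chunks[name]), reverse=True)
print(ranked[0])  # → auth.py
```

This is why a better embedding model directly improves @Codebase results: the quality of the ranking depends entirely on how well the vectors capture meaning.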

---

## Available Endpoints

Lynkr implements the four OpenAI-compatible endpoints Cursor needs:

### 1. POST /v1/chat/completions

Chat with streaming support
- Handles all chat/completion requests
- Converts OpenAI format ↔ Anthropic format automatically
- Full tool calling support
- Streaming responses

### 2. GET /v1/models

List available models
- Returns models based on the configured provider
- Updates dynamically when you change providers

### 3. POST /v1/embeddings

Generate embeddings for @Codebase search
- Supports 4 providers: Ollama, llama.cpp, OpenRouter, OpenAI
- Automatic provider detection
- Fails gracefully if not configured (returns 501)

### 4. GET /v1/health

Health check
- Verify Lynkr is running
- Check provider status
- Returns status, provider info, and compatibility flags

---

## Cost Comparison

**Scenario:** 100K requests/month, typical Cursor usage

| Setup | Monthly Cost | Embeddings Setup | Features | Privacy |
|-------|--------------|------------------|----------|---------|
| **Cursor native (GPT-4)** | $20-50 | Built-in | All features | Cloud |
| **Lynkr + OpenRouter** | $5-10 | ⚡ **Same key for both** | All features, simplest setup | Cloud |
| **Lynkr + Databricks** | $15-30 | +Ollama/OpenRouter | All features | Cloud chat, local/cloud search |
| **Lynkr + Ollama + Ollama embeddings** | **100% FREE** 🔒 | Ollama (local) | All features, 100% local | 100% Local |
| **Lynkr + Ollama + llama.cpp embeddings** | **100% FREE** 🔒 | llama.cpp (local) | All features, 100% local | 100% Local |
| **Lynkr + Ollama + OpenRouter embeddings** | $0.01-0.10 | OpenRouter (cloud) | All features, hybrid | Local chat, cloud search |
| **Lynkr + Ollama (no embeddings)** | **FREE** | None | Chat/Cmd+K only, no @Codebase | 100% Local |

---

## Provider Recommendations

### Best for Privacy (100% Local) 🔒

**Ollama + Ollama embeddings**
- **Cost:** 100% FREE
- **Privacy:** All data stays on your machine
- **Features:** Full @Codebase support with local embeddings
- **Perfect for:** Sensitive codebases, offline work, privacy requirements

```env
MODEL_PROVIDER=ollama
OLLAMA_MODEL=llama3.1:8b
OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
```

---

### Best for Simplicity (Recommended for Most Users)

**OpenRouter**
- **Cost:** $5-10/month
- **Setup:** ONE key for chat + embeddings, no extra setup
- **Features:** 100+ models, automatic fallbacks
- **Perfect for:** Easy setup, flexibility, cost optimization

```env
MODEL_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-v1-your-key
OPENROUTER_MODEL=anthropic/claude-3.5-sonnet
# Embeddings work automatically with the same key!
```

---

### Best for Enterprise

**Databricks or Azure Anthropic**
- **Cost:** $15-30/month (enterprise pricing)
- **Features:** Claude Sonnet 4.5, enterprise SLA
- **Perfect for:** Production use, enterprise compliance

```env
MODEL_PROVIDER=databricks
DATABRICKS_API_BASE=https://your-workspace.databricks.com
DATABRICKS_API_KEY=your-key
# Add Ollama embeddings for privacy
OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
```

---

### Best for AWS Ecosystem

**AWS Bedrock**
- **Cost:** $10-20/month (100+ models)
- **Features:** Claude + DeepSeek + Qwen + Nova + Titan + Llama
- **Perfect for:** AWS integration, multi-model flexibility

```env
MODEL_PROVIDER=bedrock
AWS_BEDROCK_API_KEY=your-bearer-token
AWS_BEDROCK_REGION=us-east-1
AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0
```

---

### Best for Speed

**Ollama or llama.cpp**
- **Latency:** 100-500ms (local inference)
- **Cost:** 100% FREE
- **Perfect for:** Fast iteration, local development

---

## Troubleshooting

### Connection Refused or Network Error

**Symptoms:** Cursor shows connection errors and can't reach Lynkr

**Solutions:**

1. **Verify Lynkr is running:**
   ```bash
   # Check if a Lynkr process is listening on port 8081
   lsof -i :8081
   # Should show a node process
   ```

2. **Test the health endpoint:**
   ```bash
   curl http://localhost:8081/v1/health
   # Should return a JSON body with "status":"ok"
   ```

3. **Check the port number:**
   - Verify the Cursor Base URL uses the correct port: `http://localhost:8081/v1`
   - Check the `.env` file: `PORT=8081`
   - If you changed PORT, update the Cursor settings to match

4. **Verify the URL format:**
   - ✅ Correct: `http://localhost:8081/v1`
   - ❌ Wrong: `http://localhost:8081` (missing `/v1`)
   - ❌ Wrong: `localhost:8081/v1` (missing `http://`)

---

### Invalid API Key or Unauthorized

**Symptoms:** Cursor says the API key is invalid

**Solutions:**
- Lynkr doesn't validate API keys from Cursor
- This error usually means Cursor isn't reaching Lynkr at all
- Double-check the Base URL in Cursor: `http://localhost:8081/v1`
- Make sure you included `/v1` at the end
- Try clearing and re-entering the Base URL

---

### Model Not Found or Invalid Model

**Symptoms:** Cursor can't find the model you specified

**Solutions:**

1. **Match the model name to your provider:**
   - **Bedrock**: Use `claude-3.5-sonnet` or `claude-sonnet-4.5`
   - **Databricks**: Use `claude-sonnet-4.5`
   - **OpenRouter**: Use `anthropic/claude-3.5-sonnet`
   - **Ollama**: Use your actual model name, such as `qwen2.5-coder:latest`

2. **Try generic names:**
   - Lynkr translates generic names, so try:
     - `claude-3.5-sonnet`
     - `gpt-4o`
   - These work across most providers

3. **Check the provider logs:**
   - In the Lynkr terminal, look for "Unknown model" errors

---

### @Codebase Doesn't Work

**Symptoms:** @Codebase doesn't return results or shows an error

**Solutions:**

1. **Verify embeddings are configured:**
   ```bash
   curl http://localhost:8081/v1/embeddings \
     -H "Content-Type: application/json" \
     -d '{"input":"test","model":"text-embedding-ada-002"}'

   # Should return embeddings, not a 501 error
   ```

2. **Check the embeddings provider:**
   ```env
   # In .env, verify one of these is set:
   OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
   # OR
   LLAMACPP_EMBEDDINGS_ENDPOINT=http://localhost:8080/embeddings
   # OR
   OPENROUTER_API_KEY=sk-or-v1-your-key
   # OR
   OPENAI_API_KEY=sk-your-key
   ```

3. **Restart Lynkr** after adding the embeddings config

4. **If embeddings respond but @Codebase still fails**, it's likely a Cursor indexing issue, not Lynkr:
   - Cursor needs to re-index your codebase
   - Try closing and reopening the workspace

---

### Slow Responses

**Symptoms:** Responses take 5+ seconds

**Solutions:**

1. **Check provider latency:**
   - **Local** (Ollama/llama.cpp): Should be 100-500ms
   - **Cloud** (OpenRouter/Databricks): Should be 500ms-2s
   - **Distant regions**: Can be 2-5s

2. **Enable hybrid routing** for speed:
   ```env
   # Use Ollama for simple requests (fast),
   # cloud for complex requests
   PREFER_OLLAMA=true
   FALLBACK_ENABLED=true
   ```

3. **Check the Lynkr logs:**
   - Look for actual response times
   - Example: `Response time: 2500ms`

---

### Embeddings Work But Search Results Are Poor

**Symptoms:** @Codebase returns irrelevant files

**Solutions:**

1. **Try a better embedding model:**
   ```bash
   # For Ollama - upgrade to a larger model
   ollama pull mxbai-embed-large  # Better quality than nomic-embed-text
   ```
   ```env
   OLLAMA_EMBEDDINGS_MODEL=mxbai-embed-large
   ```

2. **Use cloud embeddings for better quality:**
   ```env
   # OpenRouter offers high-quality embedding models
   OPENROUTER_API_KEY=sk-or-v1-your-key
   OPENROUTER_EMBEDDINGS_MODEL=voyage/voyage-code-2
   ```

3. **It may be a Cursor indexing issue, not Lynkr:**
   - Cursor needs to re-index your codebase
   - Try closing and reopening the workspace

---

### Too Many Requests or Rate Limiting

**Symptoms:** The provider returns 429 errors

**Solutions:**

1. **Enable a fallback provider:**
   ```env
   FALLBACK_ENABLED=true
   FALLBACK_PROVIDER=databricks
   ```

2. **Switch to Ollama** (no rate limits):
   ```env
   MODEL_PROVIDER=ollama
   OLLAMA_MODEL=llama3.1:8b
   ```

3. **Use OpenRouter** (pooled rate limits across providers):
   ```env
   MODEL_PROVIDER=openrouter
   ```

---

### Enable Debug Logging

For detailed troubleshooting:

```bash
# In .env
LOG_LEVEL=debug

# Restart Lynkr
npm start

# Check the logs for detailed request/response info
```

---

## Architecture

```
Cursor IDE
    ↓ OpenAI API format
Lynkr Proxy
    ↓ Converts to Anthropic format
Your Provider (Databricks/Bedrock/OpenRouter/Ollama/etc.)
    ↓ Returns response
Lynkr Proxy
    ↓ Converts back to OpenAI format
Cursor IDE (displays result)
```
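
The conversion step in the middle is conceptually straightforward: OpenAI puts the system prompt inside the `messages` array, while Anthropic's Messages API takes it as a top-level `system` field and requires `max_tokens`. A heavily simplified sketch of that direction of the translation (illustrative only — the real proxy also handles tool calls, streaming chunks, and structured content blocks):

```python
def openai_to_anthropic(body: dict) -> dict:
    """Translate an OpenAI-style chat request into Anthropic's shape (simplified)."""
    system = None
    messages = []
    for m in body["messages"]:
        if m["role"] == "system":
            system = m["content"]  # Anthropic takes system as a top-level field
        else:
            messages.append({"role": m["role"], "content": m["content"]})
    out = {
        "model": body["model"],
        "max_tokens": body.get("max_tokens", 4096),  # required by Anthropic
        "messages": messages,
    }
    if system is not None:
        out["system"] = system
    return out

req = {
    "model": "claude-3.5-sonnet",
    "messages": [
        {"role": "system", "content": "You are terse."},
        {"role": "user", "content": "Hello"},
    ],
}
print(openai_to_anthropic(req)["system"])  # → You are terse.
```

The reverse direction (Anthropic response back to an OpenAI `chat.completion` object) is the mirror image of the same mapping.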

---

## Advanced Configuration Examples

### Setup 1: Simplest (One Key for Everything - OpenRouter)

```env
# Chat + Embeddings: OpenRouter handles both with ONE key
MODEL_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-v1-your-key-here

# Done! Everything works with one key
```

**Benefits:**
- ✅ ONE key for chat + embeddings
- ✅ 100+ models available
- ✅ Automatic fallbacks
- ✅ Competitive pricing

---

### Setup 2: Privacy-First (100% Local)

```env
# Chat: Ollama (local)
MODEL_PROVIDER=ollama
OLLAMA_MODEL=llama3.1:8b

# Embeddings: Ollama (local)
OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text

# Everything runs on your machine, zero cloud dependencies
```

**Benefits:**
- ✅ 100% FREE
- ✅ 100% private (all data stays local)
- ✅ Works offline
- ✅ Full @Codebase support

---

### Setup 3: Hybrid (Best of Both Worlds)

```env
# Chat: Ollama for simple requests, Databricks for complex ones
PREFER_OLLAMA=true
FALLBACK_ENABLED=true
OLLAMA_MODEL=llama3.1:8b

# Fallback to Databricks for complex requests
FALLBACK_PROVIDER=databricks
DATABRICKS_API_BASE=https://your-workspace.databricks.com
DATABRICKS_API_KEY=your-key

# Embeddings: Ollama (local, private)
OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text

# Cost: mostly FREE (Ollama handles 70-80% of requests);
# only complex, tool-heavy requests go to Databricks
```

**Benefits:**
- ✅ Mostly FREE (70-80% of requests on Ollama)
- ✅ Private embeddings (local search)
- ✅ Cloud quality for complex tasks
- ✅ Automatic intelligent routing
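
The routing decision in a hybrid setup boils down to a heuristic along these lines (an illustration of the idea only, not Lynkr's actual routing code; the length threshold and provider names are invented for the example):

```python
def pick_provider(prompt: str, has_tools: bool, prefer_ollama: bool = True) -> str:
    """Route simple requests to the local model, complex ones to the cloud fallback."""
    # Treat tool-heavy or very long requests as "complex" (threshold is made up)
    complex_request = has_tools or len(prompt) > 2000
    if prefer_ollama and not complex_request:
        return "ollama"        # local, free, fast
    return "databricks"        # the configured FALLBACK_PROVIDER

print(pick_provider("Rename this variable", has_tools=False))  # → ollama
print(pick_provider("Refactor the module", has_tools=True))    # → databricks
```

Because most editor interactions are short, simple requests, a heuristic like this keeps the bulk of traffic on the free local model while reserving the cloud provider for the hard cases.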
700
+
701
+ ---
702
+
703
+ ## Cursor vs Native Comparison
704
+
705
+ | Aspect | Cursor Native | Lynkr + Cursor |
706
+ |--------|---------------|----------------|
707
+ | **Providers** | OpenAI only | 9+ providers (Bedrock, Databricks, OpenRouter, Ollama, llama.cpp, etc.) |
708
+ | **Costs** | OpenAI pricing | 60-80% cheaper (or 100% FREE with Ollama) |
709
+ | **Privacy** | Cloud-only | Can run 100% locally (Ollama + local embeddings) |
710
+ | **Embeddings** | Built-in (cloud) | 4 options: Ollama (local), llama.cpp (local), OpenRouter (cloud), OpenAI (cloud) |
711
+ | **Control** | Black box | Full observability, logs, metrics |
712
+ | **Features** | All Cursor features | All Cursor features (chat, Cmd+K, @Codebase) |
713
+ | **Flexibility** | Fixed setup | Mix providers (e.g., Bedrock chat + Ollama embeddings) |
714
+
715
+ ---
716
+
717
+ ## Next Steps
718
+
719
+ - **[Embeddings Configuration](embeddings.md)** - Detailed embeddings setup guide
720
+ - **[Provider Configuration](providers.md)** - Configure all providers
721
+ - **[Installation Guide](installation.md)** - Install Lynkr
722
+ - **[Troubleshooting](troubleshooting.md)** - More troubleshooting tips
723
+ - **[FAQ](faq.md)** - Frequently asked questions
724
+
725
+ ---
726
+
727
+ ## Getting Help
728
+
729
+ - **[GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** - Community Q&A
730
+ - **[GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** - Report bugs
731
+ - **[Troubleshooting Guide](troubleshooting.md)** - Common issues and solutions