lynkr 7.2.5 → 8.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (124)
  1. package/README.md +3 -3
  2. package/config/model-tiers.json +89 -0
  3. package/install.sh +6 -1
  4. package/package.json +4 -2
  5. package/scripts/setup.js +0 -1
  6. package/src/agents/executor.js +14 -6
  7. package/src/api/middleware/session.js +15 -2
  8. package/src/api/openai-router.js +162 -37
  9. package/src/api/providers-handler.js +15 -1
  10. package/src/api/router.js +107 -2
  11. package/src/budget/index.js +4 -3
  12. package/src/clients/databricks.js +431 -234
  13. package/src/clients/gpt-utils.js +181 -0
  14. package/src/clients/ollama-utils.js +66 -140
  15. package/src/clients/routing.js +0 -1
  16. package/src/clients/standard-tools.js +99 -3
  17. package/src/config/index.js +133 -35
  18. package/src/context/toon.js +173 -0
  19. package/src/logger/index.js +23 -0
  20. package/src/orchestrator/index.js +688 -213
  21. package/src/routing/agentic-detector.js +320 -0
  22. package/src/routing/complexity-analyzer.js +202 -2
  23. package/src/routing/cost-optimizer.js +305 -0
  24. package/src/routing/index.js +168 -159
  25. package/src/routing/model-tiers.js +365 -0
  26. package/src/server.js +4 -14
  27. package/src/sessions/cleanup.js +3 -3
  28. package/src/sessions/record.js +10 -1
  29. package/src/sessions/store.js +7 -2
  30. package/src/tools/agent-task.js +48 -1
  31. package/src/tools/index.js +19 -2
  32. package/src/tools/lazy-loader.js +7 -0
  33. package/src/tools/tinyfish.js +358 -0
  34. package/src/tools/truncate.js +1 -0
  35. package/.github/FUNDING.yml +0 -15
  36. package/.github/workflows/README.md +0 -215
  37. package/.github/workflows/ci.yml +0 -69
  38. package/.github/workflows/index.yml +0 -62
  39. package/.github/workflows/web-tools-tests.yml +0 -56
  40. package/CITATIONS.bib +0 -6
  41. package/CLAWROUTER_ROUTING_PLAN.md +0 -910
  42. package/DEPLOYMENT.md +0 -1001
  43. package/LYNKR-TUI-PLAN.md +0 -984
  44. package/PERFORMANCE-REPORT.md +0 -866
  45. package/PLAN-per-client-model-routing.md +0 -252
  46. package/ROUTER_COMPARISON.md +0 -173
  47. package/TIER_ROUTING_PLAN.md +0 -771
  48. package/docs/42642f749da6234f41b6b425c3bb07c9.txt +0 -1
  49. package/docs/BingSiteAuth.xml +0 -4
  50. package/docs/docs-style.css +0 -478
  51. package/docs/docs.html +0 -197
  52. package/docs/google5be250e608e6da39.html +0 -1
  53. package/docs/index.html +0 -577
  54. package/docs/index.md +0 -577
  55. package/docs/robots.txt +0 -4
  56. package/docs/sitemap.xml +0 -44
  57. package/docs/style.css +0 -1223
  58. package/documentation/README.md +0 -100
  59. package/documentation/api.md +0 -806
  60. package/documentation/claude-code-cli.md +0 -672
  61. package/documentation/codex-cli.md +0 -397
  62. package/documentation/contributing.md +0 -571
  63. package/documentation/cursor-integration.md +0 -731
  64. package/documentation/docker.md +0 -867
  65. package/documentation/embeddings.md +0 -760
  66. package/documentation/faq.md +0 -659
  67. package/documentation/features.md +0 -396
  68. package/documentation/headroom.md +0 -519
  69. package/documentation/installation.md +0 -706
  70. package/documentation/memory-system.md +0 -476
  71. package/documentation/production.md +0 -601
  72. package/documentation/providers.md +0 -906
  73. package/documentation/testing.md +0 -629
  74. package/documentation/token-optimization.md +0 -323
  75. package/documentation/tools.md +0 -697
  76. package/documentation/troubleshooting.md +0 -893
  77. package/final-test.js +0 -33
  78. package/headroom-sidecar/config.py +0 -93
  79. package/headroom-sidecar/requirements.txt +0 -14
  80. package/headroom-sidecar/server.py +0 -451
  81. package/monitor-agents.sh +0 -31
  82. package/scripts/audit-log-reader.js +0 -399
  83. package/scripts/compact-dictionary.js +0 -204
  84. package/scripts/test-deduplication.js +0 -448
  85. package/src/db/database.sqlite +0 -0
  86. package/test/README.md +0 -212
  87. package/test/azure-openai-config.test.js +0 -204
  88. package/test/azure-openai-error-resilience.test.js +0 -238
  89. package/test/azure-openai-format-conversion.test.js +0 -354
  90. package/test/azure-openai-integration.test.js +0 -281
  91. package/test/azure-openai-routing.test.js +0 -177
  92. package/test/azure-openai-streaming.test.js +0 -171
  93. package/test/bedrock-integration.test.js +0 -471
  94. package/test/comprehensive-test-suite.js +0 -928
  95. package/test/config-validation.test.js +0 -207
  96. package/test/cursor-integration.test.js +0 -484
  97. package/test/format-conversion.test.js +0 -578
  98. package/test/hybrid-routing-integration.test.js +0 -254
  99. package/test/hybrid-routing-performance.test.js +0 -418
  100. package/test/llamacpp-integration.test.js +0 -863
  101. package/test/lmstudio-integration.test.js +0 -335
  102. package/test/memory/extractor.test.js +0 -398
  103. package/test/memory/retriever.test.js +0 -613
  104. package/test/memory/retriever.test.js.bak +0 -585
  105. package/test/memory/search.test.js +0 -537
  106. package/test/memory/search.test.js.bak +0 -389
  107. package/test/memory/store.test.js +0 -344
  108. package/test/memory/store.test.js.bak +0 -312
  109. package/test/memory/surprise.test.js +0 -300
  110. package/test/memory-performance.test.js +0 -472
  111. package/test/openai-integration.test.js +0 -686
  112. package/test/openrouter-error-resilience.test.js +0 -418
  113. package/test/passthrough-mode.test.js +0 -385
  114. package/test/performance-benchmark.js +0 -351
  115. package/test/performance-tests.js +0 -528
  116. package/test/routing.test.js +0 -219
  117. package/test/web-tools.test.js +0 -329
  118. package/test-agents-simple.js +0 -43
  119. package/test-cli-connection.sh +0 -33
  120. package/test-learning-unit.js +0 -126
  121. package/test-learning.js +0 -112
  122. package/test-parallel-agents.sh +0 -124
  123. package/test-parallel-direct.js +0 -155
  124. package/test-subagents.sh +0 -117
package/documentation/troubleshooting.md
@@ -1,893 +0,0 @@
- # Troubleshooting Guide
-
- Common issues and solutions for Lynkr, Claude Code CLI, and Cursor IDE integration.
-
- ---
-
- ## Quick Diagnosis
-
- ### Is Lynkr Running?
-
- ```bash
- # Check if Lynkr is running on port 8081
- lsof -i :8081
- # Should show node process
-
- # Test health endpoint
- curl http://localhost:8081/health/live
- # Should return: {"status":"ok"}
- ```
-
- ### Are Environment Variables Set?
-
- ```bash
- # Check core configuration
- echo $MODEL_PROVIDER
- echo $ANTHROPIC_BASE_URL # For Claude CLI
-
- # Check provider-specific
- echo $DATABRICKS_API_KEY
- echo $AWS_BEDROCK_API_KEY
- echo $OPENROUTER_API_KEY
- ```
-
- ### Enable Debug Logging
-
- ```bash
- # In .env or export
- export LOG_LEVEL=debug
-
- # Restart Lynkr
- lynkr start
-
- # Check logs for detailed info
- ```
-
- ---
-
- ## Lynkr Server Issues
-
- ### Server Won't Start
-
- **Issue:** Lynkr fails to start
-
- **Symptoms:**
- ```
- Error: MODEL_PROVIDER requires credentials
- Error: Port 8081 already in use
- Error: Cannot find module 'xxx'
- ```
-
- **Solutions:**
-
- 1. **Missing credentials:**
- ```bash
- # Check provider is configured
- echo $MODEL_PROVIDER
- echo $DATABRICKS_API_KEY # or other provider key
-
- # If empty, set them:
- export MODEL_PROVIDER=databricks
- export DATABRICKS_API_KEY=your-key
- ```
-
- 2. **Port already in use:**
- ```bash
- # Find process using port 8081
- lsof -i :8081
-
- # Kill the process
- kill -9 <PID>
-
- # Or use different port
- export PORT=8082
- lynkr start
- ```
-
- 3. **Missing dependencies:**
- ```bash
- # Reinstall dependencies
- npm install
-
- # Or for global install
- npm install -g lynkr --force
- ```
-
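The three failure modes above can be checked in one pass before running `lynkr start`. A minimal preflight sketch — the `preflight` helper is illustrative, not a Lynkr command:

```bash
#!/bin/sh
# Illustrative preflight helper (not part of Lynkr): checks the common
# "server won't start" causes above before running `lynkr start`.
preflight() {
  # Missing credentials: a provider must be selected
  [ -n "$MODEL_PROVIDER" ] || { echo "MODEL_PROVIDER is unset"; return 1; }

  # Port already in use (skip the check if lsof is unavailable)
  port="${PORT:-8081}"
  if command -v lsof >/dev/null 2>&1 && lsof -t -i ":$port" >/dev/null 2>&1; then
    echo "port $port is already in use"
    return 1
  fi

  echo "preflight ok (provider=$MODEL_PROVIDER, port=$port)"
}
```

Run `preflight && lynkr start` to fail fast with a readable message instead of a startup stack trace.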
- ---
-
- ### Connection Refused
-
- **Issue:** `ECONNREFUSED` when connecting to Lynkr
-
- **Symptoms:**
- - Claude CLI: `Connection refused`
- - Cursor: `Network error`
-
- **Solutions:**
-
- 1. **Verify Lynkr is running:**
- ```bash
- lsof -i :8081
- # Should show node process
- ```
-
- 2. **Check port number:**
- ```bash
- # For Claude CLI
- echo $ANTHROPIC_BASE_URL
- # Should be: http://localhost:8081
-
- # For Cursor
- # Check Base URL in settings: http://localhost:8081/v1
- ```
-
- 3. **Test health endpoint:**
- ```bash
- curl http://localhost:8081/health/live
- # Should return: {"status":"ok"}
- ```
-
- 4. **Check firewall:**
- ```bash
- # macOS: System Preferences → Security & Privacy → Firewall
- # Allow incoming connections for node
- ```
-
- ---
-
- ### High Memory Usage
-
- **Issue:** Lynkr consuming too much memory
-
- **Symptoms:**
- - Memory usage > 2GB
- - System slowdown
- - Crashes due to OOM
-
- **Solutions:**
-
- 1. **Enable load shedding:**
- ```bash
- export LOAD_SHEDDING_MEMORY_THRESHOLD=0.85
- export LOAD_SHEDDING_HEAP_THRESHOLD=0.90
- ```
-
- 2. **Reduce cache size:**
- ```bash
- export PROMPT_CACHE_MAX_ENTRIES=32 # Default: 64
- export MEMORY_MAX_COUNT=5000 # Default: 10000
- ```
-
- 3. **Restart Lynkr periodically:**
- ```bash
- # Use process manager like PM2
- npm install -g pm2
- pm2 start lynkr --name lynkr --max-memory-restart 1G
- ```
-
- ---
-
- ## Provider Issues
-
- ### AWS Bedrock
-
- **Issue:** Authentication failed
-
- **Symptoms:**
- - `401 Unauthorized`
- - `Invalid API key`
-
- **Solutions:**
-
- 1. **Check API key format:**
- ```bash
- # Should be bearer token, not Access Key ID
- echo $AWS_BEDROCK_API_KEY
- # Format: Should look like a long random string
- ```
-
- 2. **Regenerate API key:**
- - AWS Console → Bedrock → API Keys
- - Generate new key
- - Update environment variable
-
- 3. **Check model access:**
- - AWS Console → Bedrock → Model access
- - Request access to Claude models
- - Wait for approval (can take 5-10 minutes)
-
- **Issue:** Model not found
-
- **Symptoms:**
- - `Model not available in region`
- - `Access denied to model`
-
- **Solutions:**
-
- 1. **Use correct model ID:**
- ```bash
- # Claude 3.5 Sonnet
- export AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0
-
- # Claude 4.5 Sonnet (requires inference profile)
- export AWS_BEDROCK_MODEL_ID=us.anthropic.claude-sonnet-4-5-20250929-v1:0
- ```
-
- 2. **Check region:**
- ```bash
- # Not all models available in all regions
- export AWS_BEDROCK_REGION=us-east-1 # Most models available here
- ```
-
- ---
-
- ### Databricks
-
- **Issue:** Authentication failed
-
- **Symptoms:**
- - `401 Unauthorized`
- - `Invalid token`
-
- **Solutions:**
-
- 1. **Check token format:**
- ```bash
- echo $DATABRICKS_API_KEY
- # Should start with: dapi...
- ```
-
- 2. **Regenerate PAT:**
- - Databricks workspace → Settings → User Settings
- - Generate new token
- - Copy and update environment variable
-
- 3. **Check workspace URL:**
- ```bash
- echo $DATABRICKS_API_BASE
- # Should be: https://your-workspace.cloud.databricks.com
- # No trailing slash
- ```
-
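The token-format and workspace-URL checks above can be bundled into one shell function. A sketch — the `check_databricks_env` name is ours, not a Lynkr or Databricks command; it only restates the rules above (token starts with `dapi`, https URL, no trailing slash):

```bash
#!/bin/sh
# Illustrative sanity check (not part of Lynkr) for the two Databricks
# settings above: token format and workspace URL shape.
check_databricks_env() {
  case "$DATABRICKS_API_KEY" in
    dapi*) ;;  # PATs start with "dapi"
    *) echo "DATABRICKS_API_KEY should start with 'dapi'"; return 1 ;;
  esac
  case "$DATABRICKS_API_BASE" in
    */)        echo "DATABRICKS_API_BASE must not have a trailing slash"; return 1 ;;
    https://*) ;;  # looks like a workspace URL
    *)         echo "DATABRICKS_API_BASE should be an https:// workspace URL"; return 1 ;;
  esac
  echo "ok"
}
```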
- **Issue:** Endpoint not found
-
- **Symptoms:**
- - `404 Not Found`
- - `Endpoint does not exist`
-
- **Solutions:**
-
- 1. **Check endpoint name:**
- ```bash
- # Default endpoint path
- export DATABRICKS_ENDPOINT_PATH=/serving-endpoints/databricks-claude-sonnet-4-5/invocations
-
- # Or customize for your endpoint
- export DATABRICKS_ENDPOINT_PATH=/serving-endpoints/your-endpoint-name/invocations
- ```
-
- 2. **Verify endpoint exists:**
- - Databricks workspace → Serving → Endpoints
- - Check endpoint name matches
-
- ---
-
- ### OpenRouter
-
- **Issue:** Rate limiting
-
- **Symptoms:**
- - `429 Too Many Requests`
- - `Rate limit exceeded`
-
- **Solutions:**
-
- 1. **Add credits:**
- - Visit openrouter.ai/account
- - Add more credits
-
- 2. **Switch models:**
- ```bash
- # Use cheaper model
- export OPENROUTER_MODEL=openai/gpt-4o-mini # Faster, cheaper
- ```
-
- 3. **Enable fallback:**
- ```bash
- export FALLBACK_ENABLED=true
- export FALLBACK_PROVIDER=databricks
- ```
-
- **Issue:** Model not found
-
- **Symptoms:**
- - `Invalid model`
- - `Model not available`
-
- **Solutions:**
-
- 1. **Check model name format:**
- ```bash
- # Must include provider prefix
- export OPENROUTER_MODEL=anthropic/claude-3.5-sonnet # Correct
- # NOT: claude-3.5-sonnet (missing provider)
- ```
-
- 2. **Verify model exists:**
- - Visit openrouter.ai/models
- - Check model is available
-
- ---
-
- ### Ollama
-
- **Issue:** Connection refused
-
- **Symptoms:**
- - `ECONNREFUSED`
- - `Cannot connect to Ollama`
-
- **Solutions:**
-
- 1. **Start Ollama service:**
- ```bash
- ollama serve
- # Leave running in separate terminal
- ```
-
- 2. **Check endpoint:**
- ```bash
- echo $OLLAMA_ENDPOINT
- # Should be: http://localhost:11434
-
- # Test endpoint
- curl http://localhost:11434/api/tags
- # Should return JSON with models
- ```
-
- **Issue:** Model not found
-
- **Symptoms:**
- - `Error: model "llama3.1:8b" not found`
-
- **Solutions:**
-
- 1. **Pull the model:**
- ```bash
- ollama pull llama3.1:8b
- ```
-
- 2. **List available models:**
- ```bash
- ollama list
- ```
-
- 3. **Verify model name:**
- ```bash
- echo $OLLAMA_MODEL
- # Should match model from `ollama list`
- ```
-
- **Issue:** Poor tool calling
-
- **Symptoms:**
- - Tools not invoked correctly
- - Malformed tool calls
-
- **Solutions:**
-
- 1. **Use tool-capable model:**
- ```bash
- # Good tool calling
- ollama pull llama3.1:8b # Recommended
- ollama pull qwen2.5:14b # Better (7b struggles)
- ollama pull mistral:7b-instruct
-
- # Poor tool calling
- # Avoid: qwen2.5-coder, codellama
- ```
-
- 2. **Upgrade to larger model:**
- ```bash
- ollama pull qwen2.5:14b # Better than 7b for tools
- ```
-
- 3. **Enable fallback:**
- ```bash
- export FALLBACK_ENABLED=true
- export FALLBACK_PROVIDER=databricks
- ```
-
- ---
-
- ### llama.cpp
-
- **Issue:** Server not responding
-
- **Symptoms:**
- - `ECONNREFUSED`
- - `Connection timeout`
-
- **Solutions:**
-
- 1. **Start llama-server:**
- ```bash
- cd llama.cpp
- ./llama-server -m model.gguf --port 8080
- ```
-
- 2. **Check endpoint:**
- ```bash
- echo $LLAMACPP_ENDPOINT
- # Should be: http://localhost:8080
-
- curl http://localhost:8080/health
- # Should return: {"status":"ok"}
- ```
-
- **Issue:** Out of memory
-
- **Symptoms:**
- - Server crashes
- - `Failed to allocate memory`
-
- **Solutions:**
-
- 1. **Use smaller quantization:**
- ```bash
- # Q4 = 4-bit quantization (smaller, faster)
- # Q8 = 8-bit quantization (larger, better quality)
-
- # Download Q4 model instead of Q8
- wget https://huggingface.co/.../model.Q4_K_M.gguf # Smaller
- ```
-
- 2. **Reduce context size:**
- ```bash
- ./llama-server -m model.gguf --ctx-size 2048 # Default: 4096
- ```
-
- 3. **Enable GPU offloading:**
- ```bash
- # For NVIDIA
- ./llama-server -m model.gguf --n-gpu-layers 32
-
- # For Apple Silicon
- ./llama-server -m model.gguf --n-gpu-layers 32
- ```
-
- ---
-
- ## Claude Code CLI Issues
-
- ### CLI Can't Connect
-
- **Issue:** Claude CLI can't reach Lynkr
-
- **Symptoms:**
- - `Connection refused`
- - `Failed to connect to Anthropic API`
-
- **Solutions:**
-
- 1. **Check environment variables:**
- ```bash
- echo $ANTHROPIC_BASE_URL
- # Should be: http://localhost:8081
-
- echo $ANTHROPIC_API_KEY
- # Can be any value: dummy, test, etc.
- ```
-
- 2. **Set permanently:**
- ```bash
- # Add to ~/.bashrc or ~/.zshrc
- export ANTHROPIC_BASE_URL=http://localhost:8081
- export ANTHROPIC_API_KEY=dummy
-
- # Reload
- source ~/.bashrc
- ```
-
- 3. **Test Lynkr:**
- ```bash
- curl http://localhost:8081/health/live
- # Should return: {"status":"ok"}
- ```
-
- ---
-
- ### Tools Not Working
-
- **Issue:** File/Bash tools fail
-
- **Symptoms:**
- - `Tool execution failed`
- - `Permission denied`
- - Tools return errors
-
- **Solutions:**
-
- 1. **Check tool execution mode:**
- ```bash
- echo $TOOL_EXECUTION_MODE
- # Should be: server (default) or client
- ```
-
- 2. **Check workspace root:**
- ```bash
- echo $WORKSPACE_ROOT
- # Should be valid directory
-
- # Verify permissions
- ls -la $WORKSPACE_ROOT
- ```
-
- 3. **For server mode:**
- ```bash
- # Lynkr needs read/write access to workspace
- chmod -R u+rw $WORKSPACE_ROOT
- ```
-
- 4. **Switch to client mode:**
- ```bash
- # Tools execute on CLI side
- export TOOL_EXECUTION_MODE=client
- lynkr start
- ```
-
- ---
-
- ### Slow Responses
-
- **Issue:** Responses take 5+ seconds
-
- **Solutions:**
-
- 1. **Check provider latency:**
- ```bash
- # In Lynkr logs, look for:
- # "Response time: 2500ms"
- ```
-
- 2. **Use local provider:**
- ```bash
- export MODEL_PROVIDER=ollama
- export OLLAMA_MODEL=llama3.1:8b
- ```
-
- 3. **Enable hybrid routing:**
- ```bash
- export PREFER_OLLAMA=true
- export FALLBACK_ENABLED=true
- ```
-
- ---
-
- ## Cursor IDE Issues
-
- ### Can't Connect to Lynkr
-
- **Issue:** Cursor shows connection errors
-
- **Solutions:**
-
- 1. **Check Base URL:**
- - Cursor Settings → Models → Base URL
- - ✅ Correct: `http://localhost:8081/v1`
- - ❌ Wrong: `http://localhost:8081` (missing `/v1`)
-
- 2. **Verify port:**
- ```bash
- echo $PORT
- # Should match Cursor Base URL port
- ```
-
- 3. **Test endpoint:**
- ```bash
- curl http://localhost:8081/v1/health
- # Should return: {"status":"ok"}
- ```
-
- ---
-
- ### @Codebase Doesn't Work
-
- **Issue:** @Codebase search returns no results
-
- **Solutions:**
-
- 1. **Check embeddings configured:**
- ```bash
- curl http://localhost:8081/v1/embeddings \
- -H "Content-Type: application/json" \
- -d '{"input":"test","model":"text-embedding-ada-002"}'
-
- # Should return embeddings, not 501
- ```
-
- 2. **Configure embeddings:**
- ```bash
- # Option A: Ollama (local, free)
- ollama pull nomic-embed-text
- export OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
-
- # Option B: OpenRouter (cloud)
- export OPENROUTER_API_KEY=sk-or-v1-your-key
-
- # Option C: OpenAI (cloud)
- export OPENAI_API_KEY=sk-your-key
- ```
-
- 3. **Restart Lynkr** after adding embeddings config
-
- 4. **Restart Cursor** to re-index codebase
-
- See [Embeddings Guide](embeddings.md) for details.
-
- ---
-
- ### Poor Search Results
-
- **Issue:** @Codebase returns irrelevant files
-
- **Solutions:**
-
- 1. **Upgrade embedding model:**
- ```bash
- # Ollama: Use larger model
- ollama pull mxbai-embed-large
- export OLLAMA_EMBEDDINGS_MODEL=mxbai-embed-large
-
- # OpenRouter: Use code-specialized model
- export OPENROUTER_EMBEDDINGS_MODEL=voyage/voyage-code-2
- ```
-
- 2. **Switch to cloud embeddings:**
- - Local (Ollama/llama.cpp): Good
- - Cloud (OpenRouter/OpenAI): Excellent
-
- 3. **Re-index workspace:**
- - Close and reopen workspace in Cursor
-
- ---
-
- ### Model Not Found
-
- **Issue:** Cursor can't find model
-
- **Solutions:**
-
- 1. **Match model to provider:**
- - Bedrock: `claude-3.5-sonnet`
- - Databricks: `claude-sonnet-4.5`
- - OpenRouter: `anthropic/claude-3.5-sonnet`
- - Ollama: `llama3.1:8b` (actual model name)
-
- 2. **Try generic names:**
- - `claude-3.5-sonnet`
- - `gpt-4o`
- - Lynkr translates these across providers
-
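The provider-to-model mapping above is easy to misremember when switching backends. A tiny lookup sketch — the `model_name_for` helper is illustrative only and simply restates the list above:

```bash
#!/bin/sh
# Illustrative lookup (not part of Lynkr): the model-name format each
# provider expects, per the "Match model to provider" list above.
model_name_for() {
  case "$1" in
    bedrock)    echo "claude-3.5-sonnet" ;;
    databricks) echo "claude-sonnet-4.5" ;;
    openrouter) echo "anthropic/claude-3.5-sonnet" ;;  # provider prefix required
    ollama)     echo "llama3.1:8b" ;;                  # actual pulled model name
    *)          echo "unknown provider: $1" >&2; return 1 ;;
  esac
}
```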
- ---
-
- ## Embeddings Issues
-
- ### 501 Not Implemented
-
- **Issue:** Embeddings endpoint returns 501
-
- **Symptoms:**
- ```bash
- curl http://localhost:8081/v1/embeddings
- # Returns: {"error":"Embeddings not configured"}
- ```
-
- **Solutions:**
-
- Configure ONE embeddings provider:
-
- ```bash
- # Option A: Ollama
- ollama pull nomic-embed-text
- export OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
-
- # Option B: llama.cpp
- export LLAMACPP_EMBEDDINGS_ENDPOINT=http://localhost:8080/embeddings
-
- # Option C: OpenRouter
- export OPENROUTER_API_KEY=sk-or-v1-your-key
-
- # Option D: OpenAI
- export OPENAI_API_KEY=sk-your-key
- ```
-
- Restart Lynkr after configuration.
-
- ---
-
- ### Ollama Embeddings Connection Refused
-
- **Issue:** Can't connect to Ollama embeddings
-
- **Solutions:**
-
- 1. **Verify Ollama is running:**
- ```bash
- curl http://localhost:11434/api/tags
- # Should return models list
- ```
-
- 2. **Check model is pulled:**
- ```bash
- ollama list
- # Should show: nomic-embed-text
- ```
-
- 3. **Test embeddings:**
- ```bash
- curl http://localhost:11434/api/embeddings \
- -d '{"model":"nomic-embed-text","prompt":"test"}'
- # Should return embedding vector
- ```
-
- ---
-
- ## Performance Issues
-
- ### High CPU Usage
-
- **Issue:** Lynkr consuming 100% CPU
-
- **Solutions:**
-
- 1. **Reduce concurrent requests:**
- ```bash
- export LOAD_SHEDDING_ACTIVE_REQUESTS_THRESHOLD=100
- ```
-
- 2. **Use local provider for simple requests:**
- ```bash
- export PREFER_OLLAMA=true
- export OLLAMA_MODEL=llama3.1:8b
- ```
-
- 3. **Enable circuit breaker:**
- ```bash
- export CIRCUIT_BREAKER_FAILURE_THRESHOLD=5
- export CIRCUIT_BREAKER_TIMEOUT=60000
- ```
-
- ---
-
- ### Slow First Request / Cold Start Warning
-
- **Issue:** First request is slow, or you see this warning in logs:
- ```
- WARN: Potential cold start detected - duration: 14088
- ```
-
- **Why this happens:**
- - **Ollama/llama.cpp**: Model loading into memory (10-30+ seconds for large models)
- - **Cloud providers**: Cold start initialization (2-5 seconds)
- - Ollama unloads models after 5 minutes of inactivity by default
-
- **Solutions for Ollama:**
-
- 1. **Keep models loaded with OLLAMA_KEEP_ALIVE** (Recommended):
- ```bash
- # macOS - set environment variable for Ollama
- launchctl setenv OLLAMA_KEEP_ALIVE "24h"
- # Then restart Ollama app
-
- # Linux (systemd)
- sudo systemctl edit ollama
- # Add: Environment="OLLAMA_KEEP_ALIVE=24h"
- sudo systemctl daemon-reload && sudo systemctl restart ollama
-
- # Docker
- docker run -e OLLAMA_KEEP_ALIVE=24h -d ollama/ollama
- ```
-
- 2. **Per-request keep alive:**
- ```bash
- curl http://localhost:11434/api/generate \
- -d '{"model":"llama3.1:8b","keep_alive":"24h"}'
- ```
-
- 3. **Warm up after startup:**
- ```bash
- # Send test request after starting Lynkr
- curl http://localhost:8081/v1/messages \
- -H "Content-Type: application/json" \
- -d '{"model":"claude-3-5-sonnet","max_tokens":10,"messages":[{"role":"user","content":"hi"}]}'
- ```
-
- **Keep Alive Values:**
- | Value | Behavior |
- |-------|----------|
- | `5m` | Default - unload after 5 minutes idle |
- | `24h` | Keep loaded for 24 hours |
- | `-1` | Never unload |
- | `0` | Unload immediately |
-
- **Note:** The cold start warning is informational - it helps identify latency issues but is not an error.
-
- ---
-
- ## Memory System Issues
-
- ### Too Many Memories
-
- **Issue:** Memory database growing too large
-
- **Solutions:**
-
- 1. **Reduce max count:**
- ```bash
- export MEMORY_MAX_COUNT=5000 # Default: 10000
- ```
-
- 2. **Reduce max age:**
- ```bash
- export MEMORY_MAX_AGE_DAYS=30 # Default: 90
- ```
-
- 3. **Increase surprise threshold:**
- ```bash
- export MEMORY_SURPRISE_THRESHOLD=0.5 # Default: 0.3 (higher = less stored)
- ```
-
- 4. **Manually prune:**
- ```bash
- # Delete old database
- rm data/memories.db
- # Will be recreated on next start
- ```
-
- ---
-
- ## Getting More Help
-
- ### Enable Debug Logging
-
- ```bash
- export LOG_LEVEL=debug
- lynkr start
-
- # Check logs for detailed request/response info
- ```
-
- ### Check Logs
-
- ```bash
- # Lynkr logs (in terminal where you started it)
- # Look for errors, warnings, response times
-
- # For systemd
- journalctl -u lynkr -f
-
- # For PM2
- pm2 logs lynkr
- ```
-
- ### Community Support
-
- - **[GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** - Ask questions
- - **[GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** - Report bugs
- - **[FAQ](faq.md)** - Frequently asked questions
-
- ---
-
- ## Still Having Issues?
-
- If you've tried the above solutions and still have problems:
-
- 1. **Enable debug logging** and check logs
- 2. **Search [GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** for similar problems
- 3. **Ask in [GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** with:
- - Lynkr version
- - Provider being used
- - Full error message
- - Debug logs
- - Steps to reproduce