lynkr 8.0.0 → 9.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.lynkr/telemetry.db +0 -0
- package/.lynkr/telemetry.db-shm +0 -0
- package/.lynkr/telemetry.db-wal +0 -0
- package/README.md +196 -322
- package/lynkr-skill.tar.gz +0 -0
- package/package.json +4 -3
- package/src/api/openai-router.js +64 -13
- package/src/api/providers-handler.js +171 -3
- package/src/api/router.js +9 -2
- package/src/clients/circuit-breaker.js +10 -247
- package/src/clients/codex-process.js +342 -0
- package/src/clients/codex-utils.js +143 -0
- package/src/clients/databricks.js +210 -63
- package/src/clients/resilience.js +540 -0
- package/src/clients/retry.js +22 -167
- package/src/clients/standard-tools.js +23 -0
- package/src/config/index.js +77 -0
- package/src/context/compression.js +42 -9
- package/src/context/distill.js +492 -0
- package/src/orchestrator/index.js +48 -8
- package/src/routing/complexity-analyzer.js +258 -5
- package/src/routing/index.js +12 -2
- package/src/routing/latency-tracker.js +148 -0
- package/src/routing/model-tiers.js +2 -0
- package/src/routing/quality-scorer.js +113 -0
- package/src/routing/telemetry.js +464 -0
- package/src/server.js +13 -12
- package/src/tools/code-graph.js +538 -0
- package/src/tools/code-mode.js +304 -0
- package/src/tools/index.js +4 -0
- package/src/tools/lazy-loader.js +18 -0
- package/src/tools/mcp-remote.js +7 -0
- package/src/tools/smart-selection.js +11 -0
- package/src/tools/tinyfish.js +358 -0
- package/src/tools/truncate.js +1 -0
- package/src/utils/payload.js +206 -0
- package/src/utils/perf-timer.js +80 -0
- package/.github/FUNDING.yml +0 -15
- package/.github/workflows/README.md +0 -215
- package/.github/workflows/ci.yml +0 -69
- package/.github/workflows/index.yml +0 -62
- package/.github/workflows/web-tools-tests.yml +0 -56
- package/CITATIONS.bib +0 -6
- package/DEPLOYMENT.md +0 -1001
- package/LYNKR-TUI-PLAN.md +0 -984
- package/PERFORMANCE-REPORT.md +0 -866
- package/PLAN-per-client-model-routing.md +0 -252
- package/docs/42642f749da6234f41b6b425c3bb07c9.txt +0 -1
- package/docs/BingSiteAuth.xml +0 -4
- package/docs/docs-style.css +0 -478
- package/docs/docs.html +0 -198
- package/docs/google5be250e608e6da39.html +0 -1
- package/docs/index.html +0 -577
- package/docs/index.md +0 -584
- package/docs/robots.txt +0 -4
- package/docs/sitemap.xml +0 -44
- package/docs/style.css +0 -1223
- package/docs/toon-integration-spec.md +0 -130
- package/documentation/README.md +0 -101
- package/documentation/api.md +0 -806
- package/documentation/claude-code-cli.md +0 -679
- package/documentation/codex-cli.md +0 -397
- package/documentation/contributing.md +0 -571
- package/documentation/cursor-integration.md +0 -734
- package/documentation/docker.md +0 -874
- package/documentation/embeddings.md +0 -762
- package/documentation/faq.md +0 -713
- package/documentation/features.md +0 -403
- package/documentation/headroom.md +0 -519
- package/documentation/installation.md +0 -758
- package/documentation/memory-system.md +0 -476
- package/documentation/production.md +0 -636
- package/documentation/providers.md +0 -1009
- package/documentation/routing.md +0 -476
- package/documentation/testing.md +0 -629
- package/documentation/token-optimization.md +0 -325
- package/documentation/tools.md +0 -697
- package/documentation/troubleshooting.md +0 -969
- package/final-test.js +0 -33
- package/headroom-sidecar/config.py +0 -93
- package/headroom-sidecar/requirements.txt +0 -14
- package/headroom-sidecar/server.py +0 -451
- package/monitor-agents.sh +0 -31
- package/scripts/audit-log-reader.js +0 -399
- package/scripts/compact-dictionary.js +0 -204
- package/scripts/test-deduplication.js +0 -448
- package/src/db/database.sqlite +0 -0
- package/te +0 -11622
- package/test/README.md +0 -212
- package/test/azure-openai-config.test.js +0 -213
- package/test/azure-openai-error-resilience.test.js +0 -238
- package/test/azure-openai-format-conversion.test.js +0 -354
- package/test/azure-openai-integration.test.js +0 -287
- package/test/azure-openai-routing.test.js +0 -175
- package/test/azure-openai-streaming.test.js +0 -171
- package/test/bedrock-integration.test.js +0 -457
- package/test/comprehensive-test-suite.js +0 -928
- package/test/config-validation.test.js +0 -207
- package/test/cursor-integration.test.js +0 -484
- package/test/format-conversion.test.js +0 -578
- package/test/hybrid-routing-integration.test.js +0 -269
- package/test/hybrid-routing-performance.test.js +0 -428
- package/test/llamacpp-integration.test.js +0 -882
- package/test/lmstudio-integration.test.js +0 -347
- package/test/memory/extractor.test.js +0 -398
- package/test/memory/retriever.test.js +0 -613
- package/test/memory/retriever.test.js.bak +0 -585
- package/test/memory/search.test.js +0 -537
- package/test/memory/search.test.js.bak +0 -389
- package/test/memory/store.test.js +0 -344
- package/test/memory/store.test.js.bak +0 -312
- package/test/memory/surprise.test.js +0 -300
- package/test/memory-performance.test.js +0 -472
- package/test/openai-integration.test.js +0 -683
- package/test/openrouter-error-resilience.test.js +0 -418
- package/test/passthrough-mode.test.js +0 -385
- package/test/performance-benchmark.js +0 -351
- package/test/performance-tests.js +0 -528
- package/test/routing.test.js +0 -225
- package/test/toon-compression.test.js +0 -131
- package/test/web-tools.test.js +0 -329
- package/test-agents-simple.js +0 -43
- package/test-cli-connection.sh +0 -33
- package/test-learning-unit.js +0 -126
- package/test-learning.js +0 -112
- package/test-parallel-agents.sh +0 -124
- package/test-parallel-direct.js +0 -155
- package/test-subagents.sh +0 -117
@@ -1,969 +0,0 @@
# Troubleshooting Guide

Common issues and solutions for Lynkr, Claude Code CLI, and Cursor IDE integration.

---

## Quick Diagnosis

### Is Lynkr Running?

```bash
# Check if Lynkr is running on port 8081
lsof -i :8081
# Should show node process

# Test health endpoint
curl http://localhost:8081/health/live
# Should return: {"status":"ok"}
```
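The health check above is easy to script. This is a minimal sketch, assuming the healthy response body is exactly the `{"status":"ok"}` shape shown above; adapt the match if your build returns a richer body.

```shell
# Interpret a /health/live response body. The substring match is an
# assumption based on the example response above, not a full JSON parse.
is_healthy() {
  case "$1" in
    *'"status":"ok"'*) return 0 ;;  # looks healthy
    *) return 1 ;;                  # anything else: treat as down
  esac
}

# Possible wiring (left commented so the sketch stays self-contained):
# body=$(curl -fsS http://localhost:8081/health/live) && is_healthy "$body"
```

Exiting nonzero on failure makes this usable in cron or CI readiness checks.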
### Are Environment Variables Set?

```bash
# Check core configuration
echo $MODEL_PROVIDER
echo $ANTHROPIC_BASE_URL # For Claude CLI

# Check provider-specific
echo $DATABRICKS_API_KEY
echo $AWS_BEDROCK_API_KEY
echo $OPENROUTER_API_KEY
```
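Rather than echoing variables one by one, the check can be batched. A minimal sketch; the variable names mirror the ones above, and which of them are actually required depends on your `MODEL_PROVIDER`.

```shell
# Report which of the given variables are unset or empty.
check_env() {
  missing=""
  for name in "$@"; do
    eval "value=\${$name}"               # indirect lookup by name
    [ -n "$value" ] || missing="$missing $name"
  done
  if [ -z "$missing" ]; then
    return 0
  fi
  echo "Missing:$missing" >&2
  return 1
}

# Usage: check_env MODEL_PROVIDER DATABRICKS_API_KEY
```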
### Enable Debug Logging

```bash
# In .env or export
export LOG_LEVEL=debug

# Restart Lynkr
lynkr start

# Check logs for detailed info
```

---
## Lynkr Server Issues

### Server Won't Start

**Issue:** Lynkr fails to start

**Symptoms:**
```
Error: MODEL_PROVIDER requires credentials
Error: Port 8081 already in use
Error: Cannot find module 'xxx'
```

**Solutions:**

1. **Missing credentials:**
   ```bash
   # Check provider is configured
   echo $MODEL_PROVIDER
   echo $DATABRICKS_API_KEY # or other provider key

   # If empty, set them:
   export MODEL_PROVIDER=databricks
   export DATABRICKS_API_KEY=your-key
   ```

2. **Port already in use:**
   ```bash
   # Find process using port 8081
   lsof -i :8081

   # Kill the process
   kill -9 <PID>

   # Or use different port
   export PORT=8082
   lynkr start
   ```

3. **Missing dependencies:**
   ```bash
   # Reinstall dependencies
   npm install

   # Or for global install
   npm install -g lynkr --force
   ```

---
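The "use a different port" step can be automated. A minimal sketch of the selection logic only; building the busy-port list from `lsof` is left as a comment because its output format varies by platform.

```shell
# Pick the first candidate port that is not in a space-separated
# list of busy ports.
pick_port() {
  busy="$1"; shift
  for candidate in "$@"; do
    case " $busy " in
      *" $candidate "*) ;;                 # busy, try the next one
      *) echo "$candidate"; return 0 ;;
    esac
  done
  return 1                                 # every candidate was busy
}

# Hypothetical wiring: derive "busy" from lsof, then
#   export PORT=$(pick_port "$busy" 8081 8082 8083)
```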
### Connection Refused

**Issue:** `ECONNREFUSED` when connecting to Lynkr

**Symptoms:**
- Claude CLI: `Connection refused`
- Cursor: `Network error`

**Solutions:**

1. **Verify Lynkr is running:**
   ```bash
   lsof -i :8081
   # Should show node process
   ```

2. **Check port number:**
   ```bash
   # For Claude CLI
   echo $ANTHROPIC_BASE_URL
   # Should be: http://localhost:8081

   # For Cursor
   # Check Base URL in settings: http://localhost:8081/v1
   ```

3. **Test health endpoint:**
   ```bash
   curl http://localhost:8081/health/live
   # Should return: {"status":"ok"}
   ```

4. **Check firewall:**
   ```bash
   # macOS: System Preferences → Security & Privacy → Firewall
   # Allow incoming connections for node
   ```

---
### High Memory Usage

**Issue:** Lynkr consuming too much memory

**Symptoms:**
- Memory usage > 2GB
- System slowdown
- Crashes due to OOM

**Solutions:**

1. **Enable load shedding:**
   ```bash
   export LOAD_SHEDDING_MEMORY_THRESHOLD=0.85
   export LOAD_SHEDDING_HEAP_THRESHOLD=0.90
   ```

2. **Reduce cache size:**
   ```bash
   export PROMPT_CACHE_MAX_ENTRIES=32 # Default: 64
   export MEMORY_MAX_COUNT=5000 # Default: 10000
   ```

3. **Restart Lynkr periodically:**
   ```bash
   # Use a process manager like PM2
   npm install -g pm2
   pm2 start lynkr --name lynkr --max-memory-restart 1G
   ```

---
## Provider Issues

### AWS Bedrock

**Issue:** Authentication failed

**Symptoms:**
- `401 Unauthorized`
- `Invalid API key`

**Solutions:**

1. **Check API key format:**
   ```bash
   # Should be a bearer token, not an Access Key ID
   echo $AWS_BEDROCK_API_KEY
   # Format: should look like a long random string
   ```

2. **Regenerate API key:**
   - AWS Console → Bedrock → API Keys
   - Generate new key
   - Update environment variable

3. **Check model access:**
   - AWS Console → Bedrock → Model access
   - Request access to Claude models
   - Wait for approval (can take 5-10 minutes)

**Issue:** Model not found

**Symptoms:**
- `Model not available in region`
- `Access denied to model`

**Solutions:**

1. **Use correct model ID:**
   ```bash
   # Claude 3.5 Sonnet
   export AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0

   # Claude 4.5 Sonnet (requires inference profile)
   export AWS_BEDROCK_MODEL_ID=us.anthropic.claude-sonnet-4-5-20250929-v1:0
   ```

2. **Check region:**
   ```bash
   # Not all models are available in all regions
   export AWS_BEDROCK_REGION=us-east-1 # Most models available here
   ```

---
### Databricks

**Issue:** Authentication failed

**Symptoms:**
- `401 Unauthorized`
- `Invalid token`

**Solutions:**

1. **Check token format:**
   ```bash
   echo $DATABRICKS_API_KEY
   # Should start with: dapi...
   ```

2. **Regenerate PAT:**
   - Databricks workspace → Settings → User Settings
   - Generate new token
   - Copy and update environment variable

3. **Check workspace URL:**
   ```bash
   echo $DATABRICKS_API_BASE
   # Should be: https://your-workspace.cloud.databricks.com
   # No trailing slash
   ```

**Issue:** Endpoint not found

**Symptoms:**
- `404 Not Found`
- `Endpoint does not exist`

**Solutions:**

1. **Check endpoint name:**
   ```bash
   # Default endpoint path
   export DATABRICKS_ENDPOINT_PATH=/serving-endpoints/databricks-claude-sonnet-4-5/invocations

   # Or customize for your endpoint
   export DATABRICKS_ENDPOINT_PATH=/serving-endpoints/your-endpoint-name/invocations
   ```

2. **Verify endpoint exists:**
   - Databricks workspace → Serving → Endpoints
   - Check endpoint name matches

---
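A stray trailing slash on `DATABRICKS_API_BASE` is a common cause of the 404s above. This sketch normalizes the base and joins it with the endpoint path, whatever slash combination the two variables happen to carry; normalizing is cheaper than debugging a 404.

```shell
# Join a workspace base URL and an endpoint path, tolerating a
# trailing slash on the base and an optional leading slash on the path.
endpoint_url() {
  base="${1%/}"                          # strip one trailing slash if present
  path="$2"
  case "$path" in
    /*) printf '%s%s\n' "$base" "$path" ;;
    *)  printf '%s/%s\n' "$base" "$path" ;;
  esac
}

# Usage: endpoint_url "$DATABRICKS_API_BASE" "$DATABRICKS_ENDPOINT_PATH"
```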
### OpenRouter

**Issue:** Rate limiting

**Symptoms:**
- `429 Too Many Requests`
- `Rate limit exceeded`

**Solutions:**

1. **Add credits:**
   - Visit openrouter.ai/account
   - Add more credits

2. **Switch models:**
   ```bash
   # Use cheaper model
   export OPENROUTER_MODEL=openai/gpt-4o-mini # Faster, cheaper
   ```

3. **Enable fallback:**
   ```bash
   export FALLBACK_ENABLED=true
   export FALLBACK_PROVIDER=databricks
   ```

**Issue:** Model not found

**Symptoms:**
- `Invalid model`
- `Model not available`

**Solutions:**

1. **Check model name format:**
   ```bash
   # Must include provider prefix
   export OPENROUTER_MODEL=anthropic/claude-3.5-sonnet # Correct
   # NOT: claude-3.5-sonnet (missing provider)
   ```

2. **Verify model exists:**
   - Visit openrouter.ai/models
   - Check model is available

---
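The prefix rule above lends itself to a startup sanity check. A minimal sketch: it only verifies the `provider/model` shape (non-empty text on both sides of a `/`), not whether the model actually exists on OpenRouter.

```shell
# Accept only names of the form provider/model, as OpenRouter requires.
has_provider_prefix() {
  case "$1" in
    ?*/?*) return 0 ;;   # at least one char, a slash, at least one char
    *)     return 1 ;;
  esac
}

# Usage: has_provider_prefix "$OPENROUTER_MODEL" || echo "missing provider prefix" >&2
```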
### Ollama

**Issue:** Connection refused

**Symptoms:**
- `ECONNREFUSED`
- `Cannot connect to Ollama`

**Solutions:**

1. **Start Ollama service:**
   ```bash
   ollama serve
   # Leave running in separate terminal
   ```

2. **Check endpoint:**
   ```bash
   echo $OLLAMA_ENDPOINT
   # Should be: http://localhost:11434

   # Test endpoint
   curl http://localhost:11434/api/tags
   # Should return JSON with models
   ```

**Issue:** Model not found

**Symptoms:**
- `Error: model "llama3.1:8b" not found`

**Solutions:**

1. **Pull the model:**
   ```bash
   ollama pull llama3.1:8b
   ```

2. **List available models:**
   ```bash
   ollama list
   ```

3. **Verify model name:**
   ```bash
   echo $OLLAMA_MODEL
   # Should match model from `ollama list`
   ```

**Issue:** Poor tool calling

**Symptoms:**
- Tools not invoked correctly
- Malformed tool calls

**Solutions:**

1. **Use tool-capable model:**
   ```bash
   # Good tool calling
   ollama pull llama3.1:8b # Recommended
   ollama pull qwen2.5:14b # Better (7b struggles)
   ollama pull mistral:7b-instruct

   # Poor tool calling
   # Avoid: qwen2.5-coder, codellama
   ```

2. **Upgrade to larger model:**
   ```bash
   ollama pull qwen2.5:14b # Better than 7b for tools
   ```

3. **Enable fallback:**
   ```bash
   export FALLBACK_ENABLED=true
   export FALLBACK_PROVIDER=databricks
   ```

---
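The "verify model name" step can be scripted against `ollama list` output, which prints the model name in its first column. The output is passed in as an argument so the function stays testable without a running Ollama; the column layout is an assumption about `ollama list`'s table format.

```shell
# True when the wanted model name appears as the first column of the
# given `ollama list` output (exact, fixed-string match).
model_pulled() {
  list_output="$1"; wanted="$2"
  printf '%s\n' "$list_output" | awk '{print $1}' | grep -Fqx "$wanted"
}

# Usage: model_pulled "$(ollama list)" "$OLLAMA_MODEL" || ollama pull "$OLLAMA_MODEL"
```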
### Moonshot AI (Kimi)

**Issue:** Rate limited (429)

**Symptoms:**
- `429 Too Many Requests`
- `Rate limit exceeded`
- Responses failing intermittently

**Solutions:**

1. **Reduce concurrency:**
   Moonshot has a max concurrency of ~3 requests. Lynkr retries automatically with backoff, but sustained high concurrency will trigger 429s.

2. **Use turbo model:**
   ```bash
   # Turbo has higher rate limits than the thinking model
   export MOONSHOT_MODEL=kimi-k2-turbo-preview
   ```

3. **Enable fallback:**
   ```bash
   export FALLBACK_ENABLED=true
   export FALLBACK_PROVIDER=openrouter
   ```

**Issue:** Authentication failed

**Symptoms:**
- `401 Unauthorized`
- `Invalid API key`

**Solutions:**

1. **Check API key format:**
   ```bash
   echo $MOONSHOT_API_KEY
   # Should start with: sk-
   ```

2. **Regenerate API key:**
   - Visit [platform.moonshot.ai](https://platform.moonshot.ai)
   - Generate a new key
   - Update environment variable

3. **Check endpoint:**
   ```bash
   echo $MOONSHOT_ENDPOINT
   # Should be: https://api.moonshot.ai/v1/chat/completions
   ```

**Issue:** Reasoning content displayed in output

**Symptoms:**
- Response includes chain-of-thought text before the actual answer
- Long preambles like "The user is asking me to..."

**Solutions:**

This happens when using the `kimi-k2-thinking` model. Lynkr should automatically strip reasoning content and only show the final answer. If you see reasoning in the output:

1. **Update Lynkr** to the latest version
2. **Switch to turbo model** if reasoning output is not needed:
   ```bash
   export MOONSHOT_MODEL=kimi-k2-turbo-preview
   ```

---
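The retry-with-backoff behavior mentioned under "Reduce concurrency" follows the usual capped-exponential pattern: delay = base × 2^attempt, up to a maximum. A minimal sketch of the delay schedule; the parameters here are illustrative, not Lynkr's actual configuration.

```shell
# Capped exponential backoff delay in milliseconds.
# backoff_ms BASE ATTEMPT CAP
backoff_ms() {
  base="$1"; attempt="$2"; cap="$3"
  delay="$base"
  i=0
  while [ "$i" -lt "$attempt" ]; do   # delay = base * 2^attempt
    delay=$((delay * 2))
    i=$((i + 1))
  done
  if [ "$delay" -gt "$cap" ]; then
    delay="$cap"
  fi
  echo "$delay"
}
```

With base 500 ms and an 8 s cap, successive 429s would wait 500, 1000, 2000, 4000, then 8000 ms.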
### llama.cpp

**Issue:** Server not responding

**Symptoms:**
- `ECONNREFUSED`
- `Connection timeout`

**Solutions:**

1. **Start llama-server:**
   ```bash
   cd llama.cpp
   ./llama-server -m model.gguf --port 8080
   ```

2. **Check endpoint:**
   ```bash
   echo $LLAMACPP_ENDPOINT
   # Should be: http://localhost:8080

   curl http://localhost:8080/health
   # Should return: {"status":"ok"}
   ```

**Issue:** Out of memory

**Symptoms:**
- Server crashes
- `Failed to allocate memory`

**Solutions:**

1. **Use smaller quantization:**
   ```bash
   # Q4 = 4-bit quantization (smaller, faster)
   # Q8 = 8-bit quantization (larger, better quality)

   # Download Q4 model instead of Q8
   wget https://huggingface.co/.../model.Q4_K_M.gguf # Smaller
   ```

2. **Reduce context size:**
   ```bash
   ./llama-server -m model.gguf --ctx-size 2048 # Default: 4096
   ```

3. **Enable GPU offloading:**
   ```bash
   # For NVIDIA
   ./llama-server -m model.gguf --n-gpu-layers 32

   # For Apple Silicon
   ./llama-server -m model.gguf --n-gpu-layers 32
   ```

---
## Claude Code CLI Issues

### CLI Can't Connect

**Issue:** Claude CLI can't reach Lynkr

**Symptoms:**
- `Connection refused`
- `Failed to connect to Anthropic API`

**Solutions:**

1. **Check environment variables:**
   ```bash
   echo $ANTHROPIC_BASE_URL
   # Should be: http://localhost:8081

   echo $ANTHROPIC_API_KEY
   # Can be any value: dummy, test, etc.
   ```

2. **Set permanently:**
   ```bash
   # Add to ~/.bashrc or ~/.zshrc
   export ANTHROPIC_BASE_URL=http://localhost:8081
   export ANTHROPIC_API_KEY=dummy

   # Reload
   source ~/.bashrc
   ```

3. **Test Lynkr:**
   ```bash
   curl http://localhost:8081/health/live
   # Should return: {"status":"ok"}
   ```

---
### Tools Not Working

**Issue:** File/Bash tools fail

**Symptoms:**
- `Tool execution failed`
- `Permission denied`
- Tools return errors

**Solutions:**

1. **Check tool execution mode:**
   ```bash
   echo $TOOL_EXECUTION_MODE
   # Should be: server (default) or client
   ```

2. **Check workspace root:**
   ```bash
   echo $WORKSPACE_ROOT
   # Should be valid directory

   # Verify permissions
   ls -la $WORKSPACE_ROOT
   ```

3. **For server mode:**
   ```bash
   # Lynkr needs read/write access to workspace
   chmod -R u+rw $WORKSPACE_ROOT
   ```

4. **Switch to client mode:**
   ```bash
   # Tools execute on CLI side
   export TOOL_EXECUTION_MODE=client
   lynkr start
   ```

---
### Slow Responses

**Issue:** Responses take 5+ seconds

**Solutions:**

1. **Check provider latency:**
   ```bash
   # In Lynkr logs, look for:
   # "Response time: 2500ms"
   ```

2. **Use local provider:**
   ```bash
   export MODEL_PROVIDER=ollama
   export OLLAMA_MODEL=llama3.1:8b
   ```

3. **Enable tier-based routing:**
   ```bash
   # Set all 4 TIER_* env vars to enable tier-based routing
   export TIER_SIMPLE=ollama:llama3.2
   export TIER_MEDIUM=openrouter:openai/gpt-4o-mini
   export TIER_COMPLEX=azure-openai:gpt-4o
   export TIER_REASONING=azure-openai:gpt-4o
   export FALLBACK_ENABLED=true
   ```

---
## Cursor IDE Issues

### Can't Connect to Lynkr

**Issue:** Cursor shows connection errors

**Solutions:**

1. **Check Base URL:**
   - Cursor Settings → Models → Base URL
   - ✅ Correct: `http://localhost:8081/v1`
   - ❌ Wrong: `http://localhost:8081` (missing `/v1`)

2. **Verify port:**
   ```bash
   echo $PORT
   # Should match Cursor Base URL port
   ```

3. **Test endpoint:**
   ```bash
   curl http://localhost:8081/v1/health
   # Should return: {"status":"ok"}
   ```

---
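Since the missing `/v1` suffix is the most common misconfiguration above, it is simpler to normalize the URL than to reject it. A minimal sketch:

```shell
# Ensure a base URL ends in /v1, stripping any trailing slash first.
ensure_v1() {
  base="${1%/}"
  case "$base" in
    */v1) printf '%s\n' "$base" ;;       # already correct
    *)    printf '%s/v1\n' "$base" ;;    # append the suffix
  esac
}

# Usage: ensure_v1 "http://localhost:8081"   # prints http://localhost:8081/v1
```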
### @Codebase Doesn't Work

**Issue:** @Codebase search returns no results

**Solutions:**

1. **Check embeddings configured:**
   ```bash
   curl http://localhost:8081/v1/embeddings \
     -H "Content-Type: application/json" \
     -d '{"input":"test","model":"text-embedding-ada-002"}'

   # Should return embeddings, not 501
   ```

2. **Configure embeddings:**
   ```bash
   # Option A: Ollama (local, free)
   ollama pull nomic-embed-text
   export OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text

   # Option B: OpenRouter (cloud)
   export OPENROUTER_API_KEY=sk-or-v1-your-key

   # Option C: OpenAI (cloud)
   export OPENAI_API_KEY=sk-your-key
   ```

3. **Restart Lynkr** after adding embeddings config

4. **Restart Cursor** to re-index codebase

See [Embeddings Guide](embeddings.md) for details.

---
### Poor Search Results

**Issue:** @Codebase returns irrelevant files

**Solutions:**

1. **Upgrade embedding model:**
   ```bash
   # Ollama: Use larger model
   ollama pull mxbai-embed-large
   export OLLAMA_EMBEDDINGS_MODEL=mxbai-embed-large

   # OpenRouter: Use code-specialized model
   export OPENROUTER_EMBEDDINGS_MODEL=voyage/voyage-code-2
   ```

2. **Switch to cloud embeddings:**
   - Local (Ollama/llama.cpp): Good
   - Cloud (OpenRouter/OpenAI): Excellent

3. **Re-index workspace:**
   - Close and reopen workspace in Cursor

---
### Model Not Found

**Issue:** Cursor can't find model

**Solutions:**

1. **Match model to provider:**
   - Bedrock: `claude-3.5-sonnet`
   - Databricks: `claude-sonnet-4.5`
   - OpenRouter: `anthropic/claude-3.5-sonnet`
   - Ollama: `llama3.1:8b` (actual model name)

2. **Try generic names:**
   - `claude-3.5-sonnet`
   - `gpt-4o`
   - Lynkr translates these across providers

---
## Embeddings Issues

### 501 Not Implemented

**Issue:** Embeddings endpoint returns 501

**Symptoms:**
```bash
curl http://localhost:8081/v1/embeddings
# Returns: {"error":"Embeddings not configured"}
```

**Solutions:**

Configure ONE embeddings provider:

```bash
# Option A: Ollama
ollama pull nomic-embed-text
export OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text

# Option B: llama.cpp
export LLAMACPP_EMBEDDINGS_ENDPOINT=http://localhost:8080/embeddings

# Option C: OpenRouter
export OPENROUTER_API_KEY=sk-or-v1-your-key

# Option D: OpenAI
export OPENAI_API_KEY=sk-your-key
```

Restart Lynkr after configuration.

---
### Ollama Embeddings Connection Refused

**Issue:** Can't connect to Ollama embeddings

**Solutions:**

1. **Verify Ollama is running:**
   ```bash
   curl http://localhost:11434/api/tags
   # Should return models list
   ```

2. **Check model is pulled:**
   ```bash
   ollama list
   # Should show: nomic-embed-text
   ```

3. **Test embeddings:**
   ```bash
   curl http://localhost:11434/api/embeddings \
     -d '{"model":"nomic-embed-text","prompt":"test"}'
   # Should return embedding vector
   ```

---
## Performance Issues

### High CPU Usage

**Issue:** Lynkr consuming 100% CPU

**Solutions:**

1. **Reduce concurrent requests:**
   ```bash
   export LOAD_SHEDDING_ACTIVE_REQUESTS_THRESHOLD=100
   ```

2. **Use tier-based routing to send simple requests to local models:**
   ```bash
   # Set all 4 TIER_* env vars to enable tier-based routing
   export TIER_SIMPLE=ollama:llama3.2
   export TIER_MEDIUM=openrouter:openai/gpt-4o-mini
   export TIER_COMPLEX=azure-openai:gpt-4o
   export TIER_REASONING=azure-openai:gpt-4o
   ```

3. **Enable circuit breaker:**
   ```bash
   export CIRCUIT_BREAKER_FAILURE_THRESHOLD=5
   export CIRCUIT_BREAKER_TIMEOUT=60000
   ```

---
### Slow First Request / Cold Start Warning

**Issue:** First request is slow, or you see this warning in logs:
```
WARN: Potential cold start detected - duration: 14088
```

**Why this happens:**
- **Ollama/llama.cpp**: Model loading into memory (10-30+ seconds for large models)
- **Cloud providers**: Cold start initialization (2-5 seconds)
- Ollama unloads models after 5 minutes of inactivity by default

**Solutions for Ollama:**

1. **Keep models loaded with OLLAMA_KEEP_ALIVE** (recommended):
   ```bash
   # macOS - set environment variable for Ollama
   launchctl setenv OLLAMA_KEEP_ALIVE "24h"
   # Then restart the Ollama app

   # Linux (systemd)
   sudo systemctl edit ollama
   # Add: Environment="OLLAMA_KEEP_ALIVE=24h"
   sudo systemctl daemon-reload && sudo systemctl restart ollama

   # Docker
   docker run -e OLLAMA_KEEP_ALIVE=24h -d ollama/ollama
   ```

2. **Per-request keep alive:**
   ```bash
   curl http://localhost:11434/api/generate \
     -d '{"model":"llama3.1:8b","keep_alive":"24h"}'
   ```

3. **Warm up after startup:**
   ```bash
   # Send a test request after starting Lynkr
   curl http://localhost:8081/v1/messages \
     -H "Content-Type: application/json" \
     -d '{"model":"claude-3-5-sonnet","max_tokens":10,"messages":[{"role":"user","content":"hi"}]}'
   ```

**Keep Alive Values:**

| Value | Behavior |
|-------|----------|
| `5m` | Default - unload after 5 minutes idle |
| `24h` | Keep loaded for 24 hours |
| `-1` | Never unload |
| `0` | Unload immediately |

**Note:** The cold start warning is informational - it helps identify latency issues but is not an error.

---
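For scripts that need to reason about the keep-alive table above, the duration strings can be converted to seconds. A minimal sketch covering only the unit suffixes shown (`s`, `m`, `h`) plus the sentinel `-1`:

```shell
# Convert an Ollama keep_alive value to seconds; -1 (never unload)
# passes through unchanged, bare numbers are taken as seconds.
keep_alive_seconds() {
  v="$1"
  case "$v" in
    -1) echo -1 ;;
    *h) echo $(( ${v%h} * 3600 )) ;;
    *m) echo $(( ${v%m} * 60 )) ;;
    *s) echo "${v%s}" ;;
    *)  echo "$v" ;;
  esac
}
```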
## Memory System Issues

### Too Many Memories

**Issue:** Memory database growing too large

**Solutions:**

1. **Reduce max count:**
   ```bash
   export MEMORY_MAX_COUNT=5000 # Default: 10000
   ```

2. **Reduce max age:**
   ```bash
   export MEMORY_MAX_AGE_DAYS=30 # Default: 90
   ```

3. **Increase surprise threshold:**
   ```bash
   export MEMORY_SURPRISE_THRESHOLD=0.5 # Default: 0.3 (higher = less stored)
   ```

4. **Manually prune:**
   ```bash
   # Delete old database
   rm data/memories.db
   # Will be recreated on next start
   ```

---
## Getting More Help

### Enable Debug Logging

```bash
export LOG_LEVEL=debug
lynkr start

# Check logs for detailed request/response info
```

### Check Logs

```bash
# Lynkr logs (in terminal where you started it)
# Look for errors, warnings, response times

# For systemd
journalctl -u lynkr -f

# For PM2
pm2 logs lynkr
```

### Community Support

- **[GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** - Ask questions
- **[GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** - Report bugs
- **[FAQ](faq.md)** - Frequently asked questions

---
## Still Having Issues?

If you've tried the above solutions and still have problems:

1. **Enable debug logging** and check logs
2. **Search [GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** for similar problems
3. **Ask in [GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** with:
   - Lynkr version
   - Provider being used
   - Full error message
   - Debug logs
   - Steps to reproduce