lynkr 4.0.0 → 4.2.0

# Frequently Asked Questions (FAQ)

Common questions about Lynkr, installation, configuration, and usage.

---

## General Questions

### What is Lynkr?

Lynkr is a self-hosted proxy server that enables Claude Code CLI and Cursor IDE to work with multiple LLM providers (Databricks, AWS Bedrock, OpenRouter, Ollama, etc.) instead of being locked to Anthropic's API.

**Key benefits:**
- 💰 **60-80% cost savings** through token optimization
- 🔓 **Provider flexibility** - Choose from 9+ providers
- 🔒 **Privacy** - Run 100% locally with Ollama or llama.cpp
- ✅ **Zero code changes** - Drop-in replacement for the Anthropic backend

---

### Can I use Lynkr with the official Claude Code CLI?

**Yes!** Lynkr is designed as a drop-in replacement for Anthropic's backend. Simply set `ANTHROPIC_BASE_URL` to point to your Lynkr server:

```bash
export ANTHROPIC_BASE_URL=http://localhost:8081
export ANTHROPIC_API_KEY=dummy  # Required by the CLI, but ignored by Lynkr
claude "Your prompt here"
```

All Claude Code CLI features work through Lynkr.

---

### Does Lynkr work with Cursor IDE?

**Yes!** Lynkr provides OpenAI-compatible endpoints that work with Cursor:

1. Start Lynkr: `lynkr start`
2. Configure Cursor Settings → Models:
   - **API Key:** `sk-lynkr` (any non-empty value)
   - **Base URL:** `http://localhost:8081/v1`
   - **Model:** Your provider's model (e.g., `claude-3.5-sonnet`)

All Cursor features work: chat (`Cmd+L`), inline edits (`Cmd+K`), and @Codebase search (with embeddings).

See [Cursor Integration Guide](cursor-integration.md) for details.
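Under the hood, Cursor talks to that Base URL with standard OpenAI-style chat requests. A minimal sketch of such a request body (illustrative; the endpoint path and field values are generic OpenAI-compatible assumptions, not Lynkr-specific):

```python
import json

# Illustrative OpenAI-compatible chat payload, as an IDE would POST it
# to http://localhost:8081/v1/chat/completions (path assumed).
payload = {
    "model": "claude-3.5-sonnet",  # whatever model your provider exposes
    "messages": [
        {"role": "user", "content": "Explain this function"},
    ],
    "stream": True,  # editors typically stream responses
}

body = json.dumps(payload)
print(body)
```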

---

### How much does Lynkr cost?

Lynkr itself is **100% FREE** and open source (Apache 2.0 license).

**Costs depend on your provider:**
- **Ollama/llama.cpp**: 100% FREE (runs on your hardware)
- **OpenRouter**: ~$5-10/month (100+ models)
- **AWS Bedrock**: ~$10-20/month (100+ models)
- **Databricks**: Enterprise pricing (contact Databricks)
- **Azure/OpenAI**: Standard provider pricing

**With token optimization**, Lynkr reduces provider costs by **60-80%** through smart tool selection, prompt caching, and memory deduplication.

---

### What's the difference between Lynkr and native Claude Code?

| Feature | Native Claude Code | Lynkr |
|---------|-------------------|-------|
| **Providers** | Anthropic only | 9+ providers |
| **Cost** | Full Anthropic pricing | 60-80% cheaper |
| **Local models** | ❌ Cloud-only | ✅ Ollama, llama.cpp |
| **Privacy** | ☁️ Cloud | 🔒 Can run 100% locally |
| **Token optimization** | ❌ None | ✅ 6 optimization phases |
| **MCP support** | Limited | ✅ Full orchestration |
| **Enterprise features** | Limited | ✅ Circuit breakers, metrics, K8s-ready |
| **Cost transparency** | Hidden | ✅ Full tracking |
| **License** | Proprietary | ✅ Apache 2.0 (open source) |

---

## Installation & Setup

### How do I install Lynkr?

**Option 1: NPM (Recommended)**
```bash
npm install -g lynkr
lynkr start
```

**Option 2: Homebrew (macOS)**
```bash
brew tap vishalveerareddy123/lynkr
brew install lynkr
lynkr start
```

**Option 3: Git Clone**
```bash
git clone https://github.com/vishalveerareddy123/Lynkr.git
cd Lynkr && npm install && npm start
```

See [Installation Guide](installation.md) for all methods.

---
### Which provider should I use?

**Depends on your priorities:**

**For Privacy (100% Local, FREE):**
- ✅ **Ollama** - Easy setup, 100% private
- ✅ **llama.cpp** - Maximum performance, GGUF models
- **Setup:** 5-15 minutes
- **Cost:** $0 (runs on your hardware)

**For Simplicity (Easiest Cloud):**
- ✅ **OpenRouter** - One key for 100+ models
- **Setup:** 2 minutes
- **Cost:** ~$5-10/month

**For AWS Ecosystem:**
- ✅ **AWS Bedrock** - 100+ models, Claude + alternatives
- **Setup:** 5 minutes
- **Cost:** ~$10-20/month

**For Enterprise:**
- ✅ **Databricks** - Claude 4.5, enterprise SLA
- **Setup:** 10 minutes
- **Cost:** Enterprise pricing

See [Provider Configuration Guide](providers.md) for a detailed comparison.

---

### Can I use multiple providers?

**Yes!** Lynkr supports hybrid routing:

```bash
# Use Ollama for simple requests, Databricks for complex ones
export PREFER_OLLAMA=true
export OLLAMA_MODEL=llama3.1:8b
export FALLBACK_ENABLED=true
export FALLBACK_PROVIDER=databricks
```

**How it works:**
- **0-2 tools**: Ollama (free, local, fast)
- **3-15 tools**: OpenRouter (if configured) or fallback
- **16+ tools**: Databricks/Azure (most capable)
- **Ollama failures**: Automatic transparent fallback

**Cost savings:** 65-100% for requests that stay on Ollama.
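The tiered routing above amounts to a simple threshold function on tool count. An illustrative sketch of the documented tiers (not Lynkr's actual implementation):

```python
def route(tool_count: int, openrouter_configured: bool = True) -> str:
    """Pick a provider tier from the number of tools in a request,
    following the 0-2 / 3-15 / 16+ tiers described above."""
    if tool_count <= 2:
        return "ollama"  # free, local, fast
    if tool_count <= 15:
        return "openrouter" if openrouter_configured else "fallback"
    return "databricks"  # most capable tier

print(route(1), route(7), route(20))  # ollama openrouter databricks
```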

---

## Provider-Specific Questions

### Can I use Ollama models with Lynkr and Cursor?

**Yes!** Ollama works for both chat AND embeddings (100% local, FREE):

**Chat setup:**
```bash
export MODEL_PROVIDER=ollama
export OLLAMA_MODEL=llama3.1:8b  # or qwen2.5-coder, mistral, etc.
lynkr start
```

**Embeddings setup (for @Codebase):**
```bash
ollama pull nomic-embed-text
export OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
```

**Recommended models:**
- **Chat**: `llama3.1:8b` - Good balance, tool calling supported
- **Chat**: `qwen2.5:14b` - Better reasoning (7b struggles with tools)
- **Embeddings**: `nomic-embed-text` (137M) - Best all-around

**100% local, 100% private, 100% FREE!** 🔒

---

### How do I enable @Codebase search in Cursor with Lynkr?

@Codebase semantic search requires embeddings. Choose ONE option:

**Option 1: Ollama (100% Local, FREE)** 🔒
```bash
ollama pull nomic-embed-text
export OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
```

**Option 2: llama.cpp (100% Local, FREE)** 🔒
```bash
./llama-server -m nomic-embed-text.gguf --port 8080 --embedding
export LLAMACPP_EMBEDDINGS_ENDPOINT=http://localhost:8080/embeddings
```

**Option 3: OpenRouter (Cloud, ~$0.01-0.10/month)**
```bash
export OPENROUTER_API_KEY=sk-or-v1-your-key
# Works automatically if you're already using OpenRouter for chat!
```

**Option 4: OpenAI (Cloud, ~$0.01-0.10/month)**
```bash
export OPENAI_API_KEY=sk-your-key
```

**After configuring, restart Lynkr.** @Codebase will then work in Cursor!

See [Embeddings Guide](embeddings.md) for details.
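Whichever option you pick, semantic search works the same way: code chunks are embedded once, a query is embedded at search time, and chunks are ranked by cosine similarity. A minimal sketch of that ranking step (the vectors here are toy values, not real embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" for three code chunks and a query.
chunks = {
    "auth.py":  [0.9, 0.1, 0.0],
    "db.py":    [0.1, 0.9, 0.1],
    "utils.py": [0.3, 0.3, 0.3],
}
query = [0.8, 0.2, 0.0]

ranked = sorted(chunks, key=lambda name: cosine(query, chunks[name]), reverse=True)
print(ranked[0])  # auth.py — the chunk most similar to the query
```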

---

### What are the performance differences between providers?

| Provider | Latency | Cost | Tool Support | Best For |
|----------|---------|------|--------------|----------|
| **Ollama** | 100-500ms | **FREE** | Good | Local, privacy, offline |
| **llama.cpp** | 50-300ms | **FREE** | Good | Performance, GPU |
| **OpenRouter** | 500ms-2s | $-$$ | Excellent | Flexibility, 100+ models |
| **Databricks/Azure** | 500ms-2s | $$$ | Excellent | Enterprise, Claude 4.5 |
| **AWS Bedrock** | 500ms-2s | $-$$$ | Excellent* | AWS, 100+ models |
| **OpenAI** | 500ms-2s | $$ | Excellent | GPT-4o, o1, o3 |

_* Tool calling is only supported by Claude models on Bedrock_

---

### Does AWS Bedrock support tool calling?

**Only Claude models support tool calling on Bedrock.**

✅ **Supported (with tools):**
- `anthropic.claude-3-5-sonnet-20241022-v2:0`
- `anthropic.claude-3-opus-20240229-v1:0`
- `us.anthropic.claude-sonnet-4-5-20250929-v1:0`

❌ **Not supported (no tools):**
- Amazon Titan models
- Meta Llama models
- Mistral models
- Cohere models
- AI21 models

Other models work via the Converse API but won't use Read/Write/Bash tools.

See [BEDROCK_MODELS.md](../BEDROCK_MODELS.md) for the complete model catalog.

---

## Features & Capabilities

### What is token optimization and how does it save costs?

Lynkr includes **6 token optimization phases** that reduce costs by **60-80%**:

1. **Smart Tool Selection** (50-70% reduction)
   - Filters tools based on request type
   - Only sends relevant tools to the model
   - Example: a chat query doesn't need git tools

2. **Prompt Caching** (30-45% reduction)
   - Caches repeated prompts
   - Reuses system prompts
   - Reduces redundant token usage

3. **Memory Deduplication** (20-30% reduction)
   - Removes duplicate memories
   - Compresses conversation history
   - Eliminates redundant context

4. **Tool Response Truncation** (15-25% reduction)
   - Truncates long tool outputs
   - Keeps only relevant portions
   - Reduces tool result tokens

5. **Dynamic System Prompts** (10-20% reduction)
   - Adapts prompts to the request type
   - Shorter prompts for simple queries
   - Longer prompts only when needed

6. **Conversation Compression** (15-25% reduction)
   - Summarizes old messages
   - Keeps recent context full
   - Compresses historical turns

**At 100k requests/month, this translates to $6,400-9,600/month in savings ($77k-115k/year).**

See [Token Optimization Guide](token-optimization.md) for details.
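The headline numbers above follow from a baseline of roughly $16,000/month in provider spend (the Claude Sonnet scenario used later in this FAQ) at a 40-60% overall reduction. The arithmetic, as a quick check:

```python
baseline = 16_000  # $/month, illustrative Claude Sonnet spend at 100k requests/month

monthly_savings = [round(baseline * r) for r in (0.40, 0.60)]
yearly_savings = [m * 12 for m in monthly_savings]

print(monthly_savings)  # [6400, 9600] -> the $6,400-9,600/month range
print(yearly_savings)   # [76800, 115200] -> roughly $77k-115k/year
```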

---

### What is the memory system?

Lynkr includes a **Titans-inspired long-term memory system** that remembers important context across conversations:

**Key features:**
- 🧠 **Surprise-Based Updates** - Only stores novel, important information
- 🔍 **Semantic Search** - Full-text search with a Porter stemmer
- 📊 **Multi-Signal Retrieval** - Ranks by recency, importance, relevance
- ⚡ **Automatic Integration** - Zero latency overhead (<50ms retrieval)
- 🛠️ **Management Tools** - `memory_search`, `memory_add`, `memory_forget`

**What gets remembered:**
- ✅ User preferences ("I prefer Python")
- ✅ Important decisions ("Decided to use React")
- ✅ Project facts ("This app uses PostgreSQL")
- ✅ New entities (first mention of files, functions)
- ❌ Greetings, confirmations, repeated info

**Configuration:**
```bash
export MEMORY_ENABLED=true            # Enable/disable
export MEMORY_RETRIEVAL_LIMIT=5       # Memories per request
export MEMORY_SURPRISE_THRESHOLD=0.3  # Min score to store
```

See [Memory System Guide](memory-system.md) for details.
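The `MEMORY_SURPRISE_THRESHOLD` gate can be pictured as a simple filter: score each candidate fact for novelty and store it only above the threshold. A toy sketch (the scoring here is deliberately crude and made up for illustration; Lynkr's actual surprise metric is not described in this FAQ):

```python
def should_store(candidate: str, known_facts: set, threshold: float = 0.3) -> bool:
    """Store a fact only if it is 'surprising' enough. Here novelty is
    crudely approximated: 0.0 for an exact repeat, 1.0 otherwise."""
    surprise = 0.0 if candidate in known_facts else 1.0
    return surprise >= threshold

known = {"User prefers Python"}
print(should_store("User prefers Python", known))       # repeated info -> False
print(should_store("This app uses PostgreSQL", known))  # novel fact -> True
```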

---

### What are tool execution modes?

Lynkr supports two tool execution modes:

**Server Mode (Default)**
```bash
export TOOL_EXECUTION_MODE=server
```
- Tools run on the machine running Lynkr
- Good for: standalone proxy, shared team server
- File operations access the server filesystem

**Client Mode (Passthrough)**
```bash
export TOOL_EXECUTION_MODE=client
```
- Tools run on the Claude Code CLI side (your local machine)
- Good for: local development, accessing local files
- Full integration with the local environment

---

### Does Lynkr support MCP (Model Context Protocol)?

**Yes!** Lynkr includes full MCP orchestration:

- 🔍 **Automatic Discovery** - Scans `~/.claude/mcp` for manifests
- 🚀 **JSON-RPC 2.0 Client** - Communicates with MCP servers
- 🛠️ **Dynamic Tool Registration** - Exposes MCP tools in the proxy
- 🔒 **Docker Sandbox** - Optional container isolation

**Configuration:**
```bash
export MCP_MANIFEST_DIRS=~/.claude/mcp
export MCP_SANDBOX_ENABLED=true
```

MCP tools integrate seamlessly with Claude Code CLI and Cursor.
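MCP traffic is plain JSON-RPC 2.0: each tool invocation is a request object with a method, params, and id. A minimal sketch of such a message (the method and params below are illustrative, not a specific Lynkr tool):

```python
import json

# Illustrative JSON-RPC 2.0 request, as an MCP client might send it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # MCP-style method name (assumed for illustration)
    "params": {"name": "search", "arguments": {"query": "TODO"}},
}

wire = json.dumps(request)
print(wire)
```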

---

## Deployment & Production

### Can I deploy Lynkr to production?

**Yes!** Lynkr includes 14 production-hardening features:

- **Reliability:** Circuit breakers, exponential backoff, load shedding
- **Observability:** Prometheus metrics, structured logging, health checks
- **Security:** Input validation, policy enforcement, sandboxing
- **Performance:** Prompt caching, token optimization, connection pooling
- **Deployment:** Kubernetes-ready health checks, graceful shutdown, Docker support

See [Production Hardening Guide](production.md) for details.

---

### How do I deploy with Docker?

**docker-compose (Recommended):**
```bash
git clone https://github.com/vishalveerareddy123/Lynkr.git
cd Lynkr
cp .env.example .env
# Edit .env with your credentials
docker-compose up -d
```

**Standalone Docker:**
```bash
docker build -t lynkr .
docker run -d -p 8081:8081 -e MODEL_PROVIDER=databricks -e DATABRICKS_API_KEY=your-key lynkr
```

See [Docker Deployment Guide](docker.md) for advanced options (GPU, K8s, volumes).

---

### What metrics does Lynkr collect?

Lynkr collects comprehensive metrics in Prometheus format:

**Request Metrics:**
- Request rate (requests/sec)
- Latency percentiles (p50, p95, p99)
- Error rate and types
- Status code distribution

**Token Metrics:**
- Token usage per request
- Token cost per request
- Cumulative token usage
- Cache hit rate

**System Metrics:**
- Memory usage
- CPU usage
- Active connections
- Circuit breaker state

**Access metrics:**
```bash
curl http://localhost:8081/metrics
# Returns Prometheus-format metrics
```

See [Production Guide](production.md) for metrics configuration.
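The `/metrics` output is the standard Prometheus text exposition format: one `name value` pair per line, with `#` comment lines for help and type metadata. A tiny parser sketch (the metric names below are hypothetical, not Lynkr's actual metric names):

```python
sample = """\
# HELP requests_total Total requests handled.
# TYPE requests_total counter
requests_total 1452
request_latency_seconds{quantile="0.99"} 0.87
"""

def parse_metrics(text: str) -> dict:
    """Parse 'name value' lines, skipping comments and blanks."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, value = line.rsplit(" ", 1)
        out[name] = float(value)
    return out

metrics = parse_metrics(sample)
print(metrics["requests_total"])  # 1452.0
```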

---

## Troubleshooting

### Lynkr won't start - what should I check?

1. **Missing credentials:**
   ```bash
   echo $MODEL_PROVIDER
   echo $DATABRICKS_API_KEY  # or other provider key
   ```

2. **Port already in use:**
   ```bash
   lsof -i :8081
   kill -9 <PID>
   # Or use a different port: export PORT=8082
   ```

3. **Missing dependencies:**
   ```bash
   npm install
   # Or: npm install -g lynkr --force
   ```

See [Troubleshooting Guide](troubleshooting.md) for more issues.

---

### Why is my first request slow?

**This is normal:**
- **Ollama/llama.cpp:** model loading (1-5 seconds)
- **Cloud providers:** cold start (2-5 seconds)
- **Subsequent requests are fast**

**Solutions:**

1. **Keep Ollama running:**
   ```bash
   ollama serve  # Keep running in the background
   ```

2. **Warm up after startup:**
   ```bash
   curl "http://localhost:8081/health/ready?deep=true"
   ```

---

### How do I enable debug logging?

```bash
export LOG_LEVEL=debug
lynkr start

# Check logs for detailed request/response info
```

---

## Cost & Pricing

### How much can I save with Lynkr?

**Scenario:** 100,000 requests/month, averaging 50k input tokens and 2k output tokens per request

| Provider | Without Lynkr | With Lynkr (60% savings) | Monthly Savings |
|----------|---------------|--------------------------|-----------------|
| **Claude Sonnet 4.5** | $16,000 | $6,400 | **$9,600** |
| **GPT-4o** | $12,000 | $4,800 | **$7,200** |
| **Ollama (Local)** | Cloud API costs | $0 | **$12,000+** |

**ROI:** $77k-115k/year in savings.

**Token optimization breakdown:**
- Smart tool selection: 50-70% reduction
- Prompt caching: 30-45% reduction
- Memory deduplication: 20-30% reduction
- Tool truncation: 15-25% reduction

---

### What's the cheapest setup?

**100% FREE Setup:**
```bash
# Chat: Ollama (local, free)
export MODEL_PROVIDER=ollama
export OLLAMA_MODEL=llama3.1:8b

# Embeddings: Ollama (local, free)
export OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
```

**Total cost: $0/month** 🔒
- 100% private (all data stays on your machine)
- Works offline
- Full Claude Code CLI + Cursor support

**Hardware requirements:**
- 8GB+ RAM for 7-8B models
- 16GB+ RAM for 14B models
- Optional: GPU for faster inference
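The RAM figures above follow from model size: a quantized model needs roughly `parameters × bytes-per-weight` of memory, plus headroom for context and runtime overhead. A back-of-the-envelope sketch (the 4-bit quantization and 1.5× headroom factor are illustrative assumptions, not measured values):

```python
def approx_ram_gb(params_billion: float, bits_per_weight: int = 4, headroom: float = 1.5) -> float:
    """Rough RAM needed to run a quantized model, in GB."""
    weights_gb = params_billion * bits_per_weight / 8  # e.g. 8B at 4-bit -> 4 GB
    return round(weights_gb * headroom, 1)

print(approx_ram_gb(8))   # 6.0 -> comfortable on an 8GB machine
print(approx_ram_gb(14))  # 10.5 -> wants a 16GB machine
```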

---

## Security & Privacy

### Is Lynkr secure for production use?

**Yes!** Lynkr includes multiple security features:

- **Input Validation:** Zero-dependency schema validation
- **Policy Enforcement:** Git, test, and web-fetch policies
- **Sandboxing:** Optional Docker isolation for MCP tools
- **Authentication:** API key support (provider-level)
- **Rate Limiting:** Load shedding during overload
- **Logging:** Structured logs with request ID correlation

**Best practices:**
- Run behind a reverse proxy (nginx, Caddy)
- Use HTTPS for external access
- Rotate API keys regularly
- Enable policy restrictions
- Monitor metrics and logs

---

### Can I run Lynkr completely offline?

**Yes!** Use local providers:

**Option 1: Ollama**
```bash
export MODEL_PROVIDER=ollama
export OLLAMA_MODEL=llama3.1:8b
export OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
```

**Option 2: llama.cpp**
```bash
export MODEL_PROVIDER=llamacpp
export LLAMACPP_ENDPOINT=http://localhost:8080
export LLAMACPP_EMBEDDINGS_ENDPOINT=http://localhost:8080/embeddings
```

**Result:**
- ✅ Zero internet required
- ✅ 100% private (all data stays local)
- ✅ Works in air-gapped environments
- ✅ Full Claude Code CLI + Cursor support

---

### Where is my data stored?

**Local data (on the machine running Lynkr):**
- **SQLite databases:** `data/` directory
  - `memories.db` - Long-term memories
  - `sessions.db` - Conversation history
  - `workspace-index.db` - Workspace metadata
- **Configuration:** `.env` file
- **Logs:** stdout (or a log file if configured)

**Provider data:**
- **Cloud providers:** Sent to the provider (Databricks, Bedrock, OpenRouter, etc.)
- **Local providers:** Stays on your machine (Ollama, llama.cpp)

**Privacy recommendation:**
Use Ollama or llama.cpp for 100% local, private operation.

---

## Getting Help

### Where can I get help?

- **[Troubleshooting Guide](troubleshooting.md)** - Common issues and solutions
- **[GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** - Community Q&A
- **[GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** - Report bugs
- **[Documentation](README.md)** - Complete guides

### How do I report a bug?

1. Check [GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues) for existing reports
2. If new, create an issue with:
   - Lynkr version
   - Provider being used
   - Full error message
   - Steps to reproduce
   - Debug logs (with `LOG_LEVEL=debug`)

### How can I contribute?

See [Contributing Guide](contributing.md) for:
- Code contributions
- Documentation improvements
- Bug reports
- Feature requests

---

## License

### What license is Lynkr under?

**Apache 2.0** - Free and open source.

You can:
- ✅ Use commercially
- ✅ Modify the code
- ✅ Distribute
- ✅ Sublicense
- ✅ Use privately

**No restrictions for:**
- Personal use
- Commercial use
- Internal company use
- Redistribution

See the [LICENSE](../LICENSE) file for details.