lynkr 3.1.0 → 3.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +150 -31
- package/docs/index.md +150 -18
- package/install.sh +63 -16
- package/package.json +1 -1
- package/scripts/setup.js +117 -43
- package/src/config/index.js +13 -0
- package/src/orchestrator/index.js +39 -0
- package/src/tools/smart-selection.js +356 -0
- package/README_SEO_OPTIMIZATION_PLAN.md +0 -557
package/README.md
CHANGED
@@ -13,12 +13,18 @@
 
 > **Production-ready Claude Code proxy server supporting Databricks, OpenRouter, Ollama & Azure. Features MCP integration, prompt caching & 60-80% token optimization savings.**
 
+## 🔖 Keywords
+
+`claude-code` `claude-proxy` `anthropic-api` `databricks-llm` `openrouter-integration` `ollama-local` `llama-cpp` `azure-openai` `azure-anthropic` `mcp-server` `prompt-caching` `token-optimization` `ai-coding-assistant` `llm-proxy` `self-hosted-ai` `git-automation` `code-generation` `developer-tools` `ci-cd-automation` `llm-gateway` `cost-reduction` `multi-provider-llm`
+
+---
+
 ## Table of Contents
 
 1. [Why Lynkr?](#why-lynkr)
 2. [Quick Start (3 minutes)](#quick-start-3-minutes)
 3. [Overview](#overview)
-4. [Supported
+4. [Supported AI Model Providers](#supported-ai-model-providers-databricks-openrouter-ollama-azure-llamacpp)
 5. [Lynkr vs Native Claude Code](#lynkr-vs-native-claude-code)
 6. [Core Capabilities](#core-capabilities)
    - [Repo Intelligence & Navigation](#repo-intelligence--navigation)
@@ -27,12 +33,12 @@
    - [Execution & Tooling](#execution--tooling)
    - [Workflow & Collaboration](#workflow--collaboration)
    - [UX, Monitoring, and Logs](#ux-monitoring-and-logs)
-7. [Production
+7. [Production-Ready Features for Enterprise Deployment](#production-ready-features-for-enterprise-deployment)
    - [Reliability & Resilience](#reliability--resilience)
    - [Observability & Monitoring](#observability--monitoring)
    - [Security & Governance](#security--governance)
 8. [Architecture](#architecture)
-9. [Getting Started](#getting-started)
+9. [Getting Started: Installation & Setup Guide](#getting-started-installation--setup-guide)
 10. [Configuration Reference](#configuration-reference)
 11. [Runtime Operations](#runtime-operations)
     - [Launching the Proxy](#launching-the-proxy)
@@ -47,9 +53,10 @@
 12. [Manual Test Matrix](#manual-test-matrix)
 13. [Troubleshooting](#troubleshooting)
 14. [Roadmap & Known Gaps](#roadmap--known-gaps)
-15. [FAQ](#faq)
-16. [References](#references)
-17. [
+15. [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
+16. [References & Further Reading](#references--further-reading)
+17. [Community & Adoption](#community--adoption)
+18. [License](#license)
 
 ---
 
@@ -61,13 +68,13 @@ Claude Code CLI is locked to Anthropic's API, limiting your choice of LLM providers.
 ### The Solution
 Lynkr is a **production-ready proxy server** that unlocks Claude Code CLI's full potential:
 
-- ✅ **Any LLM Provider** - Databricks, OpenRouter (100+ models), Ollama (local), Azure, OpenAI, llama.cpp
-- ✅ **60-80% Cost Reduction** - Built-in token optimization (5 optimization phases implemented)
-- ✅ **Zero Code Changes** - Drop-in replacement for Anthropic backend
-- ✅ **Local & Offline** - Run Claude Code with Ollama or llama.cpp (no internet required)
-- ✅ **Enterprise Features** - Circuit breakers, load balancing, metrics, K8s-ready health checks
-- ✅ **MCP Integration** - Automatically discover and orchestrate Model Context Protocol servers
-- ✅ **Privacy & Control** - Self-hosted, open-source (Apache 2.0), no vendor lock-in
+- ✅ **Any LLM Provider** - [Databricks, OpenRouter (100+ models), Ollama (local), Azure, OpenAI, llama.cpp](#supported-ai-model-providers-databricks-openrouter-ollama-azure-llamacpp)
+- ✅ **60-80% Cost Reduction** - Built-in [token optimization](#token-optimization-implementation) (5 optimization phases implemented)
+- ✅ **Zero Code Changes** - [Drop-in replacement](#connecting-claude-code-cli) for Anthropic backend
+- ✅ **Local & Offline** - Run Claude Code with [Ollama](#using-ollama-models) or [llama.cpp](#using-llamacpp-with-lynkr) (no internet required)
+- ✅ **Enterprise Features** - [Circuit breakers, load balancing, metrics, K8s-ready health checks](#production-ready-features-for-enterprise-deployment)
+- ✅ **MCP Integration** - Automatically discover and orchestrate [Model Context Protocol servers](#integrating-mcp-servers)
+- ✅ **Privacy & Control** - Self-hosted, open-source ([Apache 2.0](#license)), no vendor lock-in
 
 ### Perfect For
 - 🔧 **Developers** who want flexibility and cost control
@@ -83,8 +90,6 @@ Lynkr is a **production-ready proxy server** that unlocks Claude Code CLI's full potential:
 ### 1️⃣ Install
 ```bash
 npm install -g lynkr
-# or
-brew install lynkr
 ```
 
 ### 2️⃣ Configure Your Provider
@@ -144,6 +149,7 @@ Key highlights:
 - **Workspace awareness** – Local repo indexing, `CLAUDE.md` summaries, language-aware navigation, and Git helpers mirror core Claude Code workflows.
 - **Model Context Protocol (MCP) orchestration** – Automatically discovers MCP manifests, launches JSON-RPC 2.0 servers, and re-exposes their tools inside the proxy.
 - **Prompt caching** – Re-uses repeated prompts to reduce latency and token consumption, matching Claude's own cache semantics.
+- **Smart tool selection** – Intelligently filters tools based on request type (conversational, coding, research), reducing tool tokens by 50-70% for simple queries. Automatically enabled across all providers.
 - **Policy enforcement** – Environment-driven guardrails control Git operations, test requirements, web fetch fallbacks, and sandboxing rules. Input validation and consistent error handling ensure API reliability.
 
 The result is a production-ready, self-hosted alternative that stays close to Anthropic's ergonomics while providing enterprise-grade reliability, observability, and performance.
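The smart-selection logic itself lives in the new `package/src/tools/smart-selection.js` (+356 lines), which this diff only lists by name. As a rough, hedged sketch of the idea only — the category names, keyword heuristics, and `filterTools` helper below are assumptions for illustration, not the package's actual API:

```js
// Hypothetical sketch of request-type-based tool filtering; the real
// implementation in src/tools/smart-selection.js is not shown in this diff.
const CODING_HINTS = /\b(refactor|implement|fix|debug|test|diff|commit)\b/i;
const RESEARCH_HINTS = /\b(search|look up|docs|documentation|compare)\b/i;

function classifyRequest(message) {
  if (CODING_HINTS.test(message)) return "coding";
  if (RESEARCH_HINTS.test(message)) return "research";
  return "conversational"; // plain chat rarely needs tool schemas at all
}

function filterTools(tools, requestType) {
  const allowed = {
    conversational: new Set(),                      // send no tool schemas
    research: new Set(["web_search", "web_fetch"]), // assumed tool names
    coding: null,                                   // null => keep everything
  }[requestType];
  return allowed === null ? tools : tools.filter((t) => allowed.has(t.name));
}
```

Dropping tool schemas from a conversational request is where the quoted 50-70% tool-token reduction would come from: for simple queries the schemas, not the prompt text, dominate the token count.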
@@ -154,7 +160,7 @@ Further documentation and usage notes are available on [DeepWiki](https://deepwiki.com/vishalveerareddy123/Lynkr).
 
 ---
 
-## Supported
+## Supported AI Model Providers (Databricks, OpenRouter, Ollama, Azure, llama.cpp)
 
 Lynkr supports multiple AI model providers, giving you flexibility in choosing the right model for your needs:
 
@@ -164,7 +170,7 @@ Lynkr supports multiple AI model providers, giving you flexibility in choosing the right model for your needs:
 |----------|--------------|------------------|----------|
 | **Databricks** (Default) | `MODEL_PROVIDER=databricks` | Claude Sonnet 4.5, Claude Opus 4.5 | Production use, enterprise deployment |
 | **OpenAI** | `MODEL_PROVIDER=openai` | GPT-5, GPT-5.2, GPT-4o, GPT-4o-mini, GPT-4-turbo, o1, o1-mini | Direct OpenAI API access |
-| **Azure OpenAI** | `MODEL_PROVIDER=azure-openai` | GPT-5, GPT-5.2, GPT-4o, GPT-4o-mini, o1, o3 | Azure integration, Microsoft ecosystem |
+| **Azure OpenAI** | `MODEL_PROVIDER=azure-openai` | GPT-5, GPT-5.2, GPT-4o, GPT-4o-mini, o1, o3, Kimi-K2 | Azure integration, Microsoft ecosystem |
 | **Azure Anthropic** | `MODEL_PROVIDER=azure-anthropic` | Claude Sonnet 4.5, Claude Opus 4.5 | Azure-hosted Claude models |
 | **OpenRouter** | `MODEL_PROVIDER=openrouter` | 100+ models (GPT-4o, Claude, Gemini, Llama, etc.) | Model flexibility, cost optimization |
 | **Ollama** (Local) | `MODEL_PROVIDER=ollama` | Llama 3.1, Qwen2.5, Mistral, CodeLlama | Local/offline use, privacy, no API costs |
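All of these providers are selected through a single `MODEL_PROVIDER` environment variable. As a minimal orientation sketch only — the defaulting, validation, and function shape below are assumptions, not the contents of `src/config/index.js`:

```js
// Hedged sketch: resolving the active provider from the environment.
// The MODEL_PROVIDER values mirror the table above; defaulting to
// "databricks" and throwing on unknown values are assumptions.
const SUPPORTED_PROVIDERS = [
  "databricks", "openai", "azure-openai", "azure-anthropic",
  "openrouter", "ollama", "llamacpp",
];

function resolveProvider(env = process.env) {
  const provider = (env.MODEL_PROVIDER || "databricks").toLowerCase();
  if (!SUPPORTED_PROVIDERS.includes(provider)) {
    throw new Error(`Unsupported MODEL_PROVIDER: ${provider}`);
  }
  return provider;
}

console.log(resolveProvider({ MODEL_PROVIDER: "ollama" })); // -> "ollama"
```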
@@ -193,15 +199,8 @@
 
 ### **Azure OpenAI Specific Models**
 
-When using `MODEL_PROVIDER=azure-openai`, you can deploy any of
+When using `MODEL_PROVIDER=azure-openai`, you can deploy any of the models in Azure AI Foundry:
 
-| Model | Deployment Name | Capabilities | Best For |
-|-------|----------------|--------------|----------|
-| **GPT-4o** | `gpt-4o` | Text, vision, function calling | General-purpose, multimodal tasks |
-| **GPT-4o-mini** | `gpt-4o-mini` | Text, function calling | Fast responses, cost-effective |
-| **GPT-5** | `gpt-5-chat` or custom | Advanced reasoning, longer context | Complex problem-solving |
-| **o1-preview** | `o1-preview` | Deep reasoning, chain of thought | Mathematical, logic problems |
-| **o3-mini** | `o3-mini` | Efficient reasoning | Fast reasoning tasks |
 
 **Note**: Azure OpenAI deployment names are configurable via `AZURE_OPENAI_DEPLOYMENT` environment variable.
 
@@ -267,7 +266,7 @@ FALLBACK_PROVIDER=databricks # or azure-openai, openrouter, azure-anthropic
 | **Self-Hosted** | ❌ Managed service | ✅ **Full control** (open-source) |
 | **MCP Support** | Limited | ✅ **Full orchestration** with auto-discovery |
 | **Prompt Caching** | Basic | ✅ **Advanced caching** with deduplication |
-| **Token Optimization** | ❌ None | ✅ **
+| **Token Optimization** | ❌ None | ✅ **6 phases** (smart tool selection, history compression, tool truncation, dynamic prompts) |
 | **Enterprise Features** | Limited | ✅ **Circuit breakers, load shedding, metrics, K8s-ready** |
 | **Privacy** | ☁️ Cloud-dependent | ✅ **Self-hosted** (air-gapped deployments possible) |
 | **Cost Transparency** | Hidden usage | ✅ **Full tracking** (per-request, per-session, Prometheus metrics) |
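One of the phases named in the new table row is "tool truncation". Purely as an illustration of that idea — the character budget, split ratio, and marker text here are assumptions, not Lynkr's implementation:

```js
// Hedged sketch of tool-result truncation: cap oversized tool outputs
// before they re-enter the model context, keeping head and tail.
function truncateToolResult(text, maxChars = 4000) {
  if (text.length <= maxChars) return text;
  const head = text.slice(0, maxChars * 0.8);   // keep most of the start
  const tail = text.slice(-(maxChars * 0.2));   // and a slice of the end
  return `${head}\n...[truncated ${text.length - maxChars} chars]...\n${tail}`;
}
```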
@@ -305,6 +304,18 @@
 
 ---
 
+## 🚀 Ready to Get Started?
+
+**Reduce your Claude Code costs by 60-80% in under 3 minutes:**
+
+1. ⭐ **[Star this repo](https://github.com/vishalveerareddy123/Lynkr)** to show support and stay updated
+2. 📖 **[Follow the Quick Start Guide](#quick-start-3-minutes)** to install and configure Lynkr
+3. 💬 **[Join our Discord](https://discord.gg/qF7DDxrX)** for real-time community support
+4. 💬 **[Join the Discussion](https://github.com/vishalveerareddy123/Lynkr/discussions)** for questions and ideas
+5. 🐛 **[Report Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** to help improve Lynkr
+
+---
+
 ## Core Capabilities
 
 ### Long-Term Memory System (Titans-Inspired)
@@ -405,9 +416,9 @@ See [MEMORY_SYSTEM.md](MEMORY_SYSTEM.md) for complete documentation and [QUICKST
 
 ---
 
-## Production
+## Production-Ready Features for Enterprise Deployment
 
-Lynkr includes comprehensive production-
+Lynkr includes comprehensive production-hardened features designed for reliability, observability, and security in enterprise environments. These features add minimal performance overhead while providing robust operational capabilities for mission-critical AI deployments.
 
 ### Reliability & Resilience
 
@@ -575,6 +586,59 @@ Lynkr includes comprehensive production-ready features designed for reliability,
 └───────────────────┘
 ```
 
+### Request Flow Diagram
+
+```mermaid
+graph TB
+    A[Claude Code CLI] -->|HTTP POST /v1/messages| B[Lynkr Proxy Server]
+    B --> C{Middleware Stack}
+    C -->|Load Shedding| D{Load OK?}
+    D -->|Yes| E[Request Logging]
+    D -->|No| Z1[503 Service Unavailable]
+    E --> F[Metrics Collection]
+    F --> G[Input Validation]
+    G --> H[Orchestrator]
+
+    H --> I{Check Prompt Cache}
+    I -->|Cache Hit| J[Return Cached Response]
+    I -->|Cache Miss| K{Determine Provider}
+
+    K -->|Simple 0-2 tools| L[Ollama Local]
+    K -->|Moderate 3-14 tools| M[OpenRouter / Azure]
+    K -->|Complex 15+ tools| N[Databricks]
+
+    L --> O[Circuit Breaker Check]
+    M --> O
+    N --> O
+
+    O -->|Closed| P{Provider API}
+    O -->|Open| Z2[Fallback Provider]
+
+    P -->|Databricks| Q1[Databricks API]
+    P -->|OpenRouter| Q2[OpenRouter API]
+    P -->|Ollama| Q3[Ollama Local]
+    P -->|Azure| Q4[Azure Anthropic API]
+
+    Q1 --> R[Response Processing]
+    Q2 --> R
+    Q3 --> R
+    Q4 --> R
+    Z2 --> R
+
+    R --> S[Format Conversion]
+    S --> T[Cache Response]
+    T --> U[Update Metrics]
+    U --> V[Return to Client]
+    J --> V
+
+    style B fill:#4a90e2,stroke:#333,stroke-width:2px,color:#fff
+    style H fill:#7b68ee,stroke:#333,stroke-width:2px,color:#fff
+    style K fill:#f39c12,stroke:#333,stroke-width:2px
+    style P fill:#2ecc71,stroke:#333,stroke-width:2px,color:#fff
+```
+
+**Key Components:**
+
 - **`src/api/router.js`** – Express routes that accept Claude-compatible `/v1/messages` requests.
 - **`src/api/middleware/*`** – Production middleware stack:
   - `load-shedding.js` – Proactive overload protection with resource monitoring
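The "Determine Provider" branch in the diagram above routes on tool count. To make that heuristic concrete — the thresholds are read off the diagram, while the function itself is an assumption, not the orchestrator's code:

```js
// Illustrative routing by request complexity, using the tool-count
// bands from the diagram (0-2 simple, 3-14 moderate, 15+ complex).
function pickProvider(toolCount) {
  if (toolCount <= 2) return "ollama";      // simple: cheap local model
  if (toolCount <= 14) return "openrouter"; // moderate: OpenRouter / Azure
  return "databricks";                      // complex: strongest provider
}
```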
@@ -595,7 +659,7 @@
 
 ---
 
-## Getting Started
+## Getting Started: Installation & Setup Guide
 
 ### Prerequisites
 
@@ -2113,14 +2177,69 @@ The graceful shutdown and health check endpoints ensure zero-downtime deployments.
 
 ---
 
-## References
+## References & Further Reading
 
-
+### Academic & Technical Resources
 
+**Agentic AI Systems:**
 - **Zhang et al. (2024)**. *Agentic Context Engineering*. arXiv:2510.04618. [arXiv](https://arxiv.org/abs/2510.04618)
 
+**Long-Term Memory & RAG:**
+- **Mohtashami & Jaggi (2023)**. *Landmark Attention: Random-Access Infinite Context Length for Transformers*. [arXiv](https://arxiv.org/abs/2305.16300)
+- **Google DeepMind (2024)**. *Titans: Learning to Memorize at Test Time*. [arXiv](https://arxiv.org/abs/2411.07043)
+
 For BibTeX citations, see [CITATIONS.bib](CITATIONS.bib).
 
+### Official Documentation
+
+- [Claude Code CLI Documentation](https://docs.anthropic.com/en/docs/build-with-claude/claude-for-sheets) - Official Claude Code reference
+- [Model Context Protocol (MCP) Specification](https://spec.modelcontextprotocol.io/) - MCP protocol documentation
+- [Databricks Foundation Models](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) - Databricks LLM documentation
+- [Anthropic API Documentation](https://docs.anthropic.com/en/api/getting-started) - Claude API reference
+
+### Related Projects & Tools
+
+- [Ollama](https://ollama.ai/) - Local LLM runtime for running open-source models
+- [OpenRouter](https://openrouter.ai/) - Multi-provider LLM API gateway (100+ models)
+- [llama.cpp](https://github.com/ggerganov/llama.cpp) - High-performance C++ LLM inference engine
+- [LiteLLM](https://github.com/BerriAI/litellm) - Multi-provider LLM proxy (alternative approach)
+- [Awesome MCP Servers](https://github.com/punkpeye/awesome-mcp-servers) - Curated list of MCP server implementations
+
+---
+
+## Community & Adoption
+
+### Get Involved
+
+**⭐ Star this repository** to show your support and help others discover Lynkr!
+
+[](https://github.com/vishalveerareddy123/Lynkr)
+
+### Support & Resources
+
+- 🐛 **Report Issues:** [GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues) - Bug reports and feature requests
+- 💬 **Discussions:** [GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions) - Questions, ideas, and community help
+- 💬 **Discord Community:** [Join our Discord](https://discord.gg/qF7DDxrX) - Real-time chat and community support
+- 📚 **Documentation:** [DeepWiki](https://deepwiki.com/vishalveerareddy123/Lynkr) - Comprehensive guides and examples
+- 🔧 **Contributing:** [CONTRIBUTING.md](CONTRIBUTING.md) - How to contribute to Lynkr
+
+### Share Lynkr
+
+Help spread the word about Lynkr:
+
+- 🐦 [Share on Twitter](https://twitter.com/intent/tweet?text=Check%20out%20Lynkr%20-%20a%20production-ready%20Claude%20Code%20proxy%20with%20multi-provider%20support%20and%2060-80%25%20token%20savings!&url=https://github.com/vishalveerareddy123/Lynkr&hashtags=AI,ClaudeCode,LLM,OpenSource)
+- 💼 [Share on LinkedIn](https://www.linkedin.com/sharing/share-offsite/?url=https://github.com/vishalveerareddy123/Lynkr)
+- 📰 [Share on Hacker News](https://news.ycombinator.com/submitlink?u=https://github.com/vishalveerareddy123/Lynkr&t=Lynkr%20-%20Production-Ready%20Claude%20Code%20Proxy)
+- 📱 [Share on Reddit](https://www.reddit.com/submit?url=https://github.com/vishalveerareddy123/Lynkr&title=Lynkr%20-%20Production-Ready%20Claude%20Code%20Proxy%20with%20Multi-Provider%20Support)
+
+### Why Developers Choose Lynkr
+
+- 💰 **Massive cost savings** - Save 60-80% on token costs with built-in optimization
+- 🔓 **Provider freedom** - Choose from 7+ LLM providers (Databricks, OpenRouter, Ollama, Azure, llama.cpp)
+- 🏠 **Privacy & control** - Self-hosted, open-source, no vendor lock-in
+- 🚀 **Production-ready** - Enterprise features: circuit breakers, metrics, health checks
+- 🛠️ **Active development** - Regular updates, responsive maintainers, growing community
+
 ---
 
 ## License
package/docs/index.md
CHANGED
@@ -4,8 +4,9 @@
 <script defer data-url="https://devhunt.org/tool/lynkr" src="https://cdn.jsdelivr.net/gh/sidiDev/devhunt-banner/indexV0.js"></script>
 
 
-# Lynkr
-
+# Lynkr - Production-Ready Claude Code Proxy with Multi-Provider Support, MCP Integration & Token Optimization
+
+#### Lynkr is an open-source, production-ready Claude Code proxy that enables the Claude Code CLI to work with any LLM provider (Databricks, OpenRouter, Ollama, Azure, OpenAI, llama.cpp) without losing Anthropic backend features. It features MCP server orchestration, Git workflows, repo intelligence, workspace tools, prompt caching, and 60-80% token optimization for cost-effective LLM-powered development.
 <!--
 SEO Keywords:
 Databricks, Claude Code, Anthropic, Azure Anthropic,
@@ -14,6 +15,9 @@ developer tools, proxy, git automation, AI developer tools,
 prompt caching, Node.js
 -->
 
+## 🔖 Keywords
+
+`claude-code` `claude-proxy` `anthropic-api` `databricks-llm` `openrouter-integration` `ollama-local` `llama-cpp` `azure-openai` `azure-anthropic` `mcp-server` `prompt-caching` `token-optimization` `ai-coding-assistant` `llm-proxy` `self-hosted-ai` `git-automation` `code-generation` `developer-tools` `ci-cd-automation` `llm-gateway` `cost-reduction` `multi-provider-llm`
 
 ---
 
@@ -28,21 +32,21 @@ prompt caching, Node.js
 
 # 🚀 What is Lynkr?
 
-**Lynkr** is an open-source **Claude Code-compatible backend proxy** that lets you run the **Claude Code CLI** and Claude-style tools **directly against Databricks
+**Lynkr** is an open-source **Claude Code-compatible backend proxy** that lets you run the **Claude Code CLI** and Claude-style tools **directly against [Databricks, Azure, OpenRouter, Ollama, and llama.cpp](#-configuration-guide-for-multi-provider-support-databricks-azure-openrouter-ollama-llamacpp)** instead of the default Anthropic cloud.
 
 It enables full repo-aware LLM workflows:
 
-- code navigation
-- diff review
-- Git operations
-- test execution
-- workspace tools
-- Model Context Protocol (MCP) servers
-- repo indexing and project intelligence
-- prompt caching
-- conversational sessions
+- code navigation
+- diff review
+- [Git operations](#git-tools-and-workflow-automation)
+- test execution
+- workspace tools
+- [Model Context Protocol (MCP) servers](#full-model-context-protocol-mcp-integration)
+- [repo indexing and project intelligence](#-repo-intelligence--indexing)
+- [prompt caching](#prompt-caching-lru--ttl)
+- [conversational sessions with long-term memory](#-long-term-memory-system-titans-inspired)
 
-This makes Databricks a first-class environment for **AI-assisted software development**, **LLM agents**, **automated refactoring**, **debugging**, and **ML/ETL workflow exploration**.
+This makes Databricks and other providers a first-class environment for **AI-assisted software development**, **LLM agents**, **automated refactoring**, **debugging**, and **ML/ETL workflow exploration**.
 
 ---
 
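The workflow list above links prompt caching as "LRU + TTL". For readers who want the shape of that idea, here is a small self-contained sketch — the class name, key derivation, and default limits are assumptions for illustration, not Lynkr's cache module:

```js
// Hedged sketch of an LRU + TTL cache keyed by a prompt hash.
const crypto = require("node:crypto");

class PromptCache {
  constructor(maxEntries = 1000, ttlMs = 5 * 60 * 1000) {
    this.maxEntries = maxEntries;
    this.ttlMs = ttlMs;
    this.map = new Map(); // Map insertion order doubles as LRU order
  }
  key(prompt) {
    return crypto.createHash("sha256").update(prompt).digest("hex");
  }
  get(prompt) {
    const k = this.key(prompt);
    const hit = this.map.get(k);
    if (!hit) return undefined;
    if (Date.now() - hit.at > this.ttlMs) { // expired entry
      this.map.delete(k);
      return undefined;
    }
    this.map.delete(k); // re-insert to refresh LRU position
    this.map.set(k, hit);
    return hit.value;
  }
  set(prompt, value) {
    const k = this.key(prompt);
    this.map.delete(k);
    this.map.set(k, { value, at: Date.now() });
    if (this.map.size > this.maxEntries) {
      this.map.delete(this.map.keys().next().value); // evict oldest
    }
  }
}
```

A cache hit short-circuits the provider call entirely, which is why repeated prompts cut both latency and token consumption.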
@@ -170,7 +174,60 @@ Databricks / Azure Anthropic / OpenRouter / Ollama / llama.cpp
 
 ````
 
-
+## Request Flow Visualization
+
+```mermaid
+graph TB
+    A[Claude Code CLI] -->|HTTP POST /v1/messages| B[Lynkr Proxy Server]
+    B --> C{Middleware Stack}
+    C -->|Load Shedding| D{Load OK?}
+    D -->|Yes| E[Request Logging]
+    D -->|No| Z1[503 Service Unavailable]
+    E --> F[Metrics Collection]
+    F --> G[Input Validation]
+    G --> H[Orchestrator]
+
+    H --> I{Check Prompt Cache}
+    I -->|Cache Hit| J[Return Cached Response]
+    I -->|Cache Miss| K{Determine Provider}
+
+    K -->|Simple 0-2 tools| L[Ollama Local]
+    K -->|Moderate 3-14 tools| M[OpenRouter / Azure]
+    K -->|Complex 15+ tools| N[Databricks]
+
+    L --> O[Circuit Breaker Check]
+    M --> O
+    N --> O
+
+    O -->|Closed| P{Provider API}
+    O -->|Open| Z2[Fallback Provider]
+
+    P -->|Databricks| Q1[Databricks API]
+    P -->|OpenRouter| Q2[OpenRouter API]
+    P -->|Ollama| Q3[Ollama Local]
+    P -->|Azure| Q4[Azure Anthropic API]
+    P -->|llama.cpp| Q5[llama.cpp Server]
+
+    Q1 --> R[Response Processing]
+    Q2 --> R
+    Q3 --> R
+    Q4 --> R
+    Q5 --> R
+    Z2 --> R
+
+    R --> S[Format Conversion]
+    S --> T[Cache Response]
+    T --> U[Update Metrics]
+    U --> V[Return to Client]
+    J --> V
+
+    style B fill:#4a90e2,stroke:#333,stroke-width:2px,color:#fff
+    style H fill:#7b68ee,stroke:#333,stroke-width:2px,color:#fff
+    style K fill:#f39c12,stroke:#333,stroke-width:2px
+    style P fill:#2ecc71,stroke:#333,stroke-width:2px,color:#fff
+```
+
+**Key directories:**
 
 - `src/api` → Claude-compatible API proxy
 - `src/orchestrator` → LLM agent runtime loop
@@ -182,7 +239,7 @@ Key directories:
 
 ---
 
-# ⚙ Installation
+# ⚙ Getting Started: Installation & Setup Guide
 
 ## Global install (recommended)
 ```bash
@@ -208,7 +265,7 @@ npm start
 
 ---
 
-# 🔧
+# 🔧 Configuration Guide for Multi-Provider Support (Databricks, Azure, OpenRouter, Ollama, llama.cpp)
 
 ## Databricks Setup
 
@@ -664,12 +721,87 @@ This "working nature" allows Lynkr to not just execute commands, but to **learn
 
 ---
 
+# 📚 References & Further Reading
+
+## Academic & Technical Resources
+
+**Agentic AI Systems:**
+- **Zhang et al. (2024)**. *Agentic Context Engineering*. arXiv:2510.04618. [arXiv](https://arxiv.org/abs/2510.04618)
+
+**Long-Term Memory & RAG:**
+- **Mohtashami & Jaggi (2023)**. *Landmark Attention: Random-Access Infinite Context Length for Transformers*. [arXiv](https://arxiv.org/abs/2305.16300)
+- **Google DeepMind (2024)**. *Titans: Learning to Memorize at Test Time*. [arXiv](https://arxiv.org/abs/2411.07043)
+
+## Official Documentation
+
+- [Claude Code CLI Documentation](https://docs.anthropic.com/en/docs/build-with-claude/claude-for-sheets) - Official Claude Code reference
+- [Model Context Protocol (MCP) Specification](https://spec.modelcontextprotocol.io/) - MCP protocol documentation
+- [Databricks Foundation Models](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) - Databricks LLM documentation
+- [Anthropic API Documentation](https://docs.anthropic.com/en/api/getting-started) - Claude API reference
+
+## Related Projects & Tools
+
+- [Ollama](https://ollama.ai/) - Local LLM runtime for running open-source models
+- [OpenRouter](https://openrouter.ai/) - Multi-provider LLM API gateway (100+ models)
+- [llama.cpp](https://github.com/ggerganov/llama.cpp) - High-performance C++ LLM inference engine
+- [LiteLLM](https://github.com/BerriAI/litellm) - Multi-provider LLM proxy (alternative approach)
+- [Awesome MCP Servers](https://github.com/punkpeye/awesome-mcp-servers) - Curated list of MCP server implementations
+
+---
+
+# 🌟 Community & Adoption
+
+## Get Involved
+
+**⭐ Star this repository** to show your support and help others discover Lynkr!
+
+[](https://github.com/vishalveerareddy123/Lynkr)
+
+## Support & Resources
+
+- 🐛 **Report Issues:** [GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues) - Bug reports and feature requests
+- 💬 **Discussions:** [GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions) - Questions, ideas, and community help
+- 📚 **Documentation:** [DeepWiki](https://deepwiki.com/vishalveerareddy123/Lynkr) - Comprehensive guides and examples
+- 🔧 **Contributing:** [CONTRIBUTING.md](https://github.com/vishalveerareddy123/Lynkr/blob/main/CONTRIBUTING.md) - How to contribute to Lynkr
+
+## Share Lynkr
+
+Help spread the word about Lynkr:
+
+- 🐦 [Share on Twitter](https://twitter.com/intent/tweet?text=Check%20out%20Lynkr%20-%20a%20production-ready%20Claude%20Code%20proxy%20with%20multi-provider%20support%20and%2060-80%25%20token%20savings!&url=https://github.com/vishalveerareddy123/Lynkr&hashtags=AI,ClaudeCode,LLM,OpenSource)
+- 💼 [Share on LinkedIn](https://www.linkedin.com/sharing/share-offsite/?url=https://github.com/vishalveerareddy123/Lynkr)
+- 📰 [Share on Hacker News](https://news.ycombinator.com/submitlink?u=https://github.com/vishalveerareddy123/Lynkr&t=Lynkr%20-%20Production-Ready%20Claude%20Code%20Proxy)
+- 📱 [Share on Reddit](https://www.reddit.com/submit?url=https://github.com/vishalveerareddy123/Lynkr&title=Lynkr%20-%20Production-Ready%20Claude%20Code%20Proxy%20with%20Multi-Provider%20Support)
+
+## Why Developers Choose Lynkr
+
+- 💰 **Massive cost savings** - Save 60-80% on token costs with built-in optimization
+- 🔓 **Provider freedom** - Choose from 7+ LLM providers (Databricks, OpenRouter, Ollama, Azure, llama.cpp)
+- 🏠 **Privacy & control** - Self-hosted, open-source, no vendor lock-in
+- 🚀 **Production-ready** - Enterprise features: circuit breakers, metrics, health checks
+- 🛠️ **Active development** - Regular updates, responsive maintainers, growing community
+
+---
+
 # 🔗 Links
 
 * **GitHub**: [https://github.com/vishalveerareddy123/Lynkr](https://github.com/vishalveerareddy123/Lynkr)
 * **Docs**: [https://deepwiki.com/vishalveerareddy123/Lynkr](https://deepwiki.com/vishalveerareddy123/Lynkr)
 * **Issues**: [https://github.com/vishalveerareddy123/Lynkr/issues](https://github.com/vishalveerareddy123/Lynkr/issues)
 
-
+---
+
+## 🚀 Ready to Get Started?
+
+**Reduce your Claude Code costs by 60-80% today:**
+
+1. ⭐ **[Star this repo](https://github.com/vishalveerareddy123/Lynkr)** to show support and stay updated
+2. 📖 **[Install Lynkr](#-getting-started-installation--setup-guide)** and configure your preferred provider
+3. 💬 **[Join the Discussion](https://github.com/vishalveerareddy123/Lynkr/discussions)** for community support
+4. 🐛 **[Report Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** to help improve Lynkr
+
+---
+
+If you use Databricks, Azure Anthropic, OpenRouter, Ollama, or llama.cpp and want rich Claude Code workflows with massive cost savings, Lynkr gives you the control, flexibility, and extensibility you need.
 
-Feel free to open issues, contribute tools,
+Feel free to open issues, contribute tools, integrate with MCP servers, or help us improve the documentation!
package/install.sh
CHANGED
@@ -16,7 +16,7 @@ BLUE='\033[0;34m'
 NC='\033[0m' # No Color
 
 # Configuration
-REPO_URL="https://github.com/
+REPO_URL="https://github.com/Fast-Editor/Lynkr"
 INSTALL_DIR="${LYNKR_INSTALL_DIR:-$HOME/.lynkr}"
 BRANCH="${LYNKR_BRANCH:-main}"
 
@@ -115,12 +115,19 @@ install_dependencies() {
 # Create default .env file
 create_env_file() {
     if [ ! -f "$INSTALL_DIR/.env" ]; then
-        print_info "Creating
-
+        print_info "Creating .env configuration file..."
+
+        # Try to copy from .env.example (comprehensive configuration)
+        if [ -f "$INSTALL_DIR/.env.example" ]; then
+            cp "$INSTALL_DIR/.env.example" "$INSTALL_DIR/.env"
+            print_success "Created .env from .env.example (all features documented)"
+        else
+            # Fallback: create minimal .env if .env.example doesn't exist
+            cat > "$INSTALL_DIR/.env" << 'EOF'
 # Lynkr Configuration
-#
+# For full options, see: https://github.com/vishalveerareddy123/Lynkr/blob/main/.env.example
 
-# Model Provider (databricks, openai, azure-openai, azure-anthropic, openrouter, ollama)
+# Model Provider (databricks, openai, azure-openai, azure-anthropic, openrouter, ollama, llamacpp)
 MODEL_PROVIDER=ollama
 
 # Server Configuration
@@ -131,13 +138,29 @@ PREFER_OLLAMA=true
 OLLAMA_MODEL=qwen2.5-coder:7b
 OLLAMA_ENDPOINT=http://localhost:11434
 
-#
+# Long-Term Memory System (Titans-Inspired) - Enabled by default
+MEMORY_ENABLED=true
+MEMORY_RETRIEVAL_LIMIT=5
+MEMORY_SURPRISE_THRESHOLD=0.3
+
+# Uncomment and configure your preferred cloud provider:
 # OPENAI_API_KEY=sk-your-key
 # OPENROUTER_API_KEY=your-key
 # DATABRICKS_API_KEY=your-key
 # DATABRICKS_API_BASE=https://your-workspace.databricks.com
 EOF
-
+            print_success "Created basic .env file"
+        fi
+
+        echo ""
+        print_info "📝 Configuration ready! Key settings:"
+        echo "   • Default provider: Ollama (local, offline)"
+        echo "   • Memory system: Enabled (learns from conversations)"
+        echo "   • Port: 8080"
+        echo ""
+        print_warning "To use cloud providers (Databricks/OpenAI/Azure):"
+        echo "   Edit: ${BLUE}nano $INSTALL_DIR/.env${NC}"
+        echo "   Add your API keys and change MODEL_PROVIDER"
     else
         print_warning ".env file already exists, skipping"
     fi
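The generated `.env` turns the memory system on and exposes `MEMORY_SURPRISE_THRESHOLD` and `MEMORY_RETRIEVAL_LIMIT` knobs. As an illustration only of what such settings typically gate — the scoring and store API here are assumptions, not Lynkr's Titans-inspired implementation:

```js
// Hedged sketch: surprise-gated memory writes and capped retrieval.
// The novelty score would come from the model/embedding layer (assumed).
const SURPRISE_THRESHOLD = Number(process.env.MEMORY_SURPRISE_THRESHOLD ?? 0.3);
const RETRIEVAL_LIMIT = Number(process.env.MEMORY_RETRIEVAL_LIMIT ?? 5);

function maybeRemember(memories, item, surpriseScore) {
  // Only sufficiently novel ("surprising") items are persisted.
  if (surpriseScore >= SURPRISE_THRESHOLD) memories.push(item);
}

function retrieve(memories, rankByRelevance) {
  // Return at most RETRIEVAL_LIMIT memories, best-ranked first.
  return [...memories].sort(rankByRelevance).slice(0, RETRIEVAL_LIMIT);
}
```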
@@ -180,23 +203,47 @@ print_next_steps() {
     print_success "Lynkr installed successfully!"
     echo "=============================="
     echo ""
-    echo "
+    echo "🚀 Quick Start Guide:"
     echo ""
-    echo "
-    echo "
+    echo "  ${GREEN}Option A: Use Ollama (Free, Local, Offline)${NC}"
+    echo "  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
+    echo ""
+    echo "  1. Install Ollama (if not already installed):"
+    echo "     ${BLUE}lynkr-setup${NC}  ${GREEN}← Automatic Ollama installer${NC}"
     echo ""
     echo "  2. Start Lynkr:"
-    echo "     ${BLUE}
-    echo "     or"
-    echo "     ${BLUE}lynkr${NC} (if in PATH)"
+    echo "     ${BLUE}lynkr${NC}"
     echo ""
-    echo "  3. Configure Claude CLI
+    echo "  3. Configure Claude Code CLI:"
     echo "     ${BLUE}export ANTHROPIC_BASE_URL=http://localhost:8080${NC}"
+    echo "     ${BLUE}claude${NC}"
     echo ""
-    echo "
+    echo "  ${YELLOW}Option B: Use Cloud Providers (Databricks/OpenAI/Azure)${NC}"
+    echo "  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
+    echo ""
+    echo "  1. Edit configuration file:"
+    echo "     ${BLUE}nano $INSTALL_DIR/.env${NC}"
+    echo ""
+    echo "     Update these lines:"
+    echo "     ${BLUE}MODEL_PROVIDER=databricks${NC}  ${GREEN}← Change from 'ollama'${NC}"
+    echo "     ${BLUE}DATABRICKS_API_KEY=dapi_xxxxx${NC}  ${GREEN}← Add your key${NC}"
+    echo "     ${BLUE}DATABRICKS_API_BASE=https://your-workspace.databricks.com${NC}"
+    echo ""
+    echo "  2. Start Lynkr:"
+    echo "     ${BLUE}lynkr${NC}"
+    echo ""
+    echo "  3. Configure Claude Code CLI:"
+    echo "     ${BLUE}export ANTHROPIC_BASE_URL=http://localhost:8080${NC}"
+    echo "     ${BLUE}export ANTHROPIC_API_KEY=any-non-empty-value${NC}  ${GREEN}← Placeholder${NC}"
     echo "     ${BLUE}claude${NC}"
     echo ""
-    echo "
+    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
+    echo ""
+    echo "💡 ${YELLOW}Tip:${NC} Memory system is enabled by default"
+    echo "   Lynkr remembers preferences and project context across sessions"
+    echo ""
+    echo "📚 Documentation: ${BLUE}https://github.com/vishalveerareddy123/Lynkr${NC}"
+    echo "💬 Discord: ${BLUE}https://discord.gg/qF7DDxrX${NC}"
     echo ""
 }
 
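Once the installer finishes and `lynkr` is listening on port 8080, a quick way to confirm the proxy accepts Anthropic-style traffic is a direct `/v1/messages` call, per the setup steps printed above. A hedged sketch — the model identifier and header handling are assumptions; the endpoint and placeholder key come from the install notes:

```js
// Smoke-test sketch against a locally running Lynkr proxy.
// Requires Node 18+ (global fetch) run as an ES module (top-level await).
const res = await fetch("http://localhost:8080/v1/messages", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    "x-api-key": "any-non-empty-value", // placeholder, per the steps above
  },
  body: JSON.stringify({
    model: "claude-sonnet-4-5",          // assumed model name
    max_tokens: 64,
    messages: [{ role: "user", content: "Say hello" }],
  }),
});
console.log(res.status, await res.json());
```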
package/package.json
CHANGED