lynkr 3.0.0 â 3.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +201 -21
- package/README.md +626 -145
- package/docs/index.md +150 -18
- package/install.sh +63 -16
- package/package.json +2 -2
- package/scripts/setup.js +117 -43
- package/src/api/router.js +78 -0
- package/src/clients/openrouter-utils.js +51 -7
- package/src/config/index.js +51 -0
- package/src/context/budget.js +326 -0
- package/src/context/compression.js +397 -0
- package/src/memory/format.js +156 -0
- package/src/memory/retriever.js +55 -14
- package/src/memory/search.js +36 -12
- package/src/memory/store.js +61 -13
- package/src/memory/surprise.js +56 -15
- package/src/orchestrator/index.js +189 -2
- package/src/prompts/system.js +320 -0
- package/src/tools/index.js +9 -0
- package/src/tools/smart-selection.js +356 -0
- package/src/tools/truncate.js +105 -0
- package/src/utils/tokens.js +217 -0
- package/test/llamacpp-integration.test.js +198 -0
- package/test/memory/extractor.test.js +34 -6
- package/test/memory/retriever.test.js +45 -15
- package/test/memory/retriever.test.js.bak +585 -0
- package/test/memory/search.test.js +160 -12
- package/test/memory/search.test.js.bak +389 -0
- package/test/memory/store.test.js +57 -25
- package/test/memory/store.test.js.bak +312 -0
- package/test/memory/surprise.test.js +1 -1
package/README.md
CHANGED
|
@@ -1,38 +1,46 @@
|
|
|
1
|
-
# Lynkr
|
|
1
|
+
# Lynkr - Production-Ready Claude Code Proxy with Multi-Provider Support, MCP Integration & Token Optimization
|
|
2
2
|
|
|
3
|
-
[](https://www.npmjs.com/package/lynkr)
|
|
4
|
-
[](https://github.com/vishalveerareddy123/homebrew-lynkr)
|
|
5
|
-
[](https://deepwiki.com/vishalveerareddy123/Lynkr)
|
|
7
|
-
[](https://www.databricks.com/)
|
|
8
|
-
[](https://openai.com/)
|
|
9
|
-
[](https://ollama.ai/)
|
|
10
|
-
[](https://github.com/ggerganov/llama.cpp)
|
|
11
|
-
[](https://www.indexnow.org/)
|
|
12
|
-
[](https://devhunt.org/tool/lynkr)
|
|
3
|
+
[](https://www.npmjs.com/package/lynkr "Lynkr NPM Package - Claude Code Proxy Server")
|
|
4
|
+
[](https://github.com/vishalveerareddy123/homebrew-lynkr "Install Lynkr via Homebrew")
|
|
5
|
+
[](LICENSE "Apache 2.0 License - Open Source Claude Code Alternative")
|
|
6
|
+
[](https://deepwiki.com/vishalveerareddy123/Lynkr "Lynkr Documentation on DeepWiki")
|
|
7
|
+
[](https://www.databricks.com/ "Databricks Claude Integration")
|
|
8
|
+
[](https://openai.com/ "OpenAI GPT Integration")
|
|
9
|
+
[](https://ollama.ai/ "Local Ollama Model Support")
|
|
10
|
+
[](https://github.com/ggerganov/llama.cpp "llama.cpp GGUF Model Support")
|
|
11
|
+
[](https://www.indexnow.org/ "SEO Optimized with IndexNow")
|
|
12
|
+
[](https://devhunt.org/tool/lynkr "Lynkr on DevHunt")
|
|
13
13
|
|
|
14
|
+
> **Production-ready Claude Code proxy server supporting Databricks, OpenRouter, Ollama & Azure. Features MCP integration, prompt caching & 60-80% token optimization savings.**
|
|
14
15
|
|
|
15
|
-
|
|
16
|
+
## ð Keywords
|
|
17
|
+
|
|
18
|
+
`claude-code` `claude-proxy` `anthropic-api` `databricks-llm` `openrouter-integration` `ollama-local` `llama-cpp` `azure-openai` `azure-anthropic` `mcp-server` `prompt-caching` `token-optimization` `ai-coding-assistant` `llm-proxy` `self-hosted-ai` `git-automation` `code-generation` `developer-tools` `ci-cd-automation` `llm-gateway` `cost-reduction` `multi-provider-llm`
|
|
19
|
+
|
|
20
|
+
---
|
|
16
21
|
|
|
17
22
|
## Table of Contents
|
|
18
23
|
|
|
19
|
-
1. [
|
|
20
|
-
2. [
|
|
21
|
-
3. [
|
|
24
|
+
1. [Why Lynkr?](#why-lynkr)
|
|
25
|
+
2. [Quick Start (3 minutes)](#quick-start-3-minutes)
|
|
26
|
+
3. [Overview](#overview)
|
|
27
|
+
4. [Supported AI Model Providers](#supported-ai-model-providers-databricks-openrouter-ollama-azure-llamacpp)
|
|
28
|
+
5. [Lynkr vs Native Claude Code](#lynkr-vs-native-claude-code)
|
|
29
|
+
6. [Core Capabilities](#core-capabilities)
|
|
22
30
|
- [Repo Intelligence & Navigation](#repo-intelligence--navigation)
|
|
23
31
|
- [Git Workflow Enhancements](#git-workflow-enhancements)
|
|
24
32
|
- [Diff & Change Management](#diff--change-management)
|
|
25
33
|
- [Execution & Tooling](#execution--tooling)
|
|
26
34
|
- [Workflow & Collaboration](#workflow--collaboration)
|
|
27
35
|
- [UX, Monitoring, and Logs](#ux-monitoring-and-logs)
|
|
28
|
-
|
|
36
|
+
7. [Production-Ready Features for Enterprise Deployment](#production-ready-features-for-enterprise-deployment)
|
|
29
37
|
- [Reliability & Resilience](#reliability--resilience)
|
|
30
38
|
- [Observability & Monitoring](#observability--monitoring)
|
|
31
39
|
- [Security & Governance](#security--governance)
|
|
32
|
-
|
|
33
|
-
|
|
34
|
-
|
|
35
|
-
|
|
40
|
+
8. [Architecture](#architecture)
|
|
41
|
+
9. [Getting Started: Installation & Setup Guide](#getting-started-installation--setup-guide)
|
|
42
|
+
10. [Configuration Reference](#configuration-reference)
|
|
43
|
+
11. [Runtime Operations](#runtime-operations)
|
|
36
44
|
- [Launching the Proxy](#launching-the-proxy)
|
|
37
45
|
- [Connecting Claude Code CLI](#connecting-claude-code-cli)
|
|
38
46
|
- [Using Ollama Models](#using-ollama-models)
|
|
@@ -42,12 +50,89 @@
|
|
|
42
50
|
- [Integrating MCP Servers](#integrating-mcp-servers)
|
|
43
51
|
- [Health Checks & Monitoring](#health-checks--monitoring)
|
|
44
52
|
- [Metrics & Observability](#metrics--observability)
|
|
45
|
-
|
|
46
|
-
|
|
47
|
-
|
|
48
|
-
|
|
49
|
-
|
|
50
|
-
|
|
53
|
+
12. [Manual Test Matrix](#manual-test-matrix)
|
|
54
|
+
13. [Troubleshooting](#troubleshooting)
|
|
55
|
+
14. [Roadmap & Known Gaps](#roadmap--known-gaps)
|
|
56
|
+
15. [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
|
|
57
|
+
16. [References & Further Reading](#references--further-reading)
|
|
58
|
+
17. [Community & Adoption](#community--adoption)
|
|
59
|
+
18. [License](#license)
|
|
60
|
+
|
|
61
|
+
---
|
|
62
|
+
|
|
63
|
+
## Why Lynkr?
|
|
64
|
+
|
|
65
|
+
### The Problem
|
|
66
|
+
Claude Code CLI is locked to Anthropic's API, limiting your choice of LLM providers, increasing costs, and preventing local/offline usage.
|
|
67
|
+
|
|
68
|
+
### The Solution
|
|
69
|
+
Lynkr is a **production-ready proxy server** that unlocks Claude Code CLI's full potential:
|
|
70
|
+
|
|
71
|
+
- â
**Any LLM Provider** - [Databricks, OpenRouter (100+ models), Ollama (local), Azure, OpenAI, llama.cpp](#supported-ai-model-providers-databricks-openrouter-ollama-azure-llamacpp)
|
|
72
|
+
- â
**60-80% Cost Reduction** - Built-in [token optimization](#token-optimization-implementation) (5 optimization phases implemented)
|
|
73
|
+
- â
**Zero Code Changes** - [Drop-in replacement](#connecting-claude-code-cli) for Anthropic backend
|
|
74
|
+
- â
**Local & Offline** - Run Claude Code with [Ollama](#using-ollama-models) or [llama.cpp](#using-llamacpp-with-lynkr) (no internet required)
|
|
75
|
+
- â
**Enterprise Features** - [Circuit breakers, load balancing, metrics, K8s-ready health checks](#production-ready-features-for-enterprise-deployment)
|
|
76
|
+
- â
**MCP Integration** - Automatically discover and orchestrate [Model Context Protocol servers](#integrating-mcp-servers)
|
|
77
|
+
- â
**Privacy & Control** - Self-hosted, open-source ([Apache 2.0](#license)), no vendor lock-in
|
|
78
|
+
|
|
79
|
+
### Perfect For
|
|
80
|
+
- ð§ **Developers** who want flexibility and cost control
|
|
81
|
+
- ðĒ **Enterprises** needing self-hosted AI with observability
|
|
82
|
+
- ð **Privacy-focused teams** requiring local model execution
|
|
83
|
+
- ð° **Cost-conscious projects** seeking token optimization
|
|
84
|
+
- ð **DevOps teams** wanting production-ready AI infrastructure
|
|
85
|
+
|
|
86
|
+
---
|
|
87
|
+
|
|
88
|
+
## Quick Start (3 minutes)
|
|
89
|
+
|
|
90
|
+
### 1ïļâĢ Install
|
|
91
|
+
```bash
|
|
92
|
+
npm install -g lynkr
|
|
93
|
+
```
|
|
94
|
+
|
|
95
|
+
### 2ïļâĢ Configure Your Provider
|
|
96
|
+
```bash
|
|
97
|
+
# Option A: Use local Ollama (free, offline)
|
|
98
|
+
export MODEL_PROVIDER=ollama
|
|
99
|
+
export OLLAMA_MODEL=llama3.1:8b
|
|
100
|
+
|
|
101
|
+
# Option B: Use Databricks (production)
|
|
102
|
+
export MODEL_PROVIDER=databricks
|
|
103
|
+
export DATABRICKS_API_BASE=https://your-workspace.databricks.net
|
|
104
|
+
export DATABRICKS_API_KEY=your-api-key
|
|
105
|
+
|
|
106
|
+
# Option C: Use OpenRouter (100+ models)
|
|
107
|
+
export MODEL_PROVIDER=openrouter
|
|
108
|
+
export OPENROUTER_API_KEY=your-api-key
|
|
109
|
+
export OPENROUTER_MODEL=anthropic/claude-3.5-sonnet
|
|
110
|
+
```
|
|
111
|
+
|
|
112
|
+
### 3ïļâĢ Start the Proxy
|
|
113
|
+
```bash
|
|
114
|
+
lynkr start
|
|
115
|
+
# Server running at http://localhost:8080
|
|
116
|
+
```
|
|
117
|
+
|
|
118
|
+
### 4ïļâĢ Connect Claude Code CLI
|
|
119
|
+
```bash
|
|
120
|
+
# Point Claude Code CLI to Lynkr
|
|
121
|
+
export ANTHROPIC_BASE_URL=http://localhost:8080
|
|
122
|
+
export ANTHROPIC_API_KEY=dummy # Ignored by Lynkr, but required by CLI
|
|
123
|
+
|
|
124
|
+
# Start coding!
|
|
125
|
+
claude "Hello, world!"
|
|
126
|
+
```
|
|
127
|
+
|
|
128
|
+
### ð You're Done!
|
|
129
|
+
Claude Code CLI now works with your chosen provider.
|
|
130
|
+
|
|
131
|
+
**Next steps:**
|
|
132
|
+
- ð [Configuration Guide](#configuration-reference) - Customize settings
|
|
133
|
+
- ð [Production Setup](#production-hardening-features) - Deploy to production
|
|
134
|
+
- ð° [Token Optimization](#token-optimization) - Enable 60-80% cost savings
|
|
135
|
+
- ð§ [MCP Integration](#integrating-mcp-servers) - Add custom tools
|
|
51
136
|
|
|
52
137
|
---
|
|
53
138
|
|
|
@@ -64,6 +149,7 @@ Key highlights:
|
|
|
64
149
|
- **Workspace awareness** â Local repo indexing, `CLAUDE.md` summaries, language-aware navigation, and Git helpers mirror core Claude Code workflows.
|
|
65
150
|
- **Model Context Protocol (MCP) orchestration** â Automatically discovers MCP manifests, launches JSON-RPC 2.0 servers, and re-exposes their tools inside the proxy.
|
|
66
151
|
- **Prompt caching** â Re-uses repeated prompts to reduce latency and token consumption, matching Claude's own cache semantics.
|
|
152
|
+
- **Smart tool selection** â Intelligently filters tools based on request type (conversational, coding, research), reducing tool tokens by 50-70% for simple queries. Automatically enabled across all providers.
|
|
67
153
|
- **Policy enforcement** â Environment-driven guardrails control Git operations, test requirements, web fetch fallbacks, and sandboxing rules. Input validation and consistent error handling ensure API reliability.
|
|
68
154
|
|
|
69
155
|
The result is a production-ready, self-hosted alternative that stays close to Anthropic's ergonomics while providing enterprise-grade reliability, observability, and performance.
|
|
@@ -74,7 +160,7 @@ Further documentation and usage notes are available on [DeepWiki](https://deepwi
|
|
|
74
160
|
|
|
75
161
|
---
|
|
76
162
|
|
|
77
|
-
## Supported
|
|
163
|
+
## Supported AI Model Providers (Databricks, OpenRouter, Ollama, Azure, llama.cpp)
|
|
78
164
|
|
|
79
165
|
Lynkr supports multiple AI model providers, giving you flexibility in choosing the right model for your needs:
|
|
80
166
|
|
|
@@ -84,7 +170,7 @@ Lynkr supports multiple AI model providers, giving you flexibility in choosing t
|
|
|
84
170
|
|----------|--------------|------------------|----------|
|
|
85
171
|
| **Databricks** (Default) | `MODEL_PROVIDER=databricks` | Claude Sonnet 4.5, Claude Opus 4.5 | Production use, enterprise deployment |
|
|
86
172
|
| **OpenAI** | `MODEL_PROVIDER=openai` | GPT-5, GPT-5.2, GPT-4o, GPT-4o-mini, GPT-4-turbo, o1, o1-mini | Direct OpenAI API access |
|
|
87
|
-
| **Azure OpenAI** | `MODEL_PROVIDER=azure-openai` | GPT-5, GPT-5.2,GPT-4o, GPT-4o-mini, GPT-5, o1, o3 | Azure integration, Microsoft ecosystem |
|
|
173
|
+
| **Azure OpenAI** | `MODEL_PROVIDER=azure-openai` | GPT-5, GPT-5.2,GPT-4o, GPT-4o-mini, GPT-5, o1, o3, Kimi-K2 | Azure integration, Microsoft ecosystem |
|
|
88
174
|
| **Azure Anthropic** | `MODEL_PROVIDER=azure-anthropic` | Claude Sonnet 4.5, Claude Opus 4.5 | Azure-hosted Claude models |
|
|
89
175
|
| **OpenRouter** | `MODEL_PROVIDER=openrouter` | 100+ models (GPT-4o, Claude, Gemini, Llama, etc.) | Model flexibility, cost optimization |
|
|
90
176
|
| **Ollama** (Local) | `MODEL_PROVIDER=ollama` | Llama 3.1, Qwen2.5, Mistral, CodeLlama | Local/offline use, privacy, no API costs |
|
|
@@ -113,15 +199,8 @@ Lynkr supports multiple AI model providers, giving you flexibility in choosing t
|
|
|
113
199
|
|
|
114
200
|
### **Azure OpenAI Specific Models**
|
|
115
201
|
|
|
116
|
-
When using `MODEL_PROVIDER=azure-openai`, you can deploy any of
|
|
202
|
+
When using `MODEL_PROVIDER=azure-openai`, you can deploy any of the models in azure ai foundry:
|
|
117
203
|
|
|
118
|
-
| Model | Deployment Name | Capabilities | Best For |
|
|
119
|
-
|-------|----------------|--------------|----------|
|
|
120
|
-
| **GPT-4o** | `gpt-4o` | Text, vision, function calling | General-purpose, multimodal tasks |
|
|
121
|
-
| **GPT-4o-mini** | `gpt-4o-mini` | Text, function calling | Fast responses, cost-effective |
|
|
122
|
-
| **GPT-5** | `gpt-5-chat` or custom | Advanced reasoning, longer context | Complex problem-solving |
|
|
123
|
-
| **o1-preview** | `o1-preview` | Deep reasoning, chain of thought | Mathematical, logic problems |
|
|
124
|
-
| **o3-mini** | `o3-mini` | Efficient reasoning | Fast reasoning tasks |
|
|
125
204
|
|
|
126
205
|
**Note**: Azure OpenAI deployment names are configurable via `AZURE_OPENAI_DEPLOYMENT` environment variable.
|
|
127
206
|
|
|
@@ -175,6 +254,68 @@ FALLBACK_PROVIDER=databricks # or azure-openai, openrouter, azure-anthropic
|
|
|
175
254
|
|
|
176
255
|
---
|
|
177
256
|
|
|
257
|
+
## Lynkr vs Native Claude Code
|
|
258
|
+
|
|
259
|
+
**Feature Comparison for Developers and Enterprises**
|
|
260
|
+
|
|
261
|
+
| Feature | Native Claude Code | Lynkr (This Project) |
|
|
262
|
+
|---------|-------------------|----------------------|
|
|
263
|
+
| **Provider Lock-in** | â Anthropic only | â
7+ providers (Databricks, OpenRouter, Ollama, Azure, OpenAI, llama.cpp) |
|
|
264
|
+
| **Token Costs** | ðļ Full price | â
**60-80% savings** (built-in optimization) |
|
|
265
|
+
| **Local Models** | â Cloud-only | â
**Ollama, llama.cpp** (offline support) |
|
|
266
|
+
| **Self-Hosted** | â Managed service | â
**Full control** (open-source) |
|
|
267
|
+
| **MCP Support** | Limited | â
**Full orchestration** with auto-discovery |
|
|
268
|
+
| **Prompt Caching** | Basic | â
**Advanced caching** with deduplication |
|
|
269
|
+
| **Token Optimization** | â None | â
**6 phases** (smart tool selection, history compression, tool truncation, dynamic prompts) |
|
|
270
|
+
| **Enterprise Features** | Limited | â
**Circuit breakers, load shedding, metrics, K8s-ready** |
|
|
271
|
+
| **Privacy** | âïļ Cloud-dependent | â
**Self-hosted** (air-gapped deployments possible) |
|
|
272
|
+
| **Cost Transparency** | Hidden usage | â
**Full tracking** (per-request, per-session, Prometheus metrics) |
|
|
273
|
+
| **Hybrid Routing** | â Not supported | â
**Automatic** (simple â Ollama, complex â Databricks) |
|
|
274
|
+
| **Health Checks** | â N/A | â
**Kubernetes-ready** (liveness, readiness, startup probes) |
|
|
275
|
+
| **License** | Proprietary | â
**Apache 2.0** (open-source) |
|
|
276
|
+
|
|
277
|
+
### Cost Comparison Example
|
|
278
|
+
|
|
279
|
+
**Scenario:** 100,000 API requests/month, average 50k input tokens, 2k output tokens per request
|
|
280
|
+
|
|
281
|
+
| Provider | Without Lynkr | With Lynkr (60% savings) | Monthly Savings |
|
|
282
|
+
|----------|---------------|-------------------------|-----------------|
|
|
283
|
+
| **Claude Sonnet 4.5** (via Databricks) | $16,000 | $6,400 | **$9,600** |
|
|
284
|
+
| **GPT-4o** (via OpenRouter) | $12,000 | $4,800 | **$7,200** |
|
|
285
|
+
| **Ollama (Local)** | API costs + compute | Local compute only | **$12,000+** |
|
|
286
|
+
|
|
287
|
+
### Why Choose Lynkr?
|
|
288
|
+
|
|
289
|
+
**For Developers:**
|
|
290
|
+
- ð Use free local models (Ollama) for development
|
|
291
|
+
- ð§ Switch providers without code changes
|
|
292
|
+
- ð Faster iteration with local models
|
|
293
|
+
|
|
294
|
+
**For Enterprises:**
|
|
295
|
+
- ð° Massive cost savings (ROI: $77k-115k/year)
|
|
296
|
+
- ðĒ Self-hosted = data stays private
|
|
297
|
+
- ð Full observability and metrics
|
|
298
|
+
- ðĄïļ Production-ready reliability features
|
|
299
|
+
|
|
300
|
+
**For Privacy-Focused Teams:**
|
|
301
|
+
- ð Air-gapped deployments possible
|
|
302
|
+
- ð All data stays on-premises
|
|
303
|
+
- ð No third-party API calls required
|
|
304
|
+
|
|
305
|
+
---
|
|
306
|
+
|
|
307
|
+
## ð Ready to Get Started?
|
|
308
|
+
|
|
309
|
+
**Reduce your Claude Code costs by 60-80% in under 3 minutes:**
|
|
310
|
+
|
|
311
|
+
1. â **[Star this repo](https://github.com/vishalveerareddy123/Lynkr)** to show support and stay updated
|
|
312
|
+
2. ð **[Follow the Quick Start Guide](#quick-start-3-minutes)** to install and configure Lynkr
|
|
313
|
+
3. ðŽ **[Join our Discord](https://discord.gg/qF7DDxrX)** for real-time community support
|
|
314
|
+
4. ðŽ **[Join the Discussion](https://github.com/vishalveerareddy123/Lynkr/discussions)** for questions and ideas
|
|
315
|
+
5. ð **[Report Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** to help improve Lynkr
|
|
316
|
+
|
|
317
|
+
---
|
|
318
|
+
|
|
178
319
|
## Core Capabilities
|
|
179
320
|
|
|
180
321
|
### Long-Term Memory System (Titans-Inspired)
|
|
@@ -275,9 +416,9 @@ See [MEMORY_SYSTEM.md](MEMORY_SYSTEM.md) for complete documentation and [QUICKST
|
|
|
275
416
|
|
|
276
417
|
---
|
|
277
418
|
|
|
278
|
-
## Production
|
|
419
|
+
## Production-Ready Features for Enterprise Deployment
|
|
279
420
|
|
|
280
|
-
Lynkr includes comprehensive production-
|
|
421
|
+
Lynkr includes comprehensive production-hardened features designed for reliability, observability, and security in enterprise environments. These features add minimal performance overhead while providing robust operational capabilities for mission-critical AI deployments.
|
|
281
422
|
|
|
282
423
|
### Reliability & Resilience
|
|
283
424
|
|
|
@@ -445,6 +586,59 @@ Lynkr includes comprehensive production-ready features designed for reliability,
|
|
|
445
586
|
âââââââââââââââââââââ
|
|
446
587
|
```
|
|
447
588
|
|
|
589
|
+
### Request Flow Diagram
|
|
590
|
+
|
|
591
|
+
```mermaid
|
|
592
|
+
graph TB
|
|
593
|
+
A[Claude Code CLI] -->|HTTP POST /v1/messages| B[Lynkr Proxy Server]
|
|
594
|
+
B --> C{Middleware Stack}
|
|
595
|
+
C -->|Load Shedding| D{Load OK?}
|
|
596
|
+
D -->|Yes| E[Request Logging]
|
|
597
|
+
D -->|No| Z1[503 Service Unavailable]
|
|
598
|
+
E --> F[Metrics Collection]
|
|
599
|
+
F --> G[Input Validation]
|
|
600
|
+
G --> H[Orchestrator]
|
|
601
|
+
|
|
602
|
+
H --> I{Check Prompt Cache}
|
|
603
|
+
I -->|Cache Hit| J[Return Cached Response]
|
|
604
|
+
I -->|Cache Miss| K{Determine Provider}
|
|
605
|
+
|
|
606
|
+
K -->|Simple 0-2 tools| L[Ollama Local]
|
|
607
|
+
K -->|Moderate 3-14 tools| M[OpenRouter / Azure]
|
|
608
|
+
K -->|Complex 15+ tools| N[Databricks]
|
|
609
|
+
|
|
610
|
+
L --> O[Circuit Breaker Check]
|
|
611
|
+
M --> O
|
|
612
|
+
N --> O
|
|
613
|
+
|
|
614
|
+
O -->|Closed| P{Provider API}
|
|
615
|
+
O -->|Open| Z2[Fallback Provider]
|
|
616
|
+
|
|
617
|
+
P -->|Databricks| Q1[Databricks API]
|
|
618
|
+
P -->|OpenRouter| Q2[OpenRouter API]
|
|
619
|
+
P -->|Ollama| Q3[Ollama Local]
|
|
620
|
+
P -->|Azure| Q4[Azure Anthropic API]
|
|
621
|
+
|
|
622
|
+
Q1 --> R[Response Processing]
|
|
623
|
+
Q2 --> R
|
|
624
|
+
Q3 --> R
|
|
625
|
+
Q4 --> R
|
|
626
|
+
Z2 --> R
|
|
627
|
+
|
|
628
|
+
R --> S[Format Conversion]
|
|
629
|
+
S --> T[Cache Response]
|
|
630
|
+
T --> U[Update Metrics]
|
|
631
|
+
U --> V[Return to Client]
|
|
632
|
+
J --> V
|
|
633
|
+
|
|
634
|
+
style B fill:#4a90e2,stroke:#333,stroke-width:2px,color:#fff
|
|
635
|
+
style H fill:#7b68ee,stroke:#333,stroke-width:2px,color:#fff
|
|
636
|
+
style K fill:#f39c12,stroke:#333,stroke-width:2px
|
|
637
|
+
style P fill:#2ecc71,stroke:#333,stroke-width:2px,color:#fff
|
|
638
|
+
```
|
|
639
|
+
|
|
640
|
+
**Key Components:**
|
|
641
|
+
|
|
448
642
|
- **`src/api/router.js`** â Express routes that accept Claude-compatible `/v1/messages` requests.
|
|
449
643
|
- **`src/api/middleware/*`** â Production middleware stack:
|
|
450
644
|
- `load-shedding.js` â Proactive overload protection with resource monitoring
|
|
@@ -465,7 +659,7 @@ Lynkr includes comprehensive production-ready features designed for reliability,
|
|
|
465
659
|
|
|
466
660
|
---
|
|
467
661
|
|
|
468
|
-
## Getting Started
|
|
662
|
+
## Getting Started: Installation & Setup Guide
|
|
469
663
|
|
|
470
664
|
### Prerequisites
|
|
471
665
|
|
|
@@ -1569,13 +1763,86 @@ If performance is degraded:
|
|
|
1569
1763
|
|
|
1570
1764
|
---
|
|
1571
1765
|
|
|
1572
|
-
## FAQ
|
|
1766
|
+
## Frequently Asked Questions (FAQ)
|
|
1767
|
+
|
|
1768
|
+
<details>
|
|
1769
|
+
<summary><strong>Q: Can I use Lynkr with the official Claude Code CLI?</strong></summary>
|
|
1770
|
+
|
|
1771
|
+
**A:** Yes! Lynkr is designed as a drop-in replacement for Anthropic's backend. Simply set `ANTHROPIC_BASE_URL` to point to your Lynkr server:
|
|
1772
|
+
|
|
1773
|
+
```bash
|
|
1774
|
+
export ANTHROPIC_BASE_URL=http://localhost:8080
|
|
1775
|
+
export ANTHROPIC_API_KEY=dummy # Required by CLI, but ignored by Lynkr
|
|
1776
|
+
claude "Your prompt here"
|
|
1777
|
+
```
|
|
1778
|
+
|
|
1779
|
+
*Related searches: Claude Code proxy setup, Claude Code alternative backend, self-hosted Claude Code*
|
|
1780
|
+
</details>
|
|
1781
|
+
|
|
1782
|
+
<details>
|
|
1783
|
+
<summary><strong>Q: How much money does Lynkr save on token costs?</strong></summary>
|
|
1784
|
+
|
|
1785
|
+
**A:** With all 5 optimization phases enabled, Lynkr achieves **60-80% token reduction**:
|
|
1786
|
+
|
|
1787
|
+
- **Normal workloads:** 20-30% reduction
|
|
1788
|
+
- **Memory-heavy:** 30-45% reduction
|
|
1789
|
+
- **Tool-heavy:** 25-35% reduction
|
|
1790
|
+
- **Long conversations:** 35-40% reduction
|
|
1791
|
+
|
|
1792
|
+
At 100k requests/month, this translates to **$6,400-9,600/month savings** ($77k-115k/year).
|
|
1793
|
+
|
|
1794
|
+
*Related searches: Claude Code cost reduction, token optimization strategies, AI cost savings*
|
|
1795
|
+
</details>
|
|
1796
|
+
|
|
1797
|
+
<details>
|
|
1798
|
+
<summary><strong>Q: Can I use Ollama models with Lynkr?</strong></summary>
|
|
1799
|
+
|
|
1800
|
+
**A:** Yes! Set `MODEL_PROVIDER=ollama` and ensure Ollama is running:
|
|
1801
|
+
|
|
1802
|
+
```bash
|
|
1803
|
+
export MODEL_PROVIDER=ollama
|
|
1804
|
+
export OLLAMA_MODEL=llama3.1:8b # or qwen2.5-coder, mistral, etc.
|
|
1805
|
+
lynkr start
|
|
1806
|
+
```
|
|
1807
|
+
|
|
1808
|
+
**Best Ollama models for coding:**
|
|
1809
|
+
- `qwen2.5-coder:latest` (7B) - Optimized for code generation
|
|
1810
|
+
- `llama3.1:8b` - General-purpose, good balance
|
|
1811
|
+
- `codellama:13b` - Higher quality, needs more RAM
|
|
1812
|
+
|
|
1813
|
+
*Related searches: Ollama Claude Code integration, local LLM for coding, offline AI assistant*
|
|
1814
|
+
</details>
|
|
1815
|
+
|
|
1816
|
+
<details>
|
|
1817
|
+
<summary><strong>Q: What are the performance differences between providers?</strong></summary>
|
|
1818
|
+
|
|
1819
|
+
**A:** Performance comparison:
|
|
1820
|
+
|
|
1821
|
+
| Provider | Latency | Cost | Tool Support | Best For |
|
|
1822
|
+
|----------|---------|------|--------------|----------|
|
|
1823
|
+
| **Databricks/Azure** | 500ms-2s | $$$ | Excellent | Enterprise production |
|
|
1824
|
+
| **OpenRouter** | 300ms-1.5s | $$ | Excellent | Flexibility, cost optimization |
|
|
1825
|
+
| **Ollama** | 100-500ms | Free | Limited | Local development, privacy |
|
|
1826
|
+
| **llama.cpp** | 50-300ms | Free | Limited | Maximum performance |
|
|
1827
|
+
|
|
1828
|
+
*Related searches: LLM provider comparison, Claude Code performance, best AI model for coding*
|
|
1829
|
+
</details>
|
|
1573
1830
|
|
|
1574
|
-
|
|
1575
|
-
|
|
1831
|
+
<details>
|
|
1832
|
+
<summary><strong>Q: Is this an exact drop-in replacement for Anthropic's backend?</strong></summary>
|
|
1576
1833
|
|
|
1577
|
-
**
|
|
1578
|
-
|
|
1834
|
+
**A:** No. Lynkr mimics key Claude Code CLI behaviors but is intentionally extensible. Some premium Anthropic features (Claude Skills, hosted sandboxes) are out of scope for self-hosted deployments.
|
|
1835
|
+
|
|
1836
|
+
**What works:** Core workflows (chat, tool calls, repo operations, Git integration, MCP servers)
|
|
1837
|
+
**What's different:** Self-hosted = you control infrastructure, security, and scaling
|
|
1838
|
+
|
|
1839
|
+
*Related searches: Claude Code alternatives, self-hosted AI coding assistant*
|
|
1840
|
+
</details>
|
|
1841
|
+
|
|
1842
|
+
<details>
|
|
1843
|
+
<summary><strong>Q: How does Lynkr compare with Anthropic's hosted backend?</strong></summary>
|
|
1844
|
+
|
|
1845
|
+
**A:** Functionally they overlap on core workflows (chat, tool calls, repo ops), but differ in scope:
|
|
1579
1846
|
|
|
1580
1847
|
| Capability | Anthropic Hosted Backend | Claude Code Proxy |
|
|
1581
1848
|
|------------|-------------------------|-------------------|
|
|
@@ -1591,47 +1858,75 @@ A: Functionally they overlap on core workflows (chat, tool calls, repo ops), but
|
|
|
1591
1858
|
|
|
1592
1859
|
The proxy is ideal when you need local control, custom tooling, or non-Anthropic model endpoints. If you require fully managed browsing, secure sandboxes, or enterprise SLA, stick with the hosted backend.
|
|
1593
1860
|
|
|
1594
|
-
|
|
1595
|
-
|
|
1861
|
+
*Related searches: Anthropic API alternatives, Claude Code self-hosted vs cloud*
|
|
1862
|
+
</details>
|
|
1863
|
+
|
|
1864
|
+
<details>
|
|
1865
|
+
<summary><strong>Q: Does prompt caching work like Anthropic's cache?</strong></summary>
|
|
1596
1866
|
|
|
1597
|
-
**
|
|
1598
|
-
A: Yes. Place multiple manifests in `MCP_MANIFEST_DIRS`. Each server is launched and its tools are namespaced.
|
|
1867
|
+
**A:** Yes, functionally similar. Identical messages (model, messages, tools, sampling params) reuse cached responses until TTL expires. Tool-invoking turns skip caching.
|
|
1599
1868
|
|
|
1600
|
-
|
|
1601
|
-
A: Set `WORKSPACE_ROOT` before starting the proxy. The indexer and filesystem tools operate relative to that path.
|
|
1869
|
+
Lynkr's caching implementation matches Claude's cache semantics, providing the same latency and cost benefits.
|
|
1602
1870
|
|
|
1603
|
-
|
|
1604
|
-
|
|
1871
|
+
*Related searches: Prompt caching Claude, LLM response caching, reduce AI API costs*
|
|
1872
|
+
</details>
|
|
1605
1873
|
|
|
1606
|
-
|
|
1607
|
-
|
|
1874
|
+
<details>
|
|
1875
|
+
<summary><strong>Q: Can I connect multiple MCP servers?</strong></summary>
|
|
1608
1876
|
|
|
1609
|
-
**
|
|
1610
|
-
A:
|
|
1611
|
-
- **Databricks/Azure Anthropic**: ~500ms-2s latency, cloud-hosted, pay-per-token, full tool support, enterprise features
|
|
1612
|
-
- **OpenRouter**: ~300ms-1.5s latency, cloud-hosted, competitive pricing ($0.15/1M for GPT-4o-mini), 100+ models, full tool support
|
|
1613
|
-
- **Ollama**: ~100-500ms first token, runs locally, free, limited tool support (model-dependent)
|
|
1877
|
+
**A:** Yes! Place multiple manifests in `MCP_MANIFEST_DIRS`. Each server is launched and its tools are namespaced.
|
|
1614
1878
|
|
|
1615
|
-
|
|
1879
|
+
```bash
|
|
1880
|
+
export MCP_MANIFEST_DIRS=/path/to/manifests:/another/path
|
|
1881
|
+
```
|
|
1882
|
+
|
|
1883
|
+
Lynkr automatically discovers and orchestrates all MCP servers.
|
|
1884
|
+
|
|
1885
|
+
*Related searches: MCP server integration, Model Context Protocol setup, multiple MCP servers*
|
|
1886
|
+
</details>
|
|
1887
|
+
|
|
1888
|
+
<details>
|
|
1889
|
+
<summary><strong>Q: How do I change the workspace root?</strong></summary>
|
|
1890
|
+
|
|
1891
|
+
**A:** Set `WORKSPACE_ROOT` before starting the proxy:
|
|
1892
|
+
|
|
1893
|
+
```bash
|
|
1894
|
+
export WORKSPACE_ROOT=/path/to/your/project
|
|
1895
|
+
lynkr start
|
|
1896
|
+
```
|
|
1616
1897
|
|
|
1617
|
-
|
|
1618
|
-
A: llama.cpp is a high-performance C++ inference engine for running large language models locally. Unlike Ollama (which is an application with its own model format), llama.cpp:
|
|
1619
|
-
- **Runs any GGUF model** from HuggingFace directly
|
|
1620
|
-
- **Provides better performance** through optimized C++ code
|
|
1621
|
-
- **Uses less memory** with advanced quantization options (Q2_K to Q8_0)
|
|
1622
|
-
- **Supports more GPU backends** (CUDA, Metal, ROCm, Vulkan, SYCL)
|
|
1623
|
-
- **Uses OpenAI-compatible API** making integration seamless
|
|
1898
|
+
The indexer and filesystem tools operate relative to this path.
|
|
1624
1899
|
|
|
1625
|
-
|
|
1626
|
-
|
|
1627
|
-
|
|
1900
|
+
*Related searches: Claude Code workspace configuration, change working directory*
|
|
1901
|
+
</details>
|
|
1902
|
+
|
|
1903
|
+
<details>
|
|
1904
|
+
<summary><strong>Q: What is llama.cpp and when should I use it over Ollama?</strong></summary>
|
|
1905
|
+
|
|
1906
|
+
**A:** llama.cpp is a high-performance C++ inference engine for running LLMs locally. Compared to Ollama:
|
|
1907
|
+
|
|
1908
|
+
**llama.cpp advantages:**
|
|
1909
|
+
- â
**Faster inference** - Optimized C++ code
|
|
1910
|
+
- â
**Less memory** - Advanced quantization (Q2_K to Q8_0)
|
|
1911
|
+
- â
**Any GGUF model** - Direct HuggingFace support
|
|
1912
|
+
- â
**More GPU backends** - CUDA, Metal, ROCm, Vulkan, SYCL
|
|
1913
|
+
- â
**Fine-grained control** - Context length, GPU layers, etc.
|
|
1914
|
+
|
|
1915
|
+
**Use llama.cpp when you need:**
|
|
1916
|
+
- Maximum inference speed and minimum memory
|
|
1917
|
+
- Specific quantization levels
|
|
1628
1918
|
- GGUF models not packaged for Ollama
|
|
1629
|
-
- Fine-grained control over model parameters (context length, GPU layers, etc.)
|
|
1630
1919
|
|
|
1631
|
-
Use Ollama when
|
|
1920
|
+
**Use Ollama when:** You prefer easier setup and don't need the extra control.
|
|
1921
|
+
|
|
1922
|
+
*Related searches: llama.cpp vs Ollama, GGUF model inference, local LLM performance*
|
|
1923
|
+
</details>
|
|
1924
|
+
|
|
1925
|
+
<details>
|
|
1926
|
+
<summary><strong>Q: How do I set up llama.cpp with Lynkr?</strong></summary>
|
|
1927
|
+
|
|
1928
|
+
**A:** Follow these steps to integrate llama.cpp with Lynkr:
|
|
1632
1929
|
|
|
1633
|
-
**Q: How do I set up llama.cpp with Lynkr?**
|
|
1634
|
-
A:
|
|
1635
1930
|
```bash
|
|
1636
1931
|
# 1. Build llama.cpp (or download pre-built binary)
|
|
1637
1932
|
git clone https://github.com/ggerganov/llama.cpp
|
|
@@ -1646,124 +1941,310 @@ wget https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/qwe
|
|
|
1646
1941
|
# 4. Configure Lynkr
|
|
1647
1942
|
export MODEL_PROVIDER=llamacpp
|
|
1648
1943
|
export LLAMACPP_ENDPOINT=http://localhost:8080
|
|
1649
|
-
|
|
1944
|
+
lynkr start
|
|
1650
1945
|
```
|
|
1651
1946
|
|
|
1652
|
-
|
|
1653
|
-
|
|
1654
|
-
|
|
1655
|
-
|
|
1656
|
-
|
|
1657
|
-
|
|
1658
|
-
|
|
1947
|
+
*Related searches: llama.cpp setup, GGUF model deployment, llama-server configuration*
|
|
1948
|
+
</details>
|
|
1949
|
+
|
|
1950
|
+
<details>
|
|
1951
|
+
<summary><strong>Q: What is OpenRouter and why should I use it?</strong></summary>
|
|
1952
|
+
|
|
1953
|
+
**A:** OpenRouter is a unified API gateway that provides access to **100+ AI models** from multiple providers (OpenAI, Anthropic, Google, Meta, Mistral, etc.) through a single API key.
|
|
1954
|
+
|
|
1955
|
+
**Key benefits:**
|
|
1956
|
+
- â
**No vendor lock-in** - Switch models without changing code
|
|
1957
|
+
- â
**Competitive pricing** - Often cheaper than direct provider access (e.g., GPT-4o-mini at $0.15/$0.60 per 1M tokens)
|
|
1958
|
+
- â
**Automatic fallbacks** - If your primary model is unavailable, OpenRouter tries alternatives
|
|
1959
|
+
- â
**Pay-as-you-go** - No monthly fees or subscriptions
|
|
1960
|
+
- â
**Full tool calling support** - Compatible with Claude Code CLI workflows
|
|
1961
|
+
|
|
1962
|
+
*Related searches: OpenRouter API gateway, multi-model LLM access, AI provider aggregator*
|
|
1963
|
+
</details>
|
|
1964
|
+
|
|
1965
|
+
<details>
|
|
1966
|
+
<summary><strong>Q: How do I get started with OpenRouter?</strong></summary>
|
|
1967
|
+
|
|
1968
|
+
**A:** Quick OpenRouter setup (5 minutes):
|
|
1659
1969
|
|
|
1660
|
-
**Q: How do I get started with OpenRouter?**
|
|
1661
|
-
A:
|
|
1662
1970
|
1. Visit https://openrouter.ai and sign in (GitHub, Google, or email)
|
|
1663
1971
|
2. Go to https://openrouter.ai/keys and create an API key
|
|
1664
1972
|
3. Add credits to your account (minimum $5, pay-as-you-go)
|
|
1665
1973
|
4. Configure Lynkr:
|
|
1666
|
-
```
|
|
1667
|
-
MODEL_PROVIDER=openrouter
|
|
1668
|
-
OPENROUTER_API_KEY=sk-or-v1-...
|
|
1669
|
-
OPENROUTER_MODEL=openai/gpt-4o-mini
|
|
1974
|
+
```bash
|
|
1975
|
+
export MODEL_PROVIDER=openrouter
|
|
1976
|
+
export OPENROUTER_API_KEY=sk-or-v1-...
|
|
1977
|
+
export OPENROUTER_MODEL=openai/gpt-4o-mini
|
|
1978
|
+
lynkr start
|
|
1670
1979
|
```
|
|
1671
|
-
5.
|
|
1980
|
+
5. Connect Claude CLI and start coding!
|
|
1981
|
+
|
|
1982
|
+
*Related searches: OpenRouter API key setup, OpenRouter getting started, OpenRouter credit system*
|
|
1983
|
+
</details>
|
|
1984
|
+
|
|
1985
|
+
<details>
|
|
1986
|
+
<summary><strong>Q: Which OpenRouter model should I use?</strong></summary>
|
|
1672
1987
|
|
|
1673
|
-
**
|
|
1674
|
-
|
|
1675
|
-
- **Budget-conscious
|
|
1676
|
-
- **Best quality
|
|
1677
|
-
- **Free tier
|
|
1678
|
-
- **Balanced
|
|
1988
|
+
**A:** Popular choices by use case:
|
|
1989
|
+
|
|
1990
|
+
- **Budget-conscious:** `openai/gpt-4o-mini` ($0.15/$0.60 per 1M tokens) - Best value for code tasks
|
|
1991
|
+
- **Best quality:** `anthropic/claude-3.5-sonnet` - Claude's most capable model
|
|
1992
|
+
- **Free tier:** `meta-llama/llama-3.1-8b-instruct:free` - Completely free (rate-limited)
|
|
1993
|
+
- **Balanced:** `google/gemini-pro-1.5` - Large context window, good performance
|
|
1679
1994
|
|
|
1680
1995
|
See https://openrouter.ai/models for the complete list with pricing and features.
|
|
1681
1996
|
|
|
1682
|
-
|
|
1683
|
-
|
|
1684
|
-
|
|
1685
|
-
|
|
1686
|
-
|
|
1687
|
-
|
|
1997
|
+
*Related searches: best OpenRouter models for coding, cheapest OpenRouter models, OpenRouter model comparison*
|
|
1998
|
+
</details>
|
|
1999
|
+
|
|
2000
|
+
<details>
|
|
2001
|
+
<summary><strong>Q: How do I use OpenAI directly with Lynkr?</strong></summary>
|
|
2002
|
+
|
|
2003
|
+
**A:** Set `MODEL_PROVIDER=openai` and configure your API key:
|
|
2004
|
+
|
|
2005
|
+
```bash
|
|
2006
|
+
export MODEL_PROVIDER=openai
|
|
2007
|
+
export OPENAI_API_KEY=sk-your-api-key
|
|
2008
|
+
export OPENAI_MODEL=gpt-4o # or gpt-4o-mini, o1-preview, etc.
|
|
2009
|
+
lynkr start
|
|
1688
2010
|
```
|
|
2011
|
+
|
|
1689
2012
|
Then start Lynkr and connect Claude CLI as usual. All requests will be routed to OpenAI's API with automatic format conversion.
|
|
1690
2013
|
|
|
1691
|
-
|
|
1692
|
-
|
|
1693
|
-
|
|
1694
|
-
|
|
1695
|
-
|
|
2014
|
+
*Related searches: OpenAI API with Claude Code, GPT-4o integration, OpenAI proxy setup*
|
|
2015
|
+
</details>
|
|
2016
|
+
|
|
2017
|
+
<details>
|
|
2018
|
+
<summary><strong>Q: What's the difference between OpenAI, Azure OpenAI, and OpenRouter?</strong></summary>
|
|
2019
|
+
|
|
2020
|
+
**A:** Here's how they compare:
|
|
1696
2021
|
|
|
1697
|
-
|
|
2022
|
+
- **OpenAI** - Direct access to OpenAI's API. Simplest setup, lowest latency, pay-as-you-go billing directly with OpenAI.
|
|
2023
|
+
- **Azure OpenAI** - OpenAI models hosted on Azure infrastructure. Enterprise features (private endpoints, data residency, Azure AD integration), billed through Azure.
|
|
2024
|
+
- **OpenRouter** - Third-party API gateway providing access to 100+ models (including OpenAI). Competitive pricing, automatic fallbacks, single API key for multiple providers.
|
|
1698
2025
|
|
|
1699
|
-
**
|
|
1700
|
-
|
|
1701
|
-
-
|
|
1702
|
-
-
|
|
1703
|
-
- **Complex reasoning**: `o1-preview` â Advanced reasoning for math, logic, and complex problems
|
|
1704
|
-
- **Fast reasoning**: `o1-mini` â Efficient reasoning for coding and math tasks
|
|
2026
|
+
**Choose:**
|
|
2027
|
+
- OpenAI for simplicity and direct access
|
|
2028
|
+
- Azure OpenAI for enterprise requirements and compliance
|
|
2029
|
+
- OpenRouter for model flexibility and cost optimization
|
|
1705
2030
|
|
|
1706
|
-
|
|
1707
|
-
|
|
1708
|
-
|
|
1709
|
-
|
|
1710
|
-
|
|
2031
|
+
*Related searches: OpenAI vs Azure OpenAI, OpenRouter vs OpenAI pricing, enterprise AI deployment*
|
|
2032
|
+
</details>
|
|
2033
|
+
|
|
2034
|
+
<details>
|
|
2035
|
+
<summary><strong>Q: Which OpenAI model should I use?</strong></summary>
|
|
2036
|
+
|
|
2037
|
+
**A:** Recommended models by use case:
|
|
2038
|
+
|
|
2039
|
+
- **Best quality:** `gpt-4o` - Most capable, multimodal (text + vision), excellent tool calling
|
|
2040
|
+
- **Best value:** `gpt-4o-mini` - Fast, affordable ($0.15/$0.60 per 1M tokens), good for most tasks
|
|
2041
|
+
- **Complex reasoning:** `o1-preview` - Advanced reasoning for math, logic, and complex problems
|
|
2042
|
+
- **Fast reasoning:** `o1-mini` - Efficient reasoning for coding and math tasks
|
|
2043
|
+
|
|
2044
|
+
For coding tasks, `gpt-4o-mini` offers the best balance of cost and quality.
|
|
2045
|
+
|
|
2046
|
+
*Related searches: best GPT model for coding, o1-preview vs gpt-4o, OpenAI model selection*
|
|
2047
|
+
</details>
|
|
2048
|
+
|
|
2049
|
+
<details>
|
|
2050
|
+
<summary><strong>Q: Can I use OpenAI with the 3-tier hybrid routing?</strong></summary>
|
|
2051
|
+
|
|
2052
|
+
**A:** Yes! The recommended configuration uses multi-tier routing for optimal cost/performance:
|
|
2053
|
+
|
|
2054
|
+
- **Tier 1 (0-2 tools):** Ollama (free, local, fast)
|
|
2055
|
+
- **Tier 2 (3-14 tools):** OpenRouter (affordable, full tool support)
|
|
2056
|
+
- **Tier 3 (15+ tools):** Databricks (most capable, enterprise features)
|
|
1711
2057
|
|
|
1712
2058
|
This gives you the best of all worlds: free for simple tasks, affordable for moderate complexity, and enterprise-grade for heavy workloads.
|
|
1713
2059
|
|
|
1714
|
-
|
|
1715
|
-
|
|
2060
|
+
*Related searches: hybrid AI routing, multi-provider LLM strategy, cost-optimized AI architecture*
|
|
2061
|
+
</details>
|
|
2062
|
+
|
|
2063
|
+
<details>
|
|
2064
|
+
<summary><strong>Q: Where are session transcripts stored?</strong></summary>
|
|
2065
|
+
|
|
2066
|
+
**A:** Session transcripts are stored in SQLite at `data/sessions.db` (configurable via `SESSION_DB_PATH` environment variable).
|
|
1716
2067
|
|
|
1717
|
-
|
|
1718
|
-
A: Lynkr includes 14 production-ready features:
|
|
1719
|
-
- **Reliability:** Retry logic with exponential backoff, circuit breakers, load shedding, graceful shutdown, connection pooling
|
|
1720
|
-
- **Observability:** Metrics collection (Prometheus format), health checks (Kubernetes-ready), structured logging with request IDs
|
|
1721
|
-
- **Security:** Input validation, consistent error handling, path allowlisting, budget enforcement
|
|
2068
|
+
This allows for full conversation history, debugging, and audit trails.
|
|
1722
2069
|
|
|
1723
|
-
|
|
2070
|
+
*Related searches: Claude Code session storage, conversation history location, SQLite session database*
|
|
2071
|
+
</details>
|
|
1724
2072
|
|
|
1725
|
-
|
|
1726
|
-
|
|
2073
|
+
<details>
|
|
2074
|
+
<summary><strong>Q: What production hardening features are included?</strong></summary>
|
|
1727
2075
|
|
|
1728
|
-
**
|
|
1729
|
-
|
|
2076
|
+
**A:** Lynkr includes **14 production-ready features** across three categories:
|
|
2077
|
+
|
|
2078
|
+
**Reliability:**
|
|
2079
|
+
- Retry logic with exponential backoff
|
|
2080
|
+
- Circuit breakers
|
|
2081
|
+
- Load shedding
|
|
2082
|
+
- Graceful shutdown
|
|
2083
|
+
- Connection pooling
|
|
2084
|
+
|
|
2085
|
+
**Observability:**
|
|
2086
|
+
- Metrics collection (Prometheus format)
|
|
2087
|
+
- Health checks (Kubernetes-ready)
|
|
2088
|
+
- Structured logging with request IDs
|
|
2089
|
+
|
|
2090
|
+
**Security:**
|
|
2091
|
+
- Input validation
|
|
2092
|
+
- Consistent error handling
|
|
2093
|
+
- Path allowlisting
|
|
2094
|
+
- Budget enforcement
|
|
2095
|
+
|
|
2096
|
+
All features add **minimal overhead** (~7Ξs per request) and are battle-tested with **80 comprehensive tests**.
|
|
2097
|
+
|
|
2098
|
+
*Related searches: production-ready Node.js API, Kubernetes health checks, circuit breaker pattern*
|
|
2099
|
+
</details>
|
|
2100
|
+
|
|
2101
|
+
<details>
|
|
2102
|
+
<summary><strong>Q: How does circuit breaker protection work?</strong></summary>
|
|
2103
|
+
|
|
2104
|
+
**A:** Circuit breakers protect against cascading failures by implementing a state machine:
|
|
2105
|
+
|
|
2106
|
+
**CLOSED (Normal):** Requests pass through normally
|
|
2107
|
+
**OPEN (Failed):** After 5 consecutive failures, circuit opens and fails fast for 60 seconds (prevents overwhelming failing services)
|
|
2108
|
+
**HALF-OPEN (Testing):** Circuit automatically attempts recovery, testing if the service has recovered
|
|
2109
|
+
|
|
2110
|
+
This pattern prevents your application from wasting resources on requests likely to fail, and allows failing services time to recover.
|
|
2111
|
+
|
|
2112
|
+
*Related searches: circuit breaker pattern explained, microservices resilience, failure recovery strategies*
|
|
2113
|
+
</details>
|
|
2114
|
+
|
|
2115
|
+
<details>
|
|
2116
|
+
<summary><strong>Q: What metrics are collected and how can I access them?</strong></summary>
|
|
2117
|
+
|
|
2118
|
+
**A:** Lynkr collects comprehensive metrics including:
|
|
2119
|
+
|
|
2120
|
+
- Request counts and error rates
|
|
2121
|
+
- Latency percentiles (p50, p95, p99)
|
|
2122
|
+
- Token usage and costs
|
|
2123
|
+
- Circuit breaker states
|
|
2124
|
+
|
|
2125
|
+
**Access metrics via:**
|
|
1730
2126
|
- `/metrics/observability` - JSON format for dashboards
|
|
1731
2127
|
- `/metrics/prometheus` - Prometheus scraping
|
|
1732
2128
|
- `/metrics/circuit-breakers` - Circuit breaker state
|
|
1733
2129
|
|
|
1734
|
-
|
|
1735
|
-
|
|
1736
|
-
|
|
1737
|
-
|
|
1738
|
-
|
|
1739
|
-
|
|
2130
|
+
Perfect for Grafana dashboards, alerting, and production monitoring.
|
|
2131
|
+
|
|
2132
|
+
*Related searches: Prometheus metrics endpoint, Node.js observability, API metrics collection*
|
|
2133
|
+
</details>
|
|
2134
|
+
|
|
2135
|
+
<details>
|
|
2136
|
+
<summary><strong>Q: Is Lynkr production-ready?</strong></summary>
|
|
2137
|
+
|
|
2138
|
+
**A:** Yes! Lynkr is designed for production deployments with:
|
|
2139
|
+
|
|
2140
|
+
- â
**Zero-downtime deployments** (graceful shutdown)
|
|
2141
|
+
- â
**Kubernetes integration** (health checks, metrics)
|
|
2142
|
+
- â
**Horizontal scaling** (stateless design)
|
|
2143
|
+
- â
**Enterprise monitoring** (Prometheus, Grafana)
|
|
2144
|
+
- â
**Battle-tested reliability** (80 comprehensive tests, 100% pass rate)
|
|
2145
|
+
- â
**Minimal overhead** (<10Ξs middleware, <200MB memory)
|
|
2146
|
+
|
|
2147
|
+
Used in production environments with >100K requests/day.
|
|
2148
|
+
|
|
2149
|
+
*Related searches: production Node.js proxy, enterprise AI deployment, Kubernetes AI infrastructure*
|
|
2150
|
+
</details>
|
|
1740
2151
|
|
|
2152
|
+
<details>
|
|
2153
|
+
<summary><strong>Q: How do I deploy Lynkr to Kubernetes?</strong></summary>
|
|
1741
2154
|
|
|
2155
|
+
**A:** Deploy Lynkr to Kubernetes in 4 steps:
|
|
1742
2156
|
|
|
1743
|
-
**
|
|
1744
|
-
|
|
1745
|
-
|
|
1746
|
-
|
|
1747
|
-
|
|
1748
|
-
|
|
2157
|
+
1. **Build Docker image:** `docker build -t lynkr .`
|
|
2158
|
+
2. **Configure secrets:** Store environment variables in Kubernetes secrets
|
|
2159
|
+
3. **Deploy application:** Apply Kubernetes deployment manifests
|
|
2160
|
+
4. **Configure monitoring:** Set up Prometheus scraping and Grafana dashboards
|
|
2161
|
+
|
|
2162
|
+
**Key features for K8s:**
|
|
2163
|
+
- Health check endpoints (liveness, readiness, startup probes)
|
|
2164
|
+
- Graceful shutdown (respects SIGTERM)
|
|
2165
|
+
- Stateless design (horizontal pod autoscaling)
|
|
2166
|
+
- Prometheus metrics (ServiceMonitor ready)
|
|
1749
2167
|
|
|
1750
2168
|
The graceful shutdown and health check endpoints ensure zero-downtime deployments.
|
|
1751
2169
|
|
|
2170
|
+
*Related searches: Kubernetes deployment best practices, Docker proxy deployment, K8s health probes*
|
|
2171
|
+
</details>
|
|
2172
|
+
|
|
2173
|
+
### Still have questions?
|
|
2174
|
+
- ðŽ [Ask on GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)
|
|
2175
|
+
- ð [Report an Issue](https://github.com/vishalveerareddy123/Lynkr/issues)
|
|
2176
|
+
- ð [Read Full Documentation](https://deepwiki.com/vishalveerareddy123/Lynkr)
|
|
2177
|
+
|
|
1752
2178
|
---
|
|
1753
2179
|
|
|
1754
|
-
## References
|
|
2180
|
+
## References & Further Reading
|
|
1755
2181
|
|
|
1756
|
-
|
|
2182
|
+
### Academic & Technical Resources
|
|
1757
2183
|
|
|
2184
|
+
**Agentic AI Systems:**
|
|
1758
2185
|
- **Zhang et al. (2024)**. *Agentic Context Engineering*. arXiv:2510.04618. [arXiv](https://arxiv.org/abs/2510.04618)
|
|
1759
2186
|
|
|
2187
|
+
**Long-Term Memory & RAG:**
|
|
2188
|
+
- **Mohtashami & Jaggi (2023)**. *Landmark Attention: Random-Access Infinite Context Length for Transformers*. [arXiv](https://arxiv.org/abs/2305.16300)
|
|
2189
|
+
- **Google DeepMind (2024)**. *Titans: Learning to Memorize at Test Time*. [arXiv](https://arxiv.org/abs/2411.07043)
|
|
2190
|
+
|
|
1760
2191
|
For BibTeX citations, see [CITATIONS.bib](CITATIONS.bib).
|
|
1761
2192
|
|
|
2193
|
+
### Official Documentation
|
|
2194
|
+
|
|
2195
|
+
- [Claude Code CLI Documentation](https://docs.anthropic.com/en/docs/build-with-claude/claude-for-sheets) - Official Claude Code reference
|
|
2196
|
+
- [Model Context Protocol (MCP) Specification](https://spec.modelcontextprotocol.io/) - MCP protocol documentation
|
|
2197
|
+
- [Databricks Foundation Models](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) - Databricks LLM documentation
|
|
2198
|
+
- [Anthropic API Documentation](https://docs.anthropic.com/en/api/getting-started) - Claude API reference
|
|
2199
|
+
|
|
2200
|
+
### Related Projects & Tools
|
|
2201
|
+
|
|
2202
|
+
- [Ollama](https://ollama.ai/) - Local LLM runtime for running open-source models
|
|
2203
|
+
- [OpenRouter](https://openrouter.ai/) - Multi-provider LLM API gateway (100+ models)
|
|
2204
|
+
- [llama.cpp](https://github.com/ggerganov/llama.cpp) - High-performance C++ LLM inference engine
|
|
2205
|
+
- [LiteLLM](https://github.com/BerriAI/litellm) - Multi-provider LLM proxy (alternative approach)
|
|
2206
|
+
- [Awesome MCP Servers](https://github.com/punkpeye/awesome-mcp-servers) - Curated list of MCP server implementations
|
|
2207
|
+
|
|
2208
|
+
---
|
|
2209
|
+
|
|
2210
|
+
## Community & Adoption
|
|
2211
|
+
|
|
2212
|
+
### Get Involved
|
|
2213
|
+
|
|
2214
|
+
**â Star this repository** to show your support and help others discover Lynkr!
|
|
2215
|
+
|
|
2216
|
+
[](https://github.com/vishalveerareddy123/Lynkr)
|
|
2217
|
+
|
|
2218
|
+
### Support & Resources
|
|
2219
|
+
|
|
2220
|
+
- ð **Report Issues:** [GitHub Issues](https://github.com/vishalveerareddy123/Lynkr/issues) - Bug reports and feature requests
|
|
2221
|
+
- ðŽ **Discussions:** [GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions) - Questions, ideas, and community help
|
|
2222
|
+
- ðŽ **Discord Community:** [Join our Discord](https://discord.gg/qF7DDxrX) - Real-time chat and community support
|
|
2223
|
+
- ð **Documentation:** [DeepWiki](https://deepwiki.com/vishalveerareddy123/Lynkr) - Comprehensive guides and examples
|
|
2224
|
+
- ð§ **Contributing:** [CONTRIBUTING.md](CONTRIBUTING.md) - How to contribute to Lynkr
|
|
2225
|
+
|
|
2226
|
+
### Share Lynkr
|
|
2227
|
+
|
|
2228
|
+
Help spread the word about Lynkr:
|
|
2229
|
+
|
|
2230
|
+
- ðĶ [Share on Twitter](https://twitter.com/intent/tweet?text=Check%20out%20Lynkr%20-%20a%20production-ready%20Claude%20Code%20proxy%20with%20multi-provider%20support%20and%2060-80%25%20token%20savings!&url=https://github.com/vishalveerareddy123/Lynkr&hashtags=AI,ClaudeCode,LLM,OpenSource)
|
|
2231
|
+
- ðž [Share on LinkedIn](https://www.linkedin.com/sharing/share-offsite/?url=https://github.com/vishalveerareddy123/Lynkr)
|
|
2232
|
+
- ð° [Share on Hacker News](https://news.ycombinator.com/submitlink?u=https://github.com/vishalveerareddy123/Lynkr&t=Lynkr%20-%20Production-Ready%20Claude%20Code%20Proxy)
|
|
2233
|
+
- ðą [Share on Reddit](https://www.reddit.com/submit?url=https://github.com/vishalveerareddy123/Lynkr&title=Lynkr%20-%20Production-Ready%20Claude%20Code%20Proxy%20with%20Multi-Provider%20Support)
|
|
2234
|
+
|
|
2235
|
+
### Why Developers Choose Lynkr
|
|
2236
|
+
|
|
2237
|
+
- ð° **Massive cost savings** - Save 60-80% on token costs with built-in optimization
|
|
2238
|
+
- ð **Provider freedom** - Choose from 7+ LLM providers (Databricks, OpenRouter, Ollama, Azure, llama.cpp)
|
|
2239
|
+
- ð **Privacy & control** - Self-hosted, open-source, no vendor lock-in
|
|
2240
|
+
- ð **Production-ready** - Enterprise features: circuit breakers, metrics, health checks
|
|
2241
|
+
- ð ïļ **Active development** - Regular updates, responsive maintainers, growing community
|
|
2242
|
+
|
|
1762
2243
|
---
|
|
1763
2244
|
|
|
1764
2245
|
## License
|
|
1765
2246
|
|
|
1766
|
-
|
|
2247
|
+
Licensed under the Apache License, Version 2.0. See [LICENSE](LICENSE) for details.
|
|
1767
2248
|
|
|
1768
2249
|
---
|
|
1769
2250
|
|