lynkr 8.0.1 → 9.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.lynkr/telemetry.db +0 -0
- package/.lynkr/telemetry.db-shm +0 -0
- package/.lynkr/telemetry.db-wal +0 -0
- package/README.md +195 -321
- package/lynkr-skill.tar.gz +0 -0
- package/package.json +4 -3
- package/src/api/openai-router.js +30 -11
- package/src/api/providers-handler.js +171 -3
- package/src/api/router.js +9 -2
- package/src/clients/circuit-breaker.js +10 -247
- package/src/clients/codex-process.js +342 -0
- package/src/clients/codex-utils.js +143 -0
- package/src/clients/databricks.js +210 -63
- package/src/clients/resilience.js +540 -0
- package/src/clients/retry.js +22 -167
- package/src/config/index.js +57 -0
- package/src/context/compression.js +42 -9
- package/src/context/distill.js +492 -0
- package/src/orchestrator/index.js +46 -6
- package/src/routing/complexity-analyzer.js +258 -5
- package/src/routing/index.js +12 -2
- package/src/routing/latency-tracker.js +148 -0
- package/src/routing/model-tiers.js +2 -0
- package/src/routing/quality-scorer.js +113 -0
- package/src/routing/telemetry.js +464 -0
- package/src/server.js +11 -0
- package/src/tools/code-graph.js +538 -0
- package/src/tools/code-mode.js +304 -0
- package/src/tools/lazy-loader.js +11 -0
- package/src/tools/mcp-remote.js +7 -0
- package/src/tools/smart-selection.js +11 -0
- package/src/utils/payload.js +206 -0
- package/src/utils/perf-timer.js +80 -0
package/.lynkr/telemetry.db: Binary file
package/.lynkr/telemetry.db-shm: Binary file
package/.lynkr/telemetry.db-wal: Binary file
package/README.md CHANGED
@@ -1,429 +1,303 @@
-# Lynkr
-
+# Lynkr
+
+### Run Claude Code, Cursor, and Codex on any model. One proxy, every provider.
 
 [](https://www.npmjs.com/package/lynkr)
-[](https://github.com/vishalveerareddy123/Lynkr)
 [](LICENSE)
+[](https://nodejs.org)
+[](https://github.com/vishalveerareddy123/homebrew-lynkr)
 [](https://deepwiki.com/vishalveerareddy123/Lynkr)
-[](https://www.databricks.com/)
-[](https://aws.amazon.com/bedrock/)
-[](https://openai.com/)
-[](https://ollama.ai/)
-[](https://github.com/ggerganov/llama.cpp)
 
-
-
-
-
-
-
-
-
----
+<table>
+<tr>
+<td align="center"><strong>10+</strong><br/>LLM Providers</td>
+<td align="center"><strong>60-80%</strong><br/>Cost Reduction</td>
+<td align="center"><strong>652</strong><br/>Tests Passing</td>
+<td align="center"><strong>0</strong><br/>Code Changes Required</td>
+</tr>
+</table>
 
-
+---
 
-
+## The Problem
 
-
-- 💰 **60-80% Cost Reduction** - Built-in token optimization with smart tool selection, prompt caching, and memory deduplication
-- 🔒 **100% Local/Private** - Run completely offline with Ollama or llama.cpp
-- 🌐 **Remote or Local** - Connect to providers on any IP/hostname (not limited to localhost)
-- 🎯 **Zero Code Changes** - Drop-in replacement for Anthropic's backend
-- 🏢 **Enterprise-Ready** - Circuit breakers, load shedding, Prometheus metrics, health checks
+AI coding tools lock you into one provider. Claude Code requires Anthropic. Codex requires OpenAI. You can't use your company's Databricks endpoint, your local Ollama models, or your AWS Bedrock account — at least, not without Lynkr.
 
-**
--
--
--
--
+**The real costs:**
+- Anthropic API at $15/MTok output adds up fast for daily coding
+- No way to use free local models (Ollama, llama.cpp) with Claude Code
+- Enterprise teams can't route through their own cloud infrastructure
+- Provider outages take your entire workflow down
 
-
+## The Solution
 
-
+Lynkr is a self-hosted proxy that sits between your AI coding tools and any LLM provider. One environment variable change, and your tools work with any model.
 
-
+```
+Claude Code / Cursor / Codex / Cline / Continue / Vercel AI SDK
+                              |
+                            Lynkr
+                              |
+Ollama | Bedrock | Databricks | OpenRouter | Azure | OpenAI | llama.cpp
+```
 
-**Option 1: NPM Package (Recommended)**
 ```bash
-#
-npm install -g pino-pretty
+# That's it. Three lines.
 npm install -g lynkr
-
+export ANTHROPIC_BASE_URL=http://localhost:8081
 lynkr start
 ```
 
-
-```bash
-# Clone repository
-git clone https://github.com/vishalveerareddy123/Lynkr.git
-cd Lynkr
-
-# Install dependencies
-npm install
+---
 
-
-cp .env.example .env
+## Quick Start
 
-
-nano .env
+### Install
 
-
-npm
+```bash
+npm install -g pino-pretty && npm install -g lynkr
 ```
 
-
-- **Node 20-24**: Full support with all features
-- **Node 25+**: Full support (native modules auto-rebuild, babel fallback for code parsing)
+### Pick a Provider
 
-
-
-**Option 3: Docker**
+**Free & Local (Ollama)**
 ```bash
-
+export MODEL_PROVIDER=ollama
+export OLLAMA_MODEL=qwen2.5-coder:latest
+lynkr start
 ```
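The Ollama recipe just added assumes the model is already present locally; pulling it first avoids a failed or very slow first request. This uses the standard Ollama CLI, nothing Lynkr-specific:

```bash
# One-time download of the coding model named in the README (standard Ollama CLI).
ollama pull qwen2.5-coder:latest
# Confirm the model is available before starting Lynkr.
ollama list
```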
 
-
-
-
-
-
-
-
-|----------|------|--------|------|---------|
-| **AWS Bedrock** | Cloud | 100+ (Claude, Titan, Llama, Mistral, etc.) | $$-$$$ | Cloud |
-| **Databricks** | Cloud | Claude Sonnet 4.5, Opus 4.5 | $$$ | Cloud |
-| **OpenRouter** | Cloud | 100+ (GPT, Claude, Llama, Gemini, etc.) | $-$$ | Cloud |
-| **Ollama** | Local | Unlimited (free, offline) | **FREE** | 🔒 100% Local |
-| **llama.cpp** | Local | GGUF models | **FREE** | 🔒 100% Local |
-| **Azure OpenAI** | Cloud | GPT-4o, GPT-5, o1, o3 | $$$ | Cloud |
-| **Azure Anthropic** | Cloud | Claude models | $$$ | Cloud |
-| **OpenAI** | Cloud | GPT-4o, o1, o3 | $$$ | Cloud |
-| **LM Studio** | Local | Local models with GUI | **FREE** | 🔒 100% Local |
-| **MLX OpenAI Server** | Local | Apple Silicon (M1/M2/M3/M4) | **FREE** | 🔒 100% Local |
-
-📖 **[Full Provider Configuration Guide](documentation/providers.md)**
-
----
+**AWS Bedrock (100+ models)**
+```bash
+export MODEL_PROVIDER=bedrock
+export AWS_BEDROCK_API_KEY=your-key
+export AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0
+lynkr start
+```
 
-
+**OpenRouter (cheapest cloud)**
+```bash
+export MODEL_PROVIDER=openrouter
+export OPENROUTER_API_KEY=sk-or-v1-your-key
+lynkr start
+```
 
-
+### Connect Your Tool
 
+**Claude Code**
 ```bash
-# Set Lynkr as backend
 export ANTHROPIC_BASE_URL=http://localhost:8081
 export ANTHROPIC_API_KEY=dummy
-
-# Run Claude Code
 claude "Your prompt here"
 ```
 
-
-
-📖 **[Detailed Claude Code Setup](documentation/claude-code-cli.md)**
-
----
-
-## Cursor Integration
-
-Configure Cursor IDE to use Lynkr:
-
-1. **Open Cursor Settings**
-   - Mac: `Cmd+,` | Windows/Linux: `Ctrl+,`
-   - Navigate to: **Features** → **Models**
-
-2. **Configure OpenAI API Settings**
-   - **API Key**: `sk-lynkr` (any non-empty value)
-   - **Base URL**: `http://localhost:8081/v1`
-   - **Model**: `claude-3.5-sonnet` (or your provider's model)
-
-3. **Test It**
-   - Chat: `Cmd+L` / `Ctrl+L`
-   - Inline edits: `Cmd+K` / `Ctrl+K`
-   - @Codebase search: Requires [embeddings setup](documentation/embeddings.md)
-
-📖 **[Full Cursor Setup Guide](documentation/cursor-integration.md)** | **[Embeddings Configuration](documentation/embeddings.md)**
----
-## Codex CLI Integration
-
-Configure [OpenAI Codex CLI](https://github.com/openai/codex) to use Lynkr as its backend.
-
-### Option 1: Environment Variables (Quick Start)
-
-```bash
-export OPENAI_BASE_URL=http://localhost:8081/v1
-export OPENAI_API_KEY=dummy
-
-codex
-```
-
-### Option 2: Config File (Recommended)
-
-Edit `~/.codex/config.toml`:
-
+**Codex CLI** — edit `~/.codex/config.toml`:
 ```toml
-# Set Lynkr as the default provider
 model_provider = "lynkr"
 model = "gpt-4o"
 
-# Define the Lynkr provider
 [model_providers.lynkr]
 name = "Lynkr Proxy"
 base_url = "http://localhost:8081/v1"
 wire_api = "responses"
-
-# Optional: Trust your project directories
-[projects."/path/to/your/project"]
-trust_level = "trusted"
 ```
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+**Cursor IDE**
+- Settings > Features > Models
+- Base URL: `http://localhost:8081/v1`
+- API Key: `sk-lynkr`
+
+**Vercel AI SDK**
+```ts
+import { generateText } from "ai";
+import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
+
+const lynkr = createOpenAICompatible({
+  baseURL: "http://localhost:8081/v1",
+  name: "lynkr",
+  apiKey: "sk-lynkr",
+});
+
+const { text } = await generateText({
+  model: lynkr.chatModel("auto"),
+  prompt: "Hello!",
+});
 ```
 
-
-
-| Issue | Solution |
-|-------|----------|
-| Same response for all queries | Disable semantic cache: `SEMANTIC_CACHE_ENABLED=false` |
-| Tool calls not executing | Increase threshold: `POLICY_TOOL_LOOP_THRESHOLD=15` |
-| Slow first request | Keep Ollama loaded: `OLLAMA_KEEP_ALIVE=24h` |
-| Connection refused | Ensure Lynkr is running: `npm start` |
-
-> **Note:** Codex uses the OpenAI Responses API format. Lynkr automatically converts this to your configured provider's format.
+> Works with any OpenAI-compatible client: Cline, Continue.dev, ClawdBot, KiloCode, and more.
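Before wiring up a full client, the proxy can be smoke-tested with one raw request. A minimal sketch, assuming Lynkr exposes the standard OpenAI-style `/v1/chat/completions` route behind the `/v1` base URL shown above; the key only needs to be non-empty, as with Cursor:

```bash
# Smoke test against the OpenAI-compatible surface (assumed standard route).
# "auto" mirrors the model name used in the Vercel AI SDK example above.
curl -s http://localhost:8081/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-lynkr" \
  -d '{"model": "auto", "messages": [{"role": "user", "content": "Hello!"}]}'
```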
 
 ---
 
-##
-
-Lynkr supports [ClawdBot](https://github.com/openclaw/openclaw) via its OpenAI-compatible API. ClawdBot users can route requests through Lynkr to access any supported provider.
+## Supported Providers
 
-
-
-
-|
-|
-|
-|
+| Provider | Type | Models | Cost |
+|----------|------|--------|------|
+| **Ollama** | Local | Unlimited (free, offline) | **Free** |
+| **llama.cpp** | Local | Any GGUF model | **Free** |
+| **LM Studio** | Local | Local models with GUI | **Free** |
+| **MLX Server** | Local | Apple Silicon optimized | **Free** |
+| **AWS Bedrock** | Cloud | 100+ (Claude, Llama, Mistral, Titan) | $$ |
+| **OpenRouter** | Cloud | 100+ (GPT, Claude, Llama, Gemini) | $-$$ |
+| **Databricks** | Cloud | Claude Sonnet 4.5, Opus 4.5 | $$$ |
+| **Azure OpenAI** | Cloud | GPT-4o, GPT-5, o1, o3 | $$$ |
+| **Azure Anthropic** | Cloud | Claude models | $$$ |
+| **OpenAI** | Cloud | GPT-4o, o1, o3 | $$$ |
+
+4 local providers for **100% offline, free** usage. 6+ cloud providers for scale.
 
-
-`gpt-5.2`, `gpt-5.1-codex`, `claude-opus-4.5`, `claude-sonnet-4.5`, `claude-haiku-4.5`, `gemini-3-pro`, `gemini-3-flash`, and more.
+---
 
-
+## Why Lynkr Over Alternatives
+
+| Feature | Lynkr | LiteLLM (42K stars) | OpenRouter | PortKey |
+|---------|-------|---------------------|------------|---------|
+| **Setup** | `npm install -g lynkr` | Python + Docker + Postgres | Account signup | Docker + config |
+| **Claude Code support** | Drop-in, native | Requires config | No CLI support | Requires config |
+| **Cursor support** | Drop-in, native | Partial | Via API key | Partial |
+| **Codex CLI support** | Drop-in, native | No | No | No |
+| **Built for coding tools** | Yes (purpose-built) | No (general gateway) | No (general API) | No (general gateway) |
+| **Local models** | Ollama, llama.cpp, LM Studio, MLX | Ollama only | No | No |
+| **Token optimization** | Built-in (60-80% savings) | No | No | Caching only |
+| **Complexity routing** | Auto-routes by task difficulty | Manual | Cost/latency only | Manual |
+| **Memory system** | Titans-inspired long-term memory | No | No | No |
+| **Self-hosted** | Yes (Node.js) | Yes (Python stack) | No (SaaS) | Yes (Docker) |
+| **Offline capable** | Yes | Yes | No | No |
+| **Transaction fees** | None | None (OSS) / Paid enterprise | 5.5% on credits | Free tier / Paid |
+| **Dependencies** | Node.js only | Python, Prisma, PostgreSQL | N/A | Docker, Python |
+| **Format conversion** | Anthropic <-> OpenAI (automatic) | Automatic | N/A | Automatic |
+| **License** | Apache 2.0 | MIT | Proprietary | MIT (gateway) |
+
+**Lynkr's edge:** Purpose-built for AI coding tools. Not a general LLM gateway — a proxy that understands Claude Code, Cursor, and Codex natively, with built-in token optimization, complexity-based routing, and a memory system designed for coding workflows. Installs in one command, runs on Node.js, zero infrastructure required.
 
 ---
 
-##
----
+## Cost Comparison
 
-
+| Scenario | Direct Anthropic | Lynkr + Ollama | Lynkr + OpenRouter | Lynkr + Bedrock |
+|----------|-----------------|----------------|--------------------|-----------------|
+| Daily Claude Code usage | ~$10-30/day | **$0 (free)** | ~$2-8/day | ~$5-15/day |
+| Token optimization savings | — | — | 60-80% further | 60-80% further |
+| Monthly (heavy use) | $300-900 | **$0** | $60-240 | $150-450 |
 
-
-- 📦 **[Installation Guide](documentation/installation.md)** - Detailed installation for all methods
-- ⚙️ **[Provider Configuration](documentation/providers.md)** - Complete setup for all 12+ providers
-- 🎯 **[Quick Start Examples](documentation/installation.md#quick-start-examples)** - Copy-paste configs
-
-### IDE & CLI Integration
-- 🖥️ **[Claude Code CLI Setup](documentation/claude-code-cli.md)** - Connect Claude Code CLI
-- 🤖 **[Codex CLI Setup](documentation/codex-cli.md)** - Configure OpenAI Codex CLI with config.toml
-- 🎨 **[Cursor IDE Setup](documentation/cursor-integration.md)** - Full Cursor integration with troubleshooting
-- 🔍 **[Embeddings Guide](documentation/embeddings.md)** - Enable @Codebase semantic search (4 options: Ollama, llama.cpp, OpenRouter, OpenAI)
-
-### Features & Capabilities
-- ✨ **[Core Features](documentation/features.md)** - Architecture, request flow, format conversion
-- 🧠 **[Memory System](documentation/memory-system.md)** - Titans-inspired long-term memory
-- 🗃️ **[Semantic Cache](#semantic-cache)** - Cache responses for similar prompts
-- 💰 **[Token Optimization](documentation/token-optimization.md)** - 60-80% cost reduction strategies
-- 🔧 **[Tools & Execution](documentation/tools.md)** - Tool calling, execution modes, custom tools
-
-### Deployment & Operations
-- 🐳 **[Docker Deployment](documentation/docker.md)** - docker-compose setup with GPU support
-- 🏭 **[Production Hardening](documentation/production.md)** - Circuit breakers, load shedding, metrics
-- 📊 **[API Reference](documentation/api.md)** - All endpoints and formats
-
-### Support
-- 🔧 **[Troubleshooting](documentation/troubleshooting.md)** - Common issues and solutions
-- ❓ **[FAQ](documentation/faq.md)** - Frequently asked questions
-- 🧪 **[Testing Guide](documentation/testing.md)** - Running tests and validation
+> With token optimization enabled, Lynkr's smart tool selection, prompt caching, and memory deduplication reduce token usage by 60-80% on top of provider savings.
 
 ---
 
-##
+## What's Under the Hood
 
-
-- 💬 **[GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** - Community Q&A
-- 🐛 **[Report Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** - Bug reports and feature requests
-- 📦 **[NPM Package](https://www.npmjs.com/package/lynkr)** - Official npm package
+Lynkr isn't just a passthrough proxy. It's an optimization layer.
 
-
+### Smart Routing
+Routes requests to the right model based on task complexity. Simple questions go to fast/cheap models. Complex architectural tasks go to powerful models. You configure the tiers.
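The file list at the top of this diff ships the machinery for this (`src/routing/complexity-analyzer.js`, `src/routing/model-tiers.js`), but the actual configuration keys are not visible here. Purely as an illustration of what tier configuration could look like, with hypothetical variable names (the real ones live in documentation/routing.md):

```bash
# HYPOTHETICAL variable names -- illustrative only, not documented Lynkr settings.
export ROUTING_TIER_SIMPLE=qwen2.5-coder:latest   # quick Q&A, small edits
export ROUTING_TIER_COMPLEX=claude-opus-4.5       # architecture, big refactors
lynkr start
```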
 
-
-
--
--
--
-- ✅ **OpenAI Compatible** - Works with Cursor IDE, Continue.dev, and any OpenAI-compatible client
-- ✅ **Embeddings Support** - 4 options for @Codebase search: Ollama (local), llama.cpp (local), OpenRouter, OpenAI
-- ✅ **MCP Integration** - Automatic Model Context Protocol server discovery and orchestration
-- ✅ **Enterprise Features** - Circuit breakers, load shedding, Prometheus metrics, K8s health checks
-- ✅ **Streaming Support** - Real-time token streaming for all providers
-- ✅ **Memory System** - Titans-inspired long-term memory with surprise-based filtering
-- ✅ **Tool Calling** - Full tool support with server and passthrough execution modes
-- ✅ **Production Ready** - Battle-tested with 400+ tests, observability, and error resilience
-- ✅ **Node 20-25 Support** - Works with latest Node.js versions including v25
-- ✅ **Semantic Caching** - Cache responses for similar prompts (requires embeddings)
+### Token Optimization
+- **Smart tool selection** — only sends tools relevant to the current task
+- **Prompt compression** — removes redundant context before sending
+- **Memory deduplication** — eliminates repeated information across turns
+- **TOON format** — compact serialization that cuts token count
 
-
+### Enterprise Resilience
+- **Circuit breakers** — automatic failover when a provider goes down
+- **Load shedding** — graceful degradation under high load
+- **Prometheus metrics** — full observability at `/metrics`
+- **Health checks** — K8s-ready endpoints at `/health`
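Since the `/metrics` and `/health` endpoints in the list above sit on the proxy's own port, a running instance can be probed directly. A quick check, assuming the default port 8081 and no auth on either endpoint:

```bash
# Probe a running Lynkr instance (assumes default port 8081).
curl -s http://localhost:8081/health    # K8s-style health status
curl -s http://localhost:8081/metrics   # Prometheus exposition format
```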
 
-
+### Memory System
+Titans-inspired long-term memory with surprise-based filtering. The system remembers important context across sessions and forgets noise — reducing token waste from repeated context.
 
-
+### Semantic Cache
+Cache responses for semantically similar prompts. Hit rate depends on your workflow, but repeat questions (common in coding) get instant responses.
 
-**Enable Semantic Cache:**
 ```bash
-# Requires an embeddings provider (Ollama recommended)
-ollama pull nomic-embed-text
-
-# Add to .env
 SEMANTIC_CACHE_ENABLED=true
 SEMANTIC_CACHE_THRESHOLD=0.95
-OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
-OLLAMA_EMBEDDINGS_ENDPOINT=http://localhost:11434/api/embeddings
-```
-
-| Setting | Default | Description |
-|---------|---------|-------------|
-| `SEMANTIC_CACHE_ENABLED` | `false` | Enable/disable semantic caching |
-| `SEMANTIC_CACHE_THRESHOLD` | `0.95` | Similarity threshold (0.0-1.0) |
-
-> **Note:** Without a proper embeddings provider, the cache uses hash-based fallback which may cause false matches. Use Ollama with `nomic-embed-text` for best results.
-
----
-
-## Architecture
-
-```
-┌─────────────────┐
-│    AI Tools     │
-└────────┬────────┘
-         │ Anthropic/OpenAI Format
-         ↓
-┌─────────────────┐
-│   Lynkr Proxy   │
-│   Port: 8081    │
-│                 │
-│ • Format Conv.  │
-│ • Token Optim.  │
-│ • Provider Route│
-│ • Tool Calling  │
-│ • Caching       │
-└────────┬────────┘
-         │
-         ├──→ Databricks (Claude 4.5)
-         ├──→ AWS Bedrock (100+ models)
-         ├──→ OpenRouter (100+ models)
-         ├──→ Ollama (local, free)
-         ├──→ llama.cpp (local, free)
-         ├──→ Azure OpenAI (GPT-4o, o1)
-         ├──→ OpenAI (GPT-4o, o3)
-         └──→ Azure Anthropic (Claude)
 ```
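The removed 8.0.1 lines above paired this cache with a local embeddings model and warned that, without one, the cache falls back to hash matching and can produce false matches. Assuming that pairing still holds in 9.0.1, the dropped setup would be re-created like this:

```bash
# Carried over from the removed 8.0.1 instructions -- assumed still valid.
ollama pull nomic-embed-text
# Then add to .env alongside SEMANTIC_CACHE_ENABLED / SEMANTIC_CACHE_THRESHOLD:
OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
OLLAMA_EMBEDDINGS_ENDPOINT=http://localhost:11434/api/embeddings
```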
 
-
+### MCP Integration
+Automatic Model Context Protocol server discovery and orchestration. Your MCP tools work through Lynkr without configuration.
 
 ---
 
-##
+## Deployment Options
 
-**
+**NPM (recommended)**
 ```bash
-
-export OLLAMA_MODEL=qwen2.5-coder:latest
-export OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
-npm start
+npm install -g lynkr && lynkr start
 ```
-> 💡 **Tip:** Prevent slow cold starts by keeping Ollama models loaded: `launchctl setenv OLLAMA_KEEP_ALIVE "24h"` (macOS) or set `OLLAMA_KEEP_ALIVE=24h` env var. See [troubleshooting](documentation/troubleshooting.md#slow-first-request--cold-start-warning).
 
-**
+**Docker**
 ```bash
-
-export OLLAMA_ENDPOINT=http://192.168.1.100:11434  # Any IP or hostname
-export OLLAMA_MODEL=llama3.1:70b
-npm start
+docker-compose up -d
 ```
-> 🌐 **Note:** All provider endpoints support remote addresses - not limited to localhost. Use any IP, hostname, or domain.
 
-**
+**Git Clone**
 ```bash
-
-
-
-# Terminal 2: Start Lynkr
-export MODEL_PROVIDER=openai
-export OPENAI_ENDPOINT=http://localhost:8000/v1/chat/completions
-export OPENAI_API_KEY=not-needed
+git clone https://github.com/vishalveerareddy123/Lynkr.git
+cd Lynkr && npm install && cp .env.example .env
 npm start
 ```
-> 🍎 **Apple Silicon optimized** - Native MLX performance on M1/M2/M3/M4 Macs. See [MLX setup guide](documentation/providers.md#10-mlx-openai-server-apple-silicon).
 
-**
+**Homebrew**
 ```bash
-
-
-export AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0
-npm start
+brew tap vishalveerareddy123/lynkr
+brew install lynkr
 ```
 
-
-
-
-
-
-
-
-
+---
+
+## Documentation
+
+| Guide | Description |
+|-------|-------------|
+| [Installation](documentation/installation.md) | All installation methods |
+| [Provider Config](documentation/providers.md) | Setup for all 10+ providers |
+| [Claude Code CLI](documentation/claude-code-cli.md) | Detailed Claude Code integration |
+| [Codex CLI](documentation/codex-cli.md) | Codex config.toml setup |
+| [Cursor IDE](documentation/cursor-integration.md) | Cursor integration + troubleshooting |
+| [Embeddings](documentation/embeddings.md) | @Codebase semantic search (4 options) |
+| [Token Optimization](documentation/token-optimization.md) | 60-80% cost reduction strategies |
+| [Memory System](documentation/memory-system.md) | Titans-inspired long-term memory |
+| [Tools & Execution](documentation/tools.md) | Tool calling and execution modes |
+| [Smart Routing](documentation/routing.md) | Complexity-based model routing |
+| [Docker Deployment](documentation/docker.md) | docker-compose with GPU support |
+| [Production Hardening](documentation/production.md) | Circuit breakers, metrics, load shedding |
+| [API Reference](documentation/api.md) | All endpoints and formats |
+| [Troubleshooting](documentation/troubleshooting.md) | Common issues and solutions |
+| [FAQ](documentation/faq.md) | Frequently asked questions |
+
+---
+
+## Troubleshooting
+
+| Issue | Solution |
+|-------|----------|
+| Same response for all queries | Disable semantic cache: `SEMANTIC_CACHE_ENABLED=false` |
+| Tool calls not executing | Increase threshold: `POLICY_TOOL_LOOP_THRESHOLD=15` |
+| Slow first request | Keep Ollama loaded: `OLLAMA_KEEP_ALIVE=24h` |
+| Connection refused | Ensure Lynkr is running: `lynkr start` |
 
 ---
 
 ## Contributing
 
-We welcome contributions
-- **[Contributing Guide](documentation/contributing.md)** - How to contribute
-- **[Testing Guide](documentation/testing.md)** - Running tests
+We welcome contributions. See the [Contributing Guide](documentation/contributing.md) and [Testing Guide](documentation/testing.md).
 
 ---
 
 ## License
 
-Apache 2.0
+Apache 2.0 — See [LICENSE](LICENSE).
 
 ---
 
-## Community
+## Community
 
--
--
--
--
+- [GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions) — Questions and tips
+- [Report Issues](https://github.com/vishalveerareddy123/Lynkr/issues) — Bug reports and feature requests
+- [NPM Package](https://www.npmjs.com/package/lynkr) — Official package
+- [DeepWiki](https://deepwiki.com/vishalveerareddy123/Lynkr) — AI-powered docs search
 
 ---
 
-**
+**Built by [Vishal Veera Reddy](https://github.com/vishalveerareddy123) — for developers who want control over their AI tools.**
package/lynkr-skill.tar.gz: Binary file