lynkr 7.2.2 → 7.2.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +66 -21
- package/documentation/README.md +4 -3
- package/documentation/codex-cli.md +397 -0
- package/package.json +1 -1
package/README.md (CHANGED)

@@ -145,26 +145,70 @@ Configure Cursor IDE to use Lynkr:
 
 📖 **[Full Cursor Setup Guide](documentation/cursor-integration.md)** | **[Embeddings Configuration](documentation/embeddings.md)**
 
 ---
-## Codex CLI
-[previous Codex CLI section (old lines 149-167), not rendered in this diff view]
+## Codex CLI Integration
+
+Configure [OpenAI Codex CLI](https://github.com/openai/codex) to use Lynkr as its backend.
+
+### Option 1: Environment Variables (Quick Start)
+
+```bash
+export OPENAI_BASE_URL=http://localhost:8081/v1
+export OPENAI_API_KEY=dummy
+
+codex
+```
+
+### Option 2: Config File (Recommended)
+
+Edit `~/.codex/config.toml`:
+
+```toml
+# Set Lynkr as the default provider
+model_provider = "lynkr"
+model = "gpt-4o"
+
+# Define the Lynkr provider
+[model_providers.lynkr]
+name = "Lynkr Proxy"
+base_url = "http://localhost:8081/v1"
+wire_api = "responses"
+
+# Optional: Trust your project directories
+[projects."/path/to/your/project"]
+trust_level = "trusted"
+```
+
+### Configuration Options
+
+| Option | Description | Example |
+|--------|-------------|---------|
+| `model_provider` | Active provider name | `"lynkr"` |
+| `model` | Model to request (mapped by Lynkr) | `"gpt-4o"`, `"claude-sonnet-4-5"` |
+| `base_url` | Lynkr endpoint | `"http://localhost:8081/v1"` |
+| `wire_api` | API format (`responses` or `chat`) | `"responses"` |
+| `trust_level` | Project trust (`trusted`, `sandboxed`) | `"trusted"` |
+
+### Remote Lynkr Server
+
+To connect Codex to a remote Lynkr instance:
+
+```toml
+[model_providers.lynkr-remote]
+name = "Remote Lynkr"
+base_url = "http://192.168.1.100:8081/v1"
+wire_api = "responses"
+```
+
+### Troubleshooting
+
+| Issue | Solution |
+|-------|----------|
+| Same response for all queries | Disable semantic cache: `SEMANTIC_CACHE_ENABLED=false` |
+| Tool calls not executing | Increase threshold: `POLICY_TOOL_LOOP_THRESHOLD=15` |
+| Slow first request | Keep Ollama loaded: `OLLAMA_KEEP_ALIVE=24h` |
+| Connection refused | Ensure Lynkr is running: `npm start` |
+
+> **Note:** Codex uses the OpenAI Responses API format. Lynkr automatically converts this to your configured provider's format.
 
 ---
 
@@ -197,8 +241,9 @@ Lynkr supports [ClawdBot](https://github.com/openclaw/openclaw) via its OpenAI-c
 - ⚙️ **[Provider Configuration](documentation/providers.md)** - Complete setup for all 9+ providers
 - 🎯 **[Quick Start Examples](documentation/installation.md#quick-start-examples)** - Copy-paste configs
 
-### IDE Integration
+### IDE & CLI Integration
 - 🖥️ **[Claude Code CLI Setup](documentation/claude-code-cli.md)** - Connect Claude Code CLI
+- 🤖 **[Codex CLI Setup](documentation/codex-cli.md)** - Configure OpenAI Codex CLI with config.toml
 - 🎨 **[Cursor IDE Setup](documentation/cursor-integration.md)** - Full Cursor integration with troubleshooting
 - 🔍 **[Embeddings Guide](documentation/embeddings.md)** - Enable @Codebase semantic search (4 options: Ollama, llama.cpp, OpenRouter, OpenAI)
 
package/documentation/README.md (CHANGED)

@@ -14,11 +14,12 @@ New to Lynkr? Start here:
 
 ---
 
-## IDE Integration
+## IDE & CLI Integration
 
 Connect Lynkr to your development tools:
 
 - **[Claude Code CLI Setup](claude-code-cli.md)** - Configure Claude Code CLI to use Lynkr
+- **[Codex CLI Setup](codex-cli.md)** - Configure OpenAI Codex CLI with Lynkr (config.toml, wire_api, troubleshooting)
 - **[Cursor IDE Integration](cursor-integration.md)** - Full Cursor IDE setup with troubleshooting
 - **[Embeddings Configuration](embeddings.md)** - Enable @Codebase semantic search with 4 provider options (Ollama, llama.cpp, OpenRouter, OpenAI)
 
@@ -69,7 +70,7 @@ Get help and contribute:
 ## Quick Navigation by Topic
 
 ### Setup & Configuration
-- [Installation](installation.md) | [Providers](providers.md) | [Claude Code](claude-code-cli.md) | [Cursor](cursor-integration.md) | [Embeddings](embeddings.md)
+- [Installation](installation.md) | [Providers](providers.md) | [Claude Code](claude-code-cli.md) | [Codex CLI](codex-cli.md) | [Cursor](cursor-integration.md) | [Embeddings](embeddings.md)
 
 ### Features & Optimization
 - [Features](features.md) | [Memory System](memory-system.md) | [Token Optimization](token-optimization.md) | [Headroom](headroom.md) | [Tools](tools.md)
@@ -87,7 +88,7 @@ Get help and contribute:
 This documentation is organized into focused guides:
 
 1. **Getting Started** - Installation and basic configuration
-2. **IDE Integration** - Connect to Claude Code and Cursor
+2. **IDE & CLI Integration** - Connect to Claude Code, Codex CLI, and Cursor
 3. **Core Features** - Deep dives into capabilities
 4. **Deployment** - Production setup and operations
 5. **Support** - Troubleshooting and community resources
package/documentation/codex-cli.md (ADDED)

@@ -0,0 +1,397 @@
# Codex CLI Integration

This guide explains how to configure [OpenAI Codex CLI](https://github.com/openai/codex) to use Lynkr as its backend, enabling you to use any LLM provider (Ollama, Azure OpenAI, Bedrock, Databricks, etc.) with Codex.

---

## Overview

Codex CLI is OpenAI's terminal-based AI coding assistant. By routing it through Lynkr, you can:

- Use **local models** (Ollama, llama.cpp, LM Studio) for free, private coding assistance
- Access **enterprise providers** (Azure OpenAI, Databricks, AWS Bedrock)
- Benefit from Lynkr's **token optimization** and **caching** features
- Switch between providers without changing Codex configuration

---

## Quick Start

### Option 1: Environment Variables

The fastest way to get started:

```bash
# Set Lynkr as the OpenAI endpoint
export OPENAI_BASE_URL=http://localhost:8081/v1
export OPENAI_API_KEY=dummy

# Start Lynkr (in another terminal)
cd /path/to/lynkr && npm start

# Run Codex
codex
```

### Option 2: Config File (Recommended)

For persistent configuration, edit `~/.codex/config.toml`:

```toml
# Set Lynkr as the default provider
model_provider = "lynkr"
model = "gpt-4o"

# Define the Lynkr provider
[model_providers.lynkr]
name = "Lynkr Proxy"
base_url = "http://localhost:8081/v1"
wire_api = "responses"

# Optional: Trust your project directories for tool execution
[projects."/path/to/your/project"]
trust_level = "trusted"
```

---

## Complete Configuration Reference

### Full config.toml Example

```toml
# =============================================================================
# Codex CLI Configuration for Lynkr
# Location: ~/.codex/config.toml
# =============================================================================

# Active provider (must match a key in [model_providers])
model_provider = "lynkr"

# Model to request (Lynkr maps this to your configured provider)
model = "gpt-4o"

# Personality affects response style: default, pragmatic, concise, educational
personality = "pragmatic"

# =============================================================================
# Lynkr Provider Definition
# =============================================================================

[model_providers.lynkr]
name = "Lynkr Proxy"
base_url = "http://localhost:8081/v1"
wire_api = "responses"

# Alternative: Use chat completions API instead of responses API
# wire_api = "chat"

# =============================================================================
# Remote Lynkr Server (Optional)
# =============================================================================

[model_providers.lynkr-remote]
name = "Remote Lynkr (GPU Server)"
base_url = "http://192.168.1.100:8081/v1"
wire_api = "responses"

# =============================================================================
# Project Trust Levels
# =============================================================================
# trusted   - Full tool execution allowed
# sandboxed - Restricted tool execution
# untrusted - No tool execution (default for new projects)

[projects."/Users/yourname/work"]
trust_level = "trusted"

[projects."/Users/yourname/personal"]
trust_level = "trusted"

# =============================================================================
# Agent Configuration (Optional)
# =============================================================================

[agent]
enabled = true

# =============================================================================
# Skills Configuration (Optional)
# =============================================================================

[skills]
enabled = true
```

---

## Configuration Options

### Provider Options

| Option | Description | Values |
|--------|-------------|--------|
| `model_provider` | Active provider name | `"lynkr"`, `"openai"`, etc. |
| `model` | Model to request | `"gpt-4o"`, `"claude-sonnet-4-5"`, etc. |
| `personality` | Response style | `"default"`, `"pragmatic"`, `"concise"`, `"educational"` |

### Model Provider Options

| Option | Description | Example |
|--------|-------------|---------|
| `name` | Display name | `"Lynkr Proxy"` |
| `base_url` | API endpoint URL | `"http://localhost:8081/v1"` |
| `wire_api` | API format | `"responses"` (recommended) or `"chat"` |
| `env_key` | Environment variable for API key | `"OPENAI_API_KEY"` |
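
For example, to have Codex read the (dummy) key from a variable of your choosing, add `env_key` to the provider block. A minimal sketch based on the table above; the variable name `LYNKR_API_KEY` is purely illustrative, and Lynkr does not validate its value:

```toml
[model_providers.lynkr]
name = "Lynkr Proxy"
base_url = "http://localhost:8081/v1"
wire_api = "responses"
env_key = "LYNKR_API_KEY"  # Codex reads the API key from $LYNKR_API_KEY
```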

### Project Options

| Option | Description | Values |
|--------|-------------|--------|
| `trust_level` | Tool execution permissions | `"trusted"`, `"sandboxed"`, `"untrusted"` |

---

## Wire API Formats

Codex supports two API formats:

### Responses API (Recommended)

```toml
wire_api = "responses"
```

- Uses OpenAI's newer Responses API format
- Better support for multi-turn conversations
- Recommended for Lynkr integration

### Chat Completions API

```toml
wire_api = "chat"
```

- Uses standard OpenAI Chat Completions format
- Broader compatibility with proxies
- Use if you encounter issues with `responses`
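
On the wire, the difference is the request body Codex sends to Lynkr. A rough sketch with `curl`, assuming Lynkr exposes the standard `/v1/responses` and `/v1/chat/completions` paths under the configured `base_url`:

```bash
# wire_api = "responses": the prompt travels in a single "input" field
curl http://localhost:8081/v1/responses \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "input": "Say hello"}'

# wire_api = "chat": the prompt travels in a "messages" array
curl http://localhost:8081/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Say hello"}]}'
```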

---

## Remote Server Configuration

Connect Codex to a Lynkr instance running on another machine:

### On the Remote Server

```bash
# Edit .env to allow remote connections
PORT=8081

# Start Lynkr
npm start
```

### On Your Local Machine

```toml
# ~/.codex/config.toml
model_provider = "lynkr-remote"

[model_providers.lynkr-remote]
name = "Remote Lynkr"
base_url = "http://192.168.1.100:8081/v1"
wire_api = "responses"
```
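
Before switching Codex over, it is worth confirming the remote instance is reachable from your local machine, using the same health endpoint shown under Troubleshooting below (substitute your server's address):

```bash
curl http://192.168.1.100:8081/health
# Expected: {"status":"ok",...}
```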

---

## Lynkr Configuration for Codex

Optimize Lynkr for Codex usage by configuring these `.env` settings:

### Recommended Settings

```bash
# =============================================================================
# Lynkr .env Configuration for Codex
# =============================================================================

# Your LLM provider (Codex works with all Lynkr providers)
MODEL_PROVIDER=azure-openai
# MODEL_PROVIDER=ollama
# MODEL_PROVIDER=bedrock

# Tool execution mode - let Codex handle tools locally
TOOL_EXECUTION_MODE=client

# Increase tool loop threshold for complex multi-step tasks
POLICY_TOOL_LOOP_THRESHOLD=15

# Semantic cache (disable if getting repeated responses)
SEMANTIC_CACHE_ENABLED=false

# Or keep enabled with proper embeddings for faster responses
# SEMANTIC_CACHE_ENABLED=true
# OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
```

### Provider-Specific Examples

**Azure OpenAI:**
```bash
MODEL_PROVIDER=azure-openai
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/openai/responses?api-version=2025-04-01-preview
AZURE_OPENAI_API_KEY=your-key
AZURE_OPENAI_DEPLOYMENT=gpt-4o
```

**Ollama (Local, Free):**
```bash
MODEL_PROVIDER=ollama
OLLAMA_MODEL=qwen2.5-coder:latest
OLLAMA_ENDPOINT=http://localhost:11434
```

**AWS Bedrock:**
```bash
MODEL_PROVIDER=bedrock
AWS_BEDROCK_API_KEY=your-key
AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0
```

---

## Troubleshooting

### Common Issues

| Issue | Cause | Solution |
|-------|-------|----------|
| Same response for all queries | Semantic cache matching on system prompt | Set `SEMANTIC_CACHE_ENABLED=false` in Lynkr `.env` |
| Tool calls not executing | Tool loop threshold too low | Set `POLICY_TOOL_LOOP_THRESHOLD=15` |
| Connection refused | Lynkr not running | Run `npm start` in Lynkr directory |
| Slow first request | Cold start / model loading | Set `OLLAMA_KEEP_ALIVE=24h` for Ollama |
| "Invalid API key" errors | API key not set | Set `OPENAI_API_KEY=dummy` (Lynkr doesn't validate) |
| Streaming issues | Wire API mismatch | Try `wire_api = "chat"` instead of `"responses"` |

### Debug Mode

Enable verbose logging to diagnose issues:

```bash
# In Lynkr .env
LOG_LEVEL=debug

# Restart Lynkr and watch logs
npm start
```

### Verify Connection

Test that Codex can reach Lynkr:

```bash
curl http://localhost:8081/health
# Expected: {"status":"ok",...}
```

---

## Model Mapping

When you specify a model in Codex, Lynkr maps it to your configured provider:

| Codex Model | Lynkr Mapping |
|-------------|---------------|
| `gpt-4o` | Uses your `MODEL_PROVIDER` default model |
| `gpt-4o-mini` | Maps to smaller/cheaper model variant |
| `claude-sonnet-4-5` | Routes to Anthropic-compatible provider |
| `claude-opus-4-5` | Routes to most capable model |

The actual model used depends on your Lynkr provider configuration.
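
You can watch the mapping happen without Codex in the loop by sending a request directly to Lynkr with `LOG_LEVEL=debug` enabled (see Debug Mode above). A sketch, assuming the standard `/v1/responses` path; which model actually serves the request depends on your `MODEL_PROVIDER`:

```bash
# Request "gpt-4o"; with MODEL_PROVIDER=ollama this is served by OLLAMA_MODEL
curl http://localhost:8081/v1/responses \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "input": "ping"}'
```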

---

## Architecture

```
┌──────────────────┐
│    Codex CLI     │  Terminal AI coding assistant
│  (Your Machine)  │
└────────┬─────────┘
         │ OpenAI Responses API
         │ http://localhost:8081/v1
         ▼
┌──────────────────┐
│      Lynkr       │  Universal LLM proxy
│    Port 8081     │
│                  │
│ • Format conv.   │  Converts between API formats
│ • Token optim.   │  Reduces costs 60-80%
│ • Caching        │  Semantic + prompt caching
│ • Tool routing   │  Server or client execution
└────────┬─────────┘
         │
    ┌────┴────┬──────────┬──────────┐
    ▼         ▼          ▼          ▼
┌───────┐ ┌───────┐ ┌─────────┐ ┌──────────┐
│Ollama │ │Azure  │ │Bedrock  │ │Databricks│
│(Free) │ │OpenAI │ │(100+)   │ │          │
└───────┘ └───────┘ └─────────┘ └──────────┘
```

---

## Tips & Best Practices

### 1. Use Trusted Projects

For frequently used projects, set the trust level to avoid repeated permission prompts:

```toml
[projects."/Users/yourname/main-project"]
trust_level = "trusted"
```

### 2. Configure Personality

Choose a personality that matches your workflow; a one-line example follows the list:

- `pragmatic` - Direct, solution-focused responses
- `concise` - Minimal explanations, code-focused
- `educational` - Detailed explanations, good for learning
- `default` - Balanced approach
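
The setting lives at the top level of `~/.codex/config.toml`, next to `model`, as in the full example earlier:

```toml
# Terse, code-first answers
personality = "concise"
```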

### 3. Keep Models Loaded

Prevent slow first requests with Ollama:

```bash
# macOS
launchctl setenv OLLAMA_KEEP_ALIVE "24h"

# Linux/Windows
export OLLAMA_KEEP_ALIVE=24h
```

### 4. Monitor Token Usage

Check Lynkr metrics to monitor usage:

```bash
curl http://localhost:8081/metrics/token-usage
```

---

## Related Documentation

- **[Installation Guide](installation.md)** - Install and configure Lynkr
- **[Provider Configuration](providers.md)** - Configure your LLM provider
- **[Token Optimization](token-optimization.md)** - Reduce costs with Lynkr
- **[Troubleshooting](troubleshooting.md)** - Common issues and solutions

---

**Need help?** Visit [GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions) or check the [FAQ](faq.md).
package/package.json (CHANGED)

@@ -1,6 +1,6 @@
 {
   "name": "lynkr",
-  "version": "7.2.2",
+  "version": "7.2.3",
   "description": "Self-hosted Claude Code & Cursor proxy with Databricks,AWS BedRock,Azure adapters, openrouter, Ollama,llamacpp,LM Studio, workspace tooling, and MCP integration.",
   "main": "index.js",
   "bin": {