gitlab-ai-provider 5.0.0
- package/CHANGELOG.md +438 -0
- package/LICENSE +28 -0
- package/README.md +815 -0
- package/dist/index.d.mts +1521 -0
- package/dist/index.d.ts +1658 -0
- package/dist/index.js +4289 -0
- package/dist/index.js.map +1 -0
- package/dist/index.mjs +4220 -0
- package/dist/index.mjs.map +1 -0
- package/package.json +119 -0
package/README.md (ADDED):
# GitLab AI Provider

A comprehensive TypeScript provider for integrating GitLab Duo AI capabilities with the Vercel AI SDK. This package enables seamless access to GitLab's AI-powered features, including chat, agentic workflows, and tool calling, through a unified interface.

## 🌟 Features

- **🤖 Multi-Provider Agentic Chat**: Native tool calling support via GitLab's AI Gateway (Anthropic & OpenAI)
- **🔄 Duo Workflow Service**: Server-side agentic loop with WebSocket streaming and dynamic model discovery
- **🔐 Multiple Authentication Methods**: Support for OAuth, Personal Access Tokens, and OpenCode auth
- **🌐 Self-Hosted Support**: Works with both GitLab.com and self-hosted instances
- **🔧 Tool Support**: Native tool calling via the Vercel AI SDK and MCP tools for workflows
- **🔍 Project Detection**: Automatic GitLab project detection from git remotes
- **💾 Smart Caching**: Project, token, and model discovery caching for optimal performance
- **🎯 Type-Safe**: Complete TypeScript definitions with Zod validation

## 📦 Installation

```bash
npm install gitlab-ai-provider
```

### Peer Dependencies

```bash
npm install @ai-sdk/provider @ai-sdk/provider-utils
```

## 🚀 Quick Start

### Basic Chat

```typescript
import { createGitLab } from 'gitlab-ai-provider';
import { generateText } from 'ai';

const gitlab = createGitLab({
  apiKey: process.env.GITLAB_TOKEN,
  instanceUrl: 'https://gitlab.com', // optional, defaults to gitlab.com
});

// All equivalent ways to create a chat model:
const model = gitlab('duo-chat'); // callable provider
const model2 = gitlab.chat('duo-chat'); // .chat() alias (recommended)
const model3 = gitlab.languageModel('duo-chat'); // explicit method

const { text } = await generateText({
  model: gitlab.chat('duo-chat'),
  prompt: 'Explain how to create a merge request in GitLab',
});

console.log(text);
```

### Agentic Chat with Tool Calling

```typescript
import { createGitLab } from 'gitlab-ai-provider';
import { generateText } from 'ai';

const gitlab = createGitLab({
  apiKey: process.env.GITLAB_TOKEN,
});

// Use an agentic model for native tool calling support
const model = gitlab.agenticChat('duo-chat', {
  anthropicModel: 'claude-sonnet-4-20250514',
  maxTokens: 8192,
});

const { text } = await generateText({
  model,
  prompt: 'List all open merge requests in my project',
  tools: {
    // Your custom tools here
  },
});
```

### Model Variants

The provider automatically maps specific model IDs to their corresponding provider models (Anthropic or OpenAI) and routes requests to the appropriate AI Gateway proxy:

```typescript
import { createGitLab } from 'gitlab-ai-provider';
import { generateText } from 'ai';

const gitlab = createGitLab({
  apiKey: process.env.GITLAB_TOKEN,
});

// Anthropic models (Claude)
const opusModel = gitlab.agenticChat('duo-chat-opus-4-5');
// Automatically uses: claude-opus-4-5-20251101

const sonnetModel = gitlab.agenticChat('duo-chat-sonnet-4-5');
// Automatically uses: claude-sonnet-4-5-20250929

const haikuModel = gitlab.agenticChat('duo-chat-haiku-4-5');
// Automatically uses: claude-haiku-4-5-20251001

// OpenAI models (GPT-5)
const gpt5Model = gitlab.agenticChat('duo-chat-gpt-5-1');
// Automatically uses: gpt-5.1-2025-11-13

const gpt5MiniModel = gitlab.agenticChat('duo-chat-gpt-5-mini');
// Automatically uses: gpt-5-mini-2025-08-07

const codexModel = gitlab.agenticChat('duo-chat-gpt-5-codex');
// Automatically uses: gpt-5-codex

// You can still override the mapping with an explicit providerModel option
const customModel = gitlab.agenticChat('duo-chat-opus-4-5', {
  providerModel: 'claude-sonnet-4-5-20250929', // Override mapping
});
```

**Available Model Mappings:**

| Model ID                 | Provider  | Backend Model                |
| ------------------------ | --------- | ---------------------------- |
| `duo-chat-opus-4-5`      | Anthropic | `claude-opus-4-5-20251101`   |
| `duo-chat-sonnet-4-5`    | Anthropic | `claude-sonnet-4-5-20250929` |
| `duo-chat-haiku-4-5`     | Anthropic | `claude-haiku-4-5-20251001`  |
| `duo-chat-gpt-5-1`       | OpenAI    | `gpt-5.1-2025-11-13`         |
| `duo-chat-gpt-5-mini`    | OpenAI    | `gpt-5-mini-2025-08-07`      |
| `duo-chat-gpt-5-codex`   | OpenAI    | `gpt-5-codex`                |
| `duo-chat-gpt-5-2-codex` | OpenAI    | `gpt-5.2-codex`              |

For unmapped Anthropic model IDs, the provider defaults to `claude-sonnet-4-5-20250929`.

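As a minimal sketch of the mapping behavior described above (the table lookup, the `providerModel` override, and the Sonnet fallback), the resolution logic can be pictured like this. `resolveBackendModel` is an illustrative name, not the package's actual API:

```typescript
// Hypothetical sketch of the model-ID mapping; the real implementation
// lives in model-mappings.ts and may differ.
const MODEL_MAP: Record<string, string> = {
  'duo-chat-opus-4-5': 'claude-opus-4-5-20251101',
  'duo-chat-sonnet-4-5': 'claude-sonnet-4-5-20250929',
  'duo-chat-haiku-4-5': 'claude-haiku-4-5-20251001',
  'duo-chat-gpt-5-1': 'gpt-5.1-2025-11-13',
  'duo-chat-gpt-5-mini': 'gpt-5-mini-2025-08-07',
  'duo-chat-gpt-5-codex': 'gpt-5-codex',
  'duo-chat-gpt-5-2-codex': 'gpt-5.2-codex',
};

function resolveBackendModel(modelId: string, providerModel?: string): string {
  if (providerModel) return providerModel; // explicit override wins
  // Unmapped Anthropic IDs fall back to the default Sonnet model
  return MODEL_MAP[modelId] ?? 'claude-sonnet-4-5-20250929';
}

console.log(resolveBackendModel('duo-chat-opus-4-5'));
// claude-opus-4-5-20251101
```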
### OpenAI Models (GPT-5)

The provider supports OpenAI GPT-5 models through GitLab's AI Gateway proxy. OpenAI models are automatically detected from the model ID and routed to the appropriate proxy endpoint.

```typescript
import { createGitLab } from 'gitlab-ai-provider';
import { generateText } from 'ai';

const gitlab = createGitLab({
  apiKey: process.env.GITLAB_TOKEN,
});

// GPT-5.1 - most capable model
const { text } = await generateText({
  model: gitlab.agenticChat('duo-chat-gpt-5-1'),
  prompt: 'Explain GitLab CI/CD pipelines',
});

// GPT-5 Mini - fast and efficient
const { text: quickResponse } = await generateText({
  model: gitlab.agenticChat('duo-chat-gpt-5-mini'),
  prompt: 'Summarize this code',
});

// GPT-5 Codex - optimized for code
const { text: codeExplanation } = await generateText({
  model: gitlab.agenticChat('duo-chat-gpt-5-codex'),
  prompt: 'Refactor this function for better performance',
});
```

**OpenAI Models with Tool Calling:**

```typescript
import { createGitLab } from 'gitlab-ai-provider';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const gitlab = createGitLab({
  apiKey: process.env.GITLAB_TOKEN,
});

const { text, toolCalls } = await generateText({
  model: gitlab.agenticChat('duo-chat-gpt-5-1', {
    maxTokens: 4096,
  }),
  prompt: 'What is the weather in San Francisco?',
  tools: {
    getWeather: tool({
      description: 'Get the weather for a location',
      parameters: z.object({
        location: z.string().describe('The city name'),
      }),
      execute: async ({ location }) => {
        return { temperature: 72, condition: 'sunny', location };
      },
    }),
  },
});
```

### Agentic Chat with Feature Flags

You can pass feature flags to enable experimental features in GitLab's AI Gateway proxy:

```typescript
import { createGitLab } from 'gitlab-ai-provider';

// Option 1: Set feature flags globally for all agentic chat models
const gitlab = createGitLab({
  apiKey: process.env.GITLAB_TOKEN,
  featureFlags: {
    duo_agent_platform_agentic_chat: true,
    duo_agent_platform: true,
  },
});

const model = gitlab.agenticChat('duo-chat');

// Option 2: Set feature flags per model (overrides global flags)
const modelWithFlags = gitlab.agenticChat('duo-chat', {
  featureFlags: {
    duo_agent_platform_agentic_chat: true,
    duo_agent_platform: true,
    custom_feature_flag: false,
  },
});

// Option 3: Merge both (model-level flags take precedence)
const gitlab2 = createGitLab({
  featureFlags: {
    duo_agent_platform: true, // will be overridden
  },
});

const mergedModel = gitlab2.agenticChat('duo-chat', {
  featureFlags: {
    duo_agent_platform: false, // overrides provider-level
    duo_agent_platform_agentic_chat: true, // adds a new flag
  },
});
```

### Duo Workflow Service (Server-Side Agentic)

The Duo Workflow Service provides a server-side agentic loop in which GitLab drives the LLM and streams tool execution requests to the client over WebSocket. This enables powerful agentic workflows with dynamic model discovery and MCP tool integration.

**Requirements:**

- GitLab Ultimate with the Duo Enterprise add-on
- GitLab 18.4+ (18.5+ for pinned model support)

```typescript
import { createGitLab } from 'gitlab-ai-provider';
import { streamText } from 'ai';

const gitlab = createGitLab({
  apiKey: process.env.GITLAB_TOKEN,
  instanceUrl: 'https://gitlab.com', // or your self-hosted instance
});

// Use the duo-workflow model for server-side agentic workflows
const model = gitlab.workflowChat('duo-workflow', {
  // Optional: Specify the root namespace for model discovery
  rootNamespaceId: 'gid://gitlab/Group/12345',

  // Optional: Provide MCP tools for the workflow
  mcpTools: [
    {
      name: 'searchCode',
      description: 'Search for code in the repository',
      inputSchema: JSON.stringify({
        type: 'object',
        properties: {
          query: { type: 'string' },
        },
        required: ['query'],
      }),
    },
  ],

  // Optional: Pre-approve tools for automatic execution
  preapprovedTools: ['read_file', 'write_file'],
});

// Stream the workflow execution
const result = await streamText({
  model,
  prompt: 'Refactor the authentication module to use JWT tokens',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

**Dynamic Model Discovery:**

The workflow service automatically discovers the models available to your namespace and respects admin-pinned models:

```typescript
const model = gitlab.workflowChat('duo-workflow', {
  rootNamespaceId: 'gid://gitlab/Group/12345',

  // Optional: Callback for interactive model selection
  onSelectModel: async (models) => {
    // Present a model picker to the user
    console.log('Available models:', models);
    // Return the selected model ref, or null for the default
    return models[0].ref;
  },
});
```

**Model Selection Priority:**

1. Admin-pinned model (always used if set)
2. User-selected model (via the `onSelectModel` callback)
3. Namespace default model

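The priority order above can be sketched as a simple resolution function; the `Discovery` shape and function name here are illustrative, not the provider's actual types:

```typescript
// Hypothetical sketch of the model selection priority: pinned model wins,
// then the user's selection, then the namespace default.
interface Discovery {
  pinnedModel?: string;
  userSelectedModel?: string;
  defaultModel: string;
}

function selectModelRef(d: Discovery): string {
  if (d.pinnedModel) return d.pinnedModel; // 1. admin pin always wins
  if (d.userSelectedModel) return d.userSelectedModel; // 2. user choice
  return d.defaultModel; // 3. namespace default
}
```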
**Built-in Tool Support:**

The workflow service automatically maps DWS built-in tools to consumer tool names:

| DWS Tool          | Consumer Tool | Description                   |
| ----------------- | ------------- | ----------------------------- |
| `runReadFile`     | `read`        | Read file contents            |
| `runWriteFile`    | `write`       | Write file contents           |
| `runEditFile`     | `edit`        | Edit file with old/new string |
| `runShellCommand` | `bash`        | Execute shell command         |
| `runCommand`      | `bash`        | Execute structured command    |
| `runGitCommand`   | `bash`        | Execute git command           |
| `listDirectory`   | `read`        | List directory contents       |
| `findFiles`       | `glob`        | Find files by pattern         |
| `grep`            | `grep`        | Search file contents          |
| `mkdir`           | `bash`        | Create directory              |
| `runHTTPRequest`  | `bash`        | Execute HTTP request          |

**Workflow Options:**

```typescript
interface GitLabWorkflowOptions {
  // Root namespace ID for model discovery and token scoping
  rootNamespaceId?: string;

  // GitLab project ID (numeric or path)
  projectId?: string;

  // GitLab namespace ID
  namespaceId?: string;

  // MCP tool definitions to expose to the workflow
  mcpTools?: McpToolDefinition[];

  // Client capabilities to advertise
  clientCapabilities?: string[]; // Default: ['shell_command']

  // Tool names pre-approved for execution without confirmation
  preapprovedTools?: string[];

  // Additional context items for conversation history
  additionalContext?: AdditionalContext[];

  // Feature flags for the workflow
  featureFlags?: Record<string, boolean>;

  // Working directory for project auto-detection
  workingDirectory?: string; // Default: process.cwd()

  // Flow configuration for agent behavior
  flowConfig?: unknown;

  // Flow configuration schema version
  flowConfigSchemaVersion?: string;

  // Callback for interactive model selection
  onSelectModel?: (models: AiModel[]) => Promise<string | null | undefined>;
}
```

**Environment Variables:**

| Variable              | Description                                 | Default              |
| --------------------- | ------------------------------------------- | -------------------- |
| `GITLAB_INSTANCE_URL` | GitLab instance URL                         | `https://gitlab.com` |
| `GITLAB_TOKEN`        | GitLab Personal Access Token or OAuth token | -                    |

**Model Cache:**

The workflow service caches model discovery results and user selections in `~/.cache/opencode/gitlab-workflow-model-cache.json` (or `$XDG_CACHE_HOME/opencode/...`). The cache is keyed by workspace directory and instance URL, with a 10-minute TTL for discovery data.

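As an illustrative sketch of the keying and TTL behavior described above (the names are hypothetical, not the package's API — the real logic lives in `gitlab-model-cache.ts`):

```typescript
// Discovery data is considered fresh for 10 minutes.
const DISCOVERY_TTL_MS = 10 * 60 * 1000;

// Cache entries are keyed by workspace directory plus instance URL,
// so different checkouts and instances never collide.
function cacheKey(workspaceDir: string, instanceUrl: string): string {
  return `${workspaceDir}::${instanceUrl}`;
}

function isDiscoveryFresh(savedAtMs: number, nowMs: number): boolean {
  return nowMs - savedAtMs < DISCOVERY_TTL_MS;
}

console.log(cacheKey('/workspace/app', 'https://gitlab.com'));
// /workspace/app::https://gitlab.com
```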
## 🔑 Authentication

### Personal Access Token

```typescript
const gitlab = createGitLab({
  apiKey: 'glpat-xxxxxxxxxxxxxxxxxxxx',
});
```

### Environment Variable

```bash
export GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx
```

```typescript
const gitlab = createGitLab(); // Automatically uses GITLAB_TOKEN
```

### OAuth (OpenCode Auth)

The provider automatically detects and uses OpenCode authentication if available:

```typescript
const gitlab = createGitLab({
  instanceUrl: 'https://gitlab.com',
  // OAuth tokens are loaded from ~/.opencode/auth.json
});
```

### Custom Headers

```typescript
const gitlab = createGitLab({
  apiKey: 'your-token',
  headers: {
    'X-Custom-Header': 'value',
  },
});
```

### AI Gateway Headers

Custom headers can be sent to GitLab's AI Gateway (the Anthropic/OpenAI proxy) for traffic identification and routing. By default, the provider sends `User-Agent: gitlab-ai-provider/{version}`.

```typescript
// Provider-level headers (apply to all agentic models)
const gitlab = createGitLab({
  apiKey: process.env.GITLAB_TOKEN,
  aiGatewayHeaders: {
    'X-Custom-Routing': 'premium-tier',
  },
});

// Model-level headers (override provider-level)
const model = gitlab.agenticChat('duo-chat-opus-4-5', {
  aiGatewayHeaders: {
    'X-Request-Priority': 'high',
  },
});
```

**Header Precedence (lowest to highest):**

1. Default headers (`User-Agent: gitlab-ai-provider/{version}`)
2. Provider-level `aiGatewayHeaders`
3. Model-level `aiGatewayHeaders`

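The precedence order can be sketched as an object-spread merge in which later spreads win; `mergeGatewayHeaders` is an illustrative helper, not part of the package's API:

```typescript
// Hypothetical sketch: defaults are applied first, then provider-level
// headers, then model-level headers, so model-level values win conflicts.
function mergeGatewayHeaders(
  defaults: Record<string, string>,
  providerLevel: Record<string, string> = {},
  modelLevel: Record<string, string> = {},
): Record<string, string> {
  return { ...defaults, ...providerLevel, ...modelLevel };
}

const merged = mergeGatewayHeaders(
  { 'User-Agent': 'gitlab-ai-provider/5.0.0' },
  { 'X-Custom-Routing': 'premium-tier' },
  { 'X-Custom-Routing': 'batch-tier', 'X-Request-Priority': 'high' },
);
// merged['X-Custom-Routing'] === 'batch-tier' (model-level wins)
```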
## 🏗️ Architecture

### Core Components

#### 1. **GitLabProvider**

The main provider factory that creates language models with different capabilities.

```typescript
interface GitLabProvider {
  (modelId: string): LanguageModelV2;
  languageModel(modelId: string): LanguageModelV2;
  agenticChat(modelId: string, options?: GitLabAgenticOptions): GitLabAgenticLanguageModel;
  workflowChat(modelId: string, options?: GitLabWorkflowOptions): GitLabWorkflowLanguageModel;
}
```

#### 2. **GitLabAnthropicLanguageModel**

Provides native tool calling through GitLab's Anthropic proxy.

- Uses Claude models via `https://cloud.gitlab.com/ai/v1/proxy/anthropic/`
- Automatic token refresh and retry logic
- Direct access token management
- Supports all Anthropic tool calling features

#### 3. **GitLabOpenAILanguageModel**

Provides native tool calling through GitLab's OpenAI proxy.

- Uses GPT-5 models via `https://cloud.gitlab.com/ai/v1/proxy/openai/`
- Automatic token refresh and retry logic
- Direct access token management
- Supports all OpenAI tool calling features, including parallel tool calls

#### 4. **GitLabWorkflowLanguageModel**

Provides server-side agentic execution through the GitLab Duo Workflow Service.

- WebSocket-based bidirectional communication with DWS
- Dynamic model discovery via GraphQL (`aiChatAvailableModels`)
- Automatic model selection (pinned → user-selected → default)
- Built-in tool mapping and MCP tool support
- Per-stream state isolation for concurrent requests
- Dual heartbeat (WebSocket ping + JSON heartbeat)

### Supporting Utilities

#### GitLabProjectDetector

Automatically detects GitLab projects from git remotes.

```typescript
const detector = new GitLabProjectDetector({
  instanceUrl: 'https://gitlab.com',
  getHeaders: () => ({ Authorization: `Bearer ${token}` }),
});

const project = await detector.detectProject(process.cwd());
// Returns: { id: 12345, path: 'group/project', namespaceId: 67890 }
```

#### GitLabProjectCache

Caches project information with a TTL.

```typescript
const cache = new GitLabProjectCache(5 * 60 * 1000); // 5 minutes
cache.set('key', project);
const cached = cache.get('key');
```

#### GitLabOAuthManager

Manages the OAuth token lifecycle.

```typescript
const oauthManager = new GitLabOAuthManager();

// Exchange an authorization code
const tokens = await oauthManager.exchangeAuthorizationCode({
  instanceUrl: 'https://gitlab.com',
  code: 'auth-code',
  codeVerifier: 'verifier',
});

// Refresh tokens
const refreshed = await oauthManager.refreshIfNeeded(tokens);
```

#### GitLabDirectAccessClient

Manages direct access tokens for the Anthropic proxy.

```typescript
const client = new GitLabDirectAccessClient({
  instanceUrl: 'https://gitlab.com',
  getHeaders: () => ({ Authorization: `Bearer ${token}` }),
});

const directToken = await client.getDirectAccessToken();
// Returns: { token: 'xxx', headers: {...}, expiresAt: 123456 }
```

#### GitLabModelDiscovery

Discovers available workflow models via GraphQL.

```typescript
const discovery = new GitLabModelDiscovery({
  instanceUrl: 'https://gitlab.com',
  getHeaders: () => ({ Authorization: `Bearer ${token}` }),
});

const models = await discovery.discover('gid://gitlab/Group/12345');
// Returns: { defaultModel, selectableModels, pinnedModel, modelSwitchingEnabled }

const effectiveRef = await discovery.getEffectiveModelRef(
  'gid://gitlab/Group/12345',
  'claude_sonnet_4_6'
);
```

#### GitLabModelCache

Persists model discovery results and user selections.

```typescript
const cache = new GitLabModelCache('/workspace/path', 'https://gitlab.com');

cache.saveDiscovery(discoveredModels);
cache.saveSelection('claude_sonnet_4_6', 'Claude Sonnet 4.6');

const selectedRef = cache.getSelectedModelRef();
const discovery = cache.getDiscovery();
```

#### GitLabWorkflowClient

Low-level WebSocket client for DWS communication.

```typescript
const client = new GitLabWorkflowClient();

await client.connect(
  {
    instanceUrl: 'https://gitlab.com',
    modelRef: 'claude_sonnet_4_6',
    headers: { Authorization: `Bearer ${token}` },
    projectId: 'my-group/my-project',
  },
  (event) => {
    if (event.type === 'checkpoint') {
      console.log('Checkpoint:', event.data);
    } else if (event.type === 'tool-request') {
      console.log('Tool request:', event.data);
    }
  }
);

client.sendStartRequest({ workflowID: '123', goal: 'Refactor code', ... });
client.sendActionResponse('request-id', 'tool result');
client.stop();
```

#### GitLabWorkflowTokenClient

Manages the DWS token lifecycle and workflow creation.

```typescript
const tokenClient = new GitLabWorkflowTokenClient({
  instanceUrl: 'https://gitlab.com',
  getHeaders: () => ({ Authorization: `Bearer ${token}` }),
});

const token = await tokenClient.getToken('chat', 'gid://gitlab/Group/12345');
const workflowId = await tokenClient.createWorkflow('Refactor authentication', {
  projectId: 'my-group/my-project',
});
```

## 📚 API Reference

### Provider Configuration

```typescript
interface GitLabProviderSettings {
  instanceUrl?: string; // Default: 'https://gitlab.com'
  apiKey?: string; // PAT or OAuth access token
  refreshToken?: string; // OAuth refresh token
  name?: string; // Provider name prefix
  headers?: Record<string, string>; // Custom headers for the GitLab API
  aiGatewayHeaders?: Record<string, string>; // Custom headers for the AI Gateway proxy
  fetch?: typeof fetch; // Custom fetch implementation
  aiGatewayUrl?: string; // AI Gateway URL (default: 'https://cloud.gitlab.com')
}
```

### Environment Variables

| Variable                | Description                                 | Default                    |
| ----------------------- | ------------------------------------------- | -------------------------- |
| `GITLAB_TOKEN`          | GitLab Personal Access Token or OAuth token | -                          |
| `GITLAB_INSTANCE_URL`   | GitLab instance URL                         | `https://gitlab.com`       |
| `GITLAB_AI_GATEWAY_URL` | AI Gateway URL for the Anthropic proxy      | `https://cloud.gitlab.com` |

### Agentic Chat Options

```typescript
interface GitLabAgenticOptions {
  providerModel?: string; // Override the backend model (e.g., 'claude-sonnet-4-5-20250929' or 'gpt-5.1-2025-11-13')
  maxTokens?: number; // Default: 8192
  featureFlags?: Record<string, boolean>; // GitLab feature flags
  aiGatewayHeaders?: Record<string, string>; // Custom headers for the AI Gateway proxy (per-model)
}
```

**Note:** The `providerModel` option lets you override the automatically mapped model. The provider validates that the override is compatible with the model ID's provider (e.g., you cannot use an OpenAI model with a `duo-chat-opus-*` model ID).

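A minimal sketch of what such a compatibility check could look like. The detection heuristic and function names here are hypothetical, not the provider's actual implementation:

```typescript
// Hypothetical sketch: classify both the model ID and the override,
// then reject overrides that cross the Anthropic/OpenAI boundary.
type ModelProvider = 'anthropic' | 'openai';

function providerForModelId(modelId: string): ModelProvider {
  return modelId.includes('gpt') ? 'openai' : 'anthropic';
}

function providerForOverride(providerModel: string): ModelProvider {
  return providerModel.startsWith('gpt-') ? 'openai' : 'anthropic';
}

function validateOverride(modelId: string, providerModel: string): void {
  if (providerForModelId(modelId) !== providerForOverride(providerModel)) {
    throw new Error(
      `providerModel ${providerModel} is not compatible with ${modelId}`,
    );
  }
}

validateOverride('duo-chat-opus-4-5', 'claude-sonnet-4-5-20250929'); // ok
// validateOverride('duo-chat-opus-4-5', 'gpt-5.1-2025-11-13'); // would throw
```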
### Error Handling

```typescript
import { GitLabError } from 'gitlab-ai-provider';

try {
  const result = await generateText({ model, prompt });
} catch (error) {
  if (error instanceof GitLabError) {
    if (error.isAuthError()) {
      console.error('Authentication failed');
    } else if (error.isRateLimitError()) {
      console.error('Rate limit exceeded');
    } else if (error.isServerError()) {
      console.error('Server error:', error.statusCode);
    }
  }
}
```

## 🔧 Development

### Build

```bash
npm run build       # Build once
npm run build:watch # Build in watch mode
```

### Testing

```bash
npm test           # Run all tests
npm run test:watch # Run tests in watch mode
```

### Code Quality

```bash
npm run lint         # Lint code
npm run lint:fix     # Lint and auto-fix
npm run format       # Format code
npm run format:check # Check formatting
npm run type-check   # TypeScript type checking
```

### Project Structure

```
gitlab-ai-provider/
├── src/
│   ├── index.ts                           # Main exports
│   ├── gitlab-provider.ts                 # Provider factory
│   ├── gitlab-anthropic-language-model.ts # Anthropic/Claude model
│   ├── gitlab-openai-language-model.ts    # OpenAI/GPT model
│   ├── gitlab-workflow-language-model.ts  # Workflow/DWS model
│   ├── gitlab-workflow-client.ts          # WebSocket client for DWS
│   ├── gitlab-workflow-token-client.ts    # DWS token management
│   ├── gitlab-workflow-builtins.ts        # Built-in tool mapping
│   ├── gitlab-workflow-types.ts           # DWS protocol types
│   ├── gitlab-model-discovery.ts          # GraphQL model discovery
│   ├── gitlab-model-cache.ts              # Model selection cache
│   ├── model-mappings.ts                  # Model ID mappings
│   ├── gitlab-direct-access.ts            # Direct access tokens
│   ├── gitlab-oauth-manager.ts            # OAuth management
│   ├── gitlab-oauth-types.ts              # OAuth types
│   ├── gitlab-project-detector.ts         # Project detection
│   ├── gitlab-project-cache.ts            # Project caching
│   ├── gitlab-api-types.ts                # API types
│   └── gitlab-error.ts                    # Error handling
├── tests/                                 # Test files (300 tests)
├── dist/                                  # Build output
├── package.json
├── tsconfig.json
├── tsup.config.ts
└── vitest.config.ts
```

## 📝 Code Style

- **Imports**: Named imports, organized by external → internal → types
- **Formatting**: Single quotes, semicolons, 100-character line width, 2-space indent
- **Types**: Interfaces for public APIs, Zod schemas for runtime validation
- **Naming**: camelCase (variables/functions), PascalCase (classes/types), kebab-case (files)
- **Exports**: Named exports only (no default exports)
- **Comments**: JSDoc for public APIs with @param/@returns

---

## 🤝 Contributing

Contributions are welcome! Please see our [Contributing Guide](https://gitlab.com/vglafirov/gitlab-ai-provider/-/blob/main/CONTRIBUTING.md) for detailed guidelines on:

- Code style and conventions
- Development workflow
- Testing requirements
- Submitting merge requests
- Developer Certificate of Origin and License

**Quick Start for Contributors**:

1. **Commit Messages**: Use the conventional commits format

   ```
   feat(scope): add new feature
   fix(scope): fix bug
   docs(scope): update documentation
   ```

2. **Code Quality**: Ensure all checks pass

   ```bash
   npm run lint
   npm run type-check
   npm test
   ```

3. **Testing**: Add tests for new features

## 🔗 Links

- [GitLab Repository](https://gitlab.com/vglafirov/gitlab-ai-provider)
- [npm Package](https://www.npmjs.com/package/gitlab-ai-provider)
- [Issue Tracker](https://gitlab.com/vglafirov/gitlab-ai-provider/-/issues)
- [Contributing Guide](https://gitlab.com/vglafirov/gitlab-ai-provider/-/blob/main/CONTRIBUTING.md)
- [Changelog](https://gitlab.com/vglafirov/gitlab-ai-provider/-/blob/main/CHANGELOG.md)
- [Agent Guidelines](https://gitlab.com/vglafirov/gitlab-ai-provider/-/blob/main/AGENTS.md)

## 🙏 Acknowledgments

This project is built on top of:

- [Vercel AI SDK](https://sdk.vercel.ai/)
- [Anthropic SDK](https://github.com/anthropics/anthropic-sdk-typescript)
- [OpenAI SDK](https://github.com/openai/openai-node)
- [GitLab Duo](https://about.gitlab.com/gitlab-duo/)

---

**Made with ❤️ for the OpenCode community**