@neocode-ai/web 1.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (86)
  1. package/README.md +54 -0
  2. package/astro.config.mjs +145 -0
  3. package/config.mjs +14 -0
  4. package/package.json +41 -0
  5. package/public/robots.txt +6 -0
  6. package/public/theme.json +183 -0
  7. package/src/assets/lander/check.svg +2 -0
  8. package/src/assets/lander/copy.svg +2 -0
  9. package/src/assets/lander/screenshot-github.png +0 -0
  10. package/src/assets/lander/screenshot-splash.png +0 -0
  11. package/src/assets/lander/screenshot-vscode.png +0 -0
  12. package/src/assets/lander/screenshot.png +0 -0
  13. package/src/assets/logo-dark.svg +20 -0
  14. package/src/assets/logo-light.svg +20 -0
  15. package/src/assets/logo-ornate-dark.svg +18 -0
  16. package/src/assets/logo-ornate-light.svg +18 -0
  17. package/src/assets/web/web-homepage-active-session.png +0 -0
  18. package/src/assets/web/web-homepage-new-session.png +0 -0
  19. package/src/assets/web/web-homepage-see-servers.png +0 -0
  20. package/src/components/Head.astro +50 -0
  21. package/src/components/Header.astro +128 -0
  22. package/src/components/Hero.astro +11 -0
  23. package/src/components/Lander.astro +713 -0
  24. package/src/components/Share.tsx +634 -0
  25. package/src/components/SiteTitle.astro +59 -0
  26. package/src/components/icons/custom.tsx +87 -0
  27. package/src/components/icons/index.tsx +4454 -0
  28. package/src/components/share/common.tsx +77 -0
  29. package/src/components/share/content-bash.module.css +85 -0
  30. package/src/components/share/content-bash.tsx +67 -0
  31. package/src/components/share/content-code.module.css +26 -0
  32. package/src/components/share/content-code.tsx +32 -0
  33. package/src/components/share/content-diff.module.css +153 -0
  34. package/src/components/share/content-diff.tsx +231 -0
  35. package/src/components/share/content-error.module.css +64 -0
  36. package/src/components/share/content-error.tsx +24 -0
  37. package/src/components/share/content-markdown.module.css +154 -0
  38. package/src/components/share/content-markdown.tsx +75 -0
  39. package/src/components/share/content-text.module.css +63 -0
  40. package/src/components/share/content-text.tsx +37 -0
  41. package/src/components/share/copy-button.module.css +30 -0
  42. package/src/components/share/copy-button.tsx +28 -0
  43. package/src/components/share/part.module.css +428 -0
  44. package/src/components/share/part.tsx +780 -0
  45. package/src/components/share.module.css +832 -0
  46. package/src/content/docs/1-0.mdx +67 -0
  47. package/src/content/docs/acp.mdx +156 -0
  48. package/src/content/docs/agents.mdx +720 -0
  49. package/src/content/docs/cli.mdx +597 -0
  50. package/src/content/docs/commands.mdx +323 -0
  51. package/src/content/docs/config.mdx +683 -0
  52. package/src/content/docs/custom-tools.mdx +170 -0
  53. package/src/content/docs/ecosystem.mdx +76 -0
  54. package/src/content/docs/enterprise.mdx +170 -0
  55. package/src/content/docs/formatters.mdx +130 -0
  56. package/src/content/docs/github.mdx +321 -0
  57. package/src/content/docs/gitlab.mdx +195 -0
  58. package/src/content/docs/ide.mdx +48 -0
  59. package/src/content/docs/index.mdx +359 -0
  60. package/src/content/docs/keybinds.mdx +191 -0
  61. package/src/content/docs/lsp.mdx +188 -0
  62. package/src/content/docs/mcp-servers.mdx +511 -0
  63. package/src/content/docs/models.mdx +223 -0
  64. package/src/content/docs/modes.mdx +331 -0
  65. package/src/content/docs/network.mdx +57 -0
  66. package/src/content/docs/permissions.mdx +237 -0
  67. package/src/content/docs/plugins.mdx +362 -0
  68. package/src/content/docs/providers.mdx +1889 -0
  69. package/src/content/docs/rules.mdx +180 -0
  70. package/src/content/docs/sdk.mdx +391 -0
  71. package/src/content/docs/server.mdx +286 -0
  72. package/src/content/docs/share.mdx +128 -0
  73. package/src/content/docs/skills.mdx +220 -0
  74. package/src/content/docs/themes.mdx +369 -0
  75. package/src/content/docs/tools.mdx +345 -0
  76. package/src/content/docs/troubleshooting.mdx +300 -0
  77. package/src/content/docs/tui.mdx +390 -0
  78. package/src/content/docs/web.mdx +136 -0
  79. package/src/content/docs/windows-wsl.mdx +113 -0
  80. package/src/content/docs/zen.mdx +251 -0
  81. package/src/content.config.ts +7 -0
  82. package/src/pages/[...slug].md.ts +18 -0
  83. package/src/pages/s/[id].astro +113 -0
  84. package/src/styles/custom.css +405 -0
  85. package/src/types/lang-map.d.ts +27 -0
  86. package/tsconfig.json +9 -0
@@ -0,0 +1,1889 @@
1
+ ---
2
+ title: Providers
3
+ description: Using any LLM provider in NeoCode.
4
+ ---
5
+
6
+ import config from "../../../config.mjs"
7
+ export const console = config.console
8
+
9
+ NeoCode uses the [AI SDK](https://ai-sdk.dev/) and [Models.dev](https://models.dev) to support **75+ LLM providers**, as well as locally run models.
10
+
11
+ To add a provider you need to:
12
+
13
+ 1. Add the API keys for the provider using the `/connect` command.
14
+ 2. Configure the provider in your NeoCode config.
15
+
16
+ ---
17
+
18
+ ### Credentials
19
+
20
+ When you add a provider's API keys with the `/connect` command, they are stored
21
+ in `~/.local/share/neocode/auth.json`.
22
+
23
+ ---
24
+
25
+ ### Config
26
+
27
+ You can customize the providers through the `provider` section in your NeoCode
28
+ config.
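+
+ Each entry under `provider` can set a display name, `options` that get passed to the underlying AI SDK package, and a `models` map. A minimal sketch, using OpenRouter with a model ID that's only illustrative:
+
+ ```json title="neocode.json"
+ {
+   "$schema": "https://neo.khulnasoft.com/config.json",
+   "provider": {
+     "openrouter": {
+       "models": {
+         "moonshotai/kimi-k2": {}
+       }
+     }
+   }
+ }
+ ```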
29
+
30
+ ---
31
+
32
+ #### Base URL
33
+
34
+ You can customize the base URL for any provider by setting the `baseURL` option. This is useful when using proxy services or custom endpoints.
35
+
36
+ ```json title="neocode.json" {6}
37
+ {
38
+ "$schema": "https://neo.khulnasoft.com/config.json",
39
+ "provider": {
40
+ "anthropic": {
41
+ "options": {
42
+ "baseURL": "https://api.anthropic.com/v1"
43
+ }
44
+ }
45
+ }
46
+ }
47
+ ```
48
+
49
+ ---
50
+
51
+ ## NeoCode Zen
52
+
53
+ NeoCode Zen is a list of models provided by the NeoCode team that have been
54
+ tested and verified to work well with NeoCode. [Learn more](/docs/zen).
55
+
56
+ :::tip
57
+ If you are new, we recommend starting with NeoCode Zen.
58
+ :::
59
+
60
+ 1. Run the `/connect` command in the TUI, select neocode, and head to [neo.khulnasoft.com/auth](https://neo.khulnasoft.com/auth).
61
+
62
+ ```txt
63
+ /connect
64
+ ```
65
+
66
+ 2. Sign in, add your billing details, and copy your API key.
67
+
68
+ 3. Paste your API key.
69
+
70
+ ```txt
71
+ ┌ API key
72
+
73
+
74
+ └ enter
75
+ ```
76
+
77
+ 4. Run `/models` in the TUI to see the list of models we recommend.
78
+
79
+ ```txt
80
+ /models
81
+ ```
82
+
83
+ It works like any other provider in NeoCode and is completely optional to use.
84
+
85
+ ---
86
+
87
+ ## Directory
88
+
89
+ Let's look at some of the providers in detail.
91
+
92
+ :::note
93
+ Don't see a provider here? Submit a PR.
94
+ :::
95
+
96
+ ---
97
+
98
+ ### 302.AI
99
+
100
+ 1. Head over to the [302.AI console](https://302.ai/), create an account, and generate an API key.
101
+
102
+ 2. Run the `/connect` command and search for **302.AI**.
103
+
104
+ ```txt
105
+ /connect
106
+ ```
107
+
108
+ 3. Enter your 302.AI API key.
109
+
110
+ ```txt
111
+ ┌ API key
112
+
113
+
114
+ └ enter
115
+ ```
116
+
117
+ 4. Run the `/models` command to select a model.
118
+
119
+ ```txt
120
+ /models
121
+ ```
122
+
123
+ ---
124
+
125
+ ### Amazon Bedrock
126
+
127
+ To use Amazon Bedrock with NeoCode:
128
+
129
+ 1. Head over to the **Model catalog** in the Amazon Bedrock console and request
130
+ access to the models you want.
131
+
132
+ :::tip
133
+ You need to have access to the model you want in Amazon Bedrock.
134
+ :::
135
+
136
+ 2. **Configure authentication** using one of the following methods:
137
+
138
+ #### Environment Variables (Quick Start)
139
+
140
+ Set one of these environment variables while running neocode:
141
+
142
+ ```bash
143
+ # Option 1: Using AWS access keys
144
+ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY neocode
145
+
146
+ # Option 2: Using named AWS profile
147
+ AWS_PROFILE=my-profile neocode
148
+
149
+ # Option 3: Using Bedrock bearer token
150
+ AWS_BEARER_TOKEN_BEDROCK=XXX neocode
151
+ ```
152
+
153
+ Or add them to your bash profile:
154
+
155
+ ```bash title="~/.bash_profile"
156
+ export AWS_PROFILE=my-dev-profile
157
+ export AWS_REGION=us-east-1
158
+ ```
159
+
160
+ #### Configuration File (Recommended)
161
+
162
+ For project-specific or persistent configuration, use `neocode.json`:
163
+
164
+ ```json title="neocode.json"
165
+ {
166
+ "$schema": "https://neo.khulnasoft.com/config.json",
167
+ "provider": {
168
+ "amazon-bedrock": {
169
+ "options": {
170
+ "region": "us-east-1",
171
+ "profile": "my-aws-profile"
172
+ }
173
+ }
174
+ }
175
+ }
176
+ ```
177
+
178
+ **Available options:**
179
+ - `region` - AWS region (e.g., `us-east-1`, `eu-west-1`)
180
+ - `profile` - AWS named profile from `~/.aws/credentials`
181
+ - `endpoint` - Custom endpoint URL for VPC endpoints (alias for generic `baseURL` option)
182
+
183
+ :::tip
184
+ Configuration file options take precedence over environment variables.
185
+ :::
186
+
187
+ #### Advanced: VPC Endpoints
188
+
189
+ If you're using VPC endpoints for Bedrock:
190
+
191
+ ```json title="neocode.json"
192
+ {
193
+ "$schema": "https://neo.khulnasoft.com/config.json",
194
+ "provider": {
195
+ "amazon-bedrock": {
196
+ "options": {
197
+ "region": "us-east-1",
198
+ "profile": "production",
199
+ "endpoint": "https://bedrock-runtime.us-east-1.vpce-xxxxx.amazonaws.com"
200
+ }
201
+ }
202
+ }
203
+ }
204
+ ```
205
+
206
+ :::note
207
+ The `endpoint` option is an alias for the generic `baseURL` option, using AWS-specific terminology. If both `endpoint` and `baseURL` are specified, `endpoint` takes precedence.
208
+ :::
209
+
210
+ #### Authentication Methods
211
+ - **`AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`**: Create an IAM user and generate access keys in the AWS Console
212
+ - **`AWS_PROFILE`**: Use named profiles from `~/.aws/credentials`. First configure with `aws configure --profile my-profile` or `aws sso login`
213
+ - **`AWS_BEARER_TOKEN_BEDROCK`**: Generate long-term API keys from the Amazon Bedrock console
214
+ - **`AWS_WEB_IDENTITY_TOKEN_FILE` / `AWS_ROLE_ARN`**: For EKS IRSA (IAM Roles for Service Accounts) or other Kubernetes environments with OIDC federation. These environment variables are automatically injected by Kubernetes when using service account annotations.
215
+
216
+ #### Authentication Precedence
217
+
218
+ Amazon Bedrock uses the following authentication priority:
219
+ 1. **Bearer Token** - `AWS_BEARER_TOKEN_BEDROCK` environment variable or token from `/connect` command
220
+ 2. **AWS Credential Chain** - Profile, access keys, shared credentials, IAM roles, Web Identity Tokens (EKS IRSA), instance metadata
221
+
222
+ :::note
223
+ When a bearer token is set (via `/connect` or `AWS_BEARER_TOKEN_BEDROCK`), it takes precedence over all AWS credential methods including configured profiles.
224
+ :::
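+
+ For example, if a bearer token is set in your shell but you want one run to use a named profile instead, you can unset it for that invocation. A sketch, not required setup:
+
+ ```bash
+ # Unset the bearer token for this run so the AWS credential chain applies
+ env -u AWS_BEARER_TOKEN_BEDROCK AWS_PROFILE=my-profile neocode
+ ```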
225
+
226
+ 3. Run the `/models` command to select the model you want.
227
+
228
+ ```txt
229
+ /models
230
+ ```
231
+
232
+ :::note
233
+ For custom inference profiles, use the model and provider name in the key and set the `id` property to the ARN. This ensures correct caching:
234
+
235
+ ```json title="neocode.json"
236
+ {
237
+ "$schema": "https://neo.khulnasoft.com/config.json",
238
+ "provider": {
239
+ "amazon-bedrock": {
240
+ // ...
241
+ "models": {
242
+ "anthropic-claude-sonnet-4.5": {
243
+ "id": "arn:aws:bedrock:us-east-1:xxx:application-inference-profile/yyy"
244
+ }
245
+ }
246
+ }
247
+ }
248
+ }
249
+ ```
250
+
251
+ :::
252
+
253
+ ---
254
+
255
+ ### Anthropic
256
+
257
+ 1. Once you've signed up for an Anthropic account, run the `/connect` command and select Anthropic.
258
+
259
+ ```txt
260
+ /connect
261
+ ```
262
+
263
+ 2. Here you can select the **Claude Pro/Max** option and it'll open your browser
264
+ and ask you to authenticate.
265
+
266
+ ```txt
267
+ ┌ Select auth method
268
+
269
+ │ Claude Pro/Max
270
+ │ Create an API Key
271
+ │ Manually enter API Key
272
+
273
+ ```
274
+
275
+ 3. Now all the Anthropic models should be available when you use the `/models` command.
276
+
277
+ ```txt
278
+ /models
279
+ ```
280
+
281
+ :::info
282
+ Using your Claude Pro/Max subscription in NeoCode is not officially supported by [Anthropic](https://anthropic.com).
283
+ :::
284
+
285
+ ##### Using API keys
286
+
287
+ You can also select **Create an API Key** if you don't have a Pro/Max subscription. It'll also open your browser, ask you to log in to Anthropic, and give you a code you can paste in your terminal.
288
+
289
+ Or if you already have an API key, you can select **Manually enter API Key** and paste it in your terminal.
290
+
291
+ ---
292
+
293
+ ### Azure OpenAI
294
+
295
+ :::note
296
+ If you encounter "I'm sorry, but I cannot assist with that request" errors, try changing the content filter from **DefaultV2** to **Default** in your Azure resource.
297
+ :::
298
+
299
+ 1. Head over to the [Azure portal](https://portal.azure.com/) and create an **Azure OpenAI** resource. You'll need:
300
+ - **Resource name**: This becomes part of your API endpoint (`https://RESOURCE_NAME.openai.azure.com/`)
301
+ - **API key**: Either `KEY 1` or `KEY 2` from your resource
302
+
303
+ 2. Go to [Azure AI Foundry](https://ai.azure.com/) and deploy a model.
304
+
305
+ :::note
306
+ The deployment name must match the model name for neocode to work properly.
307
+ :::
308
+
309
+ 3. Run the `/connect` command and search for **Azure**.
310
+
311
+ ```txt
312
+ /connect
313
+ ```
314
+
315
+ 4. Enter your API key.
316
+
317
+ ```txt
318
+ ┌ API key
319
+
320
+
321
+ └ enter
322
+ ```
323
+
324
+ 5. Set your resource name as an environment variable:
325
+
326
+ ```bash
327
+ AZURE_RESOURCE_NAME=XXX neocode
328
+ ```
329
+
330
+ Or add it to your bash profile:
331
+
332
+ ```bash title="~/.bash_profile"
333
+ export AZURE_RESOURCE_NAME=XXX
334
+ ```
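+
+ If you'd rather keep this in config than in your shell, the resource name can likely be set through provider options. A sketch, assuming the AI SDK Azure provider's `resourceName` option is passed through:
+
+ ```json title="neocode.json"
+ {
+   "$schema": "https://neo.khulnasoft.com/config.json",
+   "provider": {
+     "azure": {
+       "options": {
+         "resourceName": "XXX"
+       }
+     }
+   }
+ }
+ ```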
335
+
336
+ 6. Run the `/models` command to select your deployed model.
337
+
338
+ ```txt
339
+ /models
340
+ ```
341
+
342
+ ---
343
+
344
+ ### Azure Cognitive Services
345
+
346
+ 1. Head over to the [Azure portal](https://portal.azure.com/) and create an **Azure OpenAI** resource. You'll need:
347
+ - **Resource name**: This becomes part of your API endpoint (`https://AZURE_COGNITIVE_SERVICES_RESOURCE_NAME.cognitiveservices.azure.com/`)
348
+ - **API key**: Either `KEY 1` or `KEY 2` from your resource
349
+
350
+ 2. Go to [Azure AI Foundry](https://ai.azure.com/) and deploy a model.
351
+
352
+ :::note
353
+ The deployment name must match the model name for neocode to work properly.
354
+ :::
355
+
356
+ 3. Run the `/connect` command and search for **Azure Cognitive Services**.
357
+
358
+ ```txt
359
+ /connect
360
+ ```
361
+
362
+ 4. Enter your API key.
363
+
364
+ ```txt
365
+ ┌ API key
366
+
367
+
368
+ └ enter
369
+ ```
370
+
371
+ 5. Set your resource name as an environment variable:
372
+
373
+ ```bash
374
+ AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX neocode
375
+ ```
376
+
377
+ Or add it to your bash profile:
378
+
379
+ ```bash title="~/.bash_profile"
380
+ export AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX
381
+ ```
382
+
383
+ 6. Run the `/models` command to select your deployed model.
384
+
385
+ ```txt
386
+ /models
387
+ ```
388
+
389
+ ---
390
+
391
+ ### Baseten
392
+
393
+ 1. Head over to the [Baseten](https://app.baseten.co/), create an account, and generate an API key.
394
+
395
+ 2. Run the `/connect` command and search for **Baseten**.
396
+
397
+ ```txt
398
+ /connect
399
+ ```
400
+
401
+ 3. Enter your Baseten API key.
402
+
403
+ ```txt
404
+ ┌ API key
405
+
406
+
407
+ └ enter
408
+ ```
409
+
410
+ 4. Run the `/models` command to select a model.
411
+
412
+ ```txt
413
+ /models
414
+ ```
415
+
416
+ ---
417
+
418
+ ### Cerebras
419
+
420
+ 1. Head over to the [Cerebras console](https://inference.cerebras.ai/), create an account, and generate an API key.
421
+
422
+ 2. Run the `/connect` command and search for **Cerebras**.
423
+
424
+ ```txt
425
+ /connect
426
+ ```
427
+
428
+ 3. Enter your Cerebras API key.
429
+
430
+ ```txt
431
+ ┌ API key
432
+
433
+
434
+ └ enter
435
+ ```
436
+
437
+ 4. Run the `/models` command to select a model like _Qwen 3 Coder 480B_.
438
+
439
+ ```txt
440
+ /models
441
+ ```
442
+
443
+ ---
444
+
445
+ ### Cloudflare AI Gateway
446
+
447
+ Cloudflare AI Gateway lets you access models from OpenAI, Anthropic, Workers AI, and more through a unified endpoint. With [Unified Billing](https://developers.cloudflare.com/ai-gateway/features/unified-billing/) you don't need separate API keys for each provider.
448
+
449
+ 1. Head over to the [Cloudflare dashboard](https://dash.cloudflare.com/), navigate to **AI** > **AI Gateway**, and create a new gateway.
450
+
451
+ 2. Set your Account ID and Gateway ID as environment variables.
452
+
453
+ ```bash title="~/.bash_profile"
454
+ export CLOUDFLARE_ACCOUNT_ID=your-32-character-account-id
455
+ export CLOUDFLARE_GATEWAY_ID=your-gateway-id
456
+ ```
457
+
458
+ 3. Run the `/connect` command and search for **Cloudflare AI Gateway**.
459
+
460
+ ```txt
461
+ /connect
462
+ ```
463
+
464
+ 4. Enter your Cloudflare API token.
465
+
466
+ ```txt
467
+ ┌ API key
468
+
469
+
470
+ └ enter
471
+ ```
472
+
473
+ Or set it as an environment variable.
474
+
475
+ ```bash title="~/.bash_profile"
476
+ export CLOUDFLARE_API_TOKEN=your-api-token
477
+ ```
478
+
479
+ 5. Run the `/models` command to select a model.
480
+
481
+ ```txt
482
+ /models
483
+ ```
484
+
485
+ You can also add models through your neocode config.
486
+
487
+ ```json title="neocode.json"
488
+ {
489
+ "$schema": "https://neo.khulnasoft.com/config.json",
490
+ "provider": {
491
+ "cloudflare-ai-gateway": {
492
+ "models": {
493
+ "openai/gpt-4o": {},
494
+ "anthropic/claude-sonnet-4": {}
495
+ }
496
+ }
497
+ }
498
+ }
499
+ ```
500
+
501
+ ---
502
+
503
+ ### Cortecs
504
+
505
+ 1. Head over to the [Cortecs console](https://cortecs.ai/), create an account, and generate an API key.
506
+
507
+ 2. Run the `/connect` command and search for **Cortecs**.
508
+
509
+ ```txt
510
+ /connect
511
+ ```
512
+
513
+ 3. Enter your Cortecs API key.
514
+
515
+ ```txt
516
+ ┌ API key
517
+
518
+
519
+ └ enter
520
+ ```
521
+
522
+ 4. Run the `/models` command to select a model like _Kimi K2 Instruct_.
523
+
524
+ ```txt
525
+ /models
526
+ ```
527
+
528
+ ---
529
+
530
+ ### DeepSeek
531
+
532
+ 1. Head over to the [DeepSeek console](https://platform.deepseek.com/), create an account, and click **Create new API key**.
533
+
534
+ 2. Run the `/connect` command and search for **DeepSeek**.
535
+
536
+ ```txt
537
+ /connect
538
+ ```
539
+
540
+ 3. Enter your DeepSeek API key.
541
+
542
+ ```txt
543
+ ┌ API key
544
+
545
+
546
+ └ enter
547
+ ```
548
+
549
+ 4. Run the `/models` command to select a DeepSeek model like _DeepSeek Reasoner_.
550
+
551
+ ```txt
552
+ /models
553
+ ```
554
+
555
+ ---
556
+
557
+ ### Deep Infra
558
+
559
+ 1. Head over to the [Deep Infra dashboard](https://deepinfra.com/dash), create an account, and generate an API key.
560
+
561
+ 2. Run the `/connect` command and search for **Deep Infra**.
562
+
563
+ ```txt
564
+ /connect
565
+ ```
566
+
567
+ 3. Enter your Deep Infra API key.
568
+
569
+ ```txt
570
+ ┌ API key
571
+
572
+
573
+ └ enter
574
+ ```
575
+
576
+ 4. Run the `/models` command to select a model.
577
+
578
+ ```txt
579
+ /models
580
+ ```
581
+
582
+ ---
583
+
584
+ ### Firmware
585
+
586
+ 1. Head over to the [Firmware dashboard](https://app.firmware.ai/signup), create an account, and generate an API key.
587
+
588
+ 2. Run the `/connect` command and search for **Firmware**.
589
+
590
+ ```txt
591
+ /connect
592
+ ```
593
+
594
+ 3. Enter your Firmware API key.
595
+
596
+ ```txt
597
+ ┌ API key
598
+
599
+
600
+ └ enter
601
+ ```
602
+
603
+ 4. Run the `/models` command to select a model.
604
+
605
+ ```txt
606
+ /models
607
+ ```
608
+
609
+ ---
610
+
611
+ ### Fireworks AI
612
+
613
+ 1. Head over to the [Fireworks AI console](https://app.fireworks.ai/), create an account, and click **Create API Key**.
614
+
615
+ 2. Run the `/connect` command and search for **Fireworks AI**.
616
+
617
+ ```txt
618
+ /connect
619
+ ```
620
+
621
+ 3. Enter your Fireworks AI API key.
622
+
623
+ ```txt
624
+ ┌ API key
625
+
626
+
627
+ └ enter
628
+ ```
629
+
630
+ 4. Run the `/models` command to select a model like _Kimi K2 Instruct_.
631
+
632
+ ```txt
633
+ /models
634
+ ```
635
+
636
+ ---
637
+
638
+ ### GitLab Duo
639
+
640
+ GitLab Duo provides AI-powered agentic chat with native tool calling capabilities through GitLab's Anthropic proxy.
641
+
642
+ 1. Run the `/connect` command and select GitLab.
643
+
644
+ ```txt
645
+ /connect
646
+ ```
647
+
648
+ 2. Choose your authentication method:
649
+
650
+ ```txt
651
+ ┌ Select auth method
652
+
653
+ │ OAuth (Recommended)
654
+ │ Personal Access Token
655
+
656
+ ```
657
+
658
+ #### Using OAuth (Recommended)
659
+
660
+ Select **OAuth** and your browser will open for authorization.
661
+
662
+ #### Using Personal Access Token
663
+ 1. Go to [GitLab User Settings > Access Tokens](https://gitlab.com/-/user_settings/personal_access_tokens)
664
+ 2. Click **Add new token**
665
+ 3. Name: `NeoCode`, Scopes: `api`
666
+ 4. Copy the token (starts with `glpat-`)
667
+ 5. Enter it in the terminal
668
+
669
+ 3. Run the `/models` command to see available models.
670
+
671
+ ```txt
672
+ /models
673
+ ```
674
+
675
+ Three Claude-based models are available:
676
+ - **duo-chat-haiku-4-5** (Default) - Fast responses for quick tasks
677
+ - **duo-chat-sonnet-4-5** - Balanced performance for most workflows
678
+ - **duo-chat-opus-4-5** - Most capable for complex analysis
679
+
680
+ :::note
681
+ You can also set the `GITLAB_TOKEN` environment variable if you don't want
682
+ to store the token in NeoCode's auth storage.
683
+ :::
684
+
685
+ ##### Self-Hosted GitLab
686
+
687
+ :::note[compliance note]
688
+ NeoCode uses a small model for some AI tasks like generating the session title.
689
+ It is configured to use gpt-5-nano by default, hosted by Zen. To lock NeoCode
690
+ to only use your own GitLab-hosted instance, add the following to your
691
+ `neocode.json` file. It is also recommended to disable session sharing.
692
+
693
+ ```json
694
+ {
695
+ "$schema": "https://neo.khulnasoft.com/config.json",
696
+ "small_model": "gitlab/duo-chat-haiku-4-5",
697
+ "share": "disabled"
698
+ }
699
+ ```
700
+
701
+ :::
702
+
703
+ For self-hosted GitLab instances:
704
+
705
+ ```bash
706
+ export GITLAB_INSTANCE_URL=https://gitlab.company.com
707
+ export GITLAB_TOKEN=glpat-...
708
+ ```
709
+
710
+ If your instance runs a custom AI Gateway:
711
+
712
+ ```bash
713
+ export GITLAB_AI_GATEWAY_URL=https://ai-gateway.company.com
714
+ ```
715
+
716
+ Or add to your bash profile:
717
+
718
+ ```bash title="~/.bash_profile"
719
+ export GITLAB_INSTANCE_URL=https://gitlab.company.com
720
+ export GITLAB_AI_GATEWAY_URL=https://ai-gateway.company.com
721
+ export GITLAB_TOKEN=glpat-...
722
+ ```
723
+
724
+ :::note
725
+ Your GitLab administrator must enable the following:
726
+
727
+ 1. [Duo Agent Platform](https://docs.gitlab.com/user/gitlab_duo/turn_on_off/) for the user, group, or instance
728
+ 2. Feature flags (via Rails console):
729
+ - `agent_platform_claude_code`
730
+ - `third_party_agents_enabled`
731
+ :::
732
+
733
+ ##### OAuth for Self-Hosted Instances
734
+
735
+ To make OAuth work for your self-hosted instance, you need to create
736
+ a new application (Settings → Applications) with the
737
+ callback URL `http://127.0.0.1:8080/callback` and the following scopes:
738
+
739
+ - `api` (Access the API on your behalf)
740
+ - `read_user` (Read your personal information)
741
+ - `read_repository` (Allows read-only access to the repository)
742
+
743
+ Then expose the application ID as an environment variable:
744
+
745
+ ```bash
746
+ export GITLAB_OAUTH_CLIENT_ID=your_application_id_here
747
+ ```
748
+
749
+ More documentation is available on the [neocode-gitlab-auth](https://www.npmjs.com/package/@gitlab/neocode-gitlab-auth) homepage.
750
+
751
+ ##### Configuration
752
+
753
+ Customize through `neocode.json`:
754
+
755
+ ```json title="neocode.json"
756
+ {
757
+ "$schema": "https://neo.khulnasoft.com/config.json",
758
+ "provider": {
759
+ "gitlab": {
760
+ "options": {
761
+ "instanceUrl": "https://gitlab.com",
762
+ "featureFlags": {
763
+ "duo_agent_platform_agentic_chat": true,
764
+ "duo_agent_platform": true
765
+ }
766
+ }
767
+ }
768
+ }
769
+ }
770
+ ```
771
+
772
+ ##### GitLab API Tools (Optional, but highly recommended)
773
+
774
+ To access GitLab tools (merge requests, issues, pipelines, CI/CD, etc.):
775
+
776
+ ```json title="neocode.json"
777
+ {
778
+ "$schema": "https://neo.khulnasoft.com/config.json",
779
+ "plugin": ["@gitlab/neocode-gitlab-plugin"]
780
+ }
781
+ ```
782
+
783
+ This plugin provides comprehensive GitLab repository management capabilities including MR reviews, issue tracking, pipeline monitoring, and more.
784
+
785
+ ---
786
+
787
+ ### GitHub Copilot
788
+
789
+ To use your GitHub Copilot subscription with neocode:
790
+
791
+ :::note
792
+ Some models might need a [Pro+
793
+ subscription](https://github.com/features/copilot/plans) to use.
794
+
795
+ Some models need to be manually enabled in your [GitHub Copilot settings](https://docs.github.com/en/copilot/how-tos/use-ai-models/configure-access-to-ai-models#setup-for-individual-use).
796
+ :::
797
+
798
+ 1. Run the `/connect` command and search for GitHub Copilot.
799
+
800
+ ```txt
801
+ /connect
802
+ ```
803
+
804
+ 2. Navigate to [github.com/login/device](https://github.com/login/device) and enter the code.
805
+
806
+ ```txt
807
+ ┌ Login with GitHub Copilot
808
+
809
+ │ https://github.com/login/device
810
+
811
+ │ Enter code: 8F43-6FCF
812
+
813
+ └ Waiting for authorization...
814
+ ```
815
+
816
+ 3. Now run the `/models` command to select the model you want.
817
+
818
+ ```txt
819
+ /models
820
+ ```
821
+
822
+ ---
823
+
824
+ ### Google Vertex AI
825
+
826
+ To use Google Vertex AI with NeoCode:
827
+
828
+ 1. Head over to the **Model Garden** in the Google Cloud Console and check the
829
+ models available in your region.
830
+
831
+ :::note
832
+ You need to have a Google Cloud project with Vertex AI API enabled.
833
+ :::
834
+
835
+ 2. Set the required environment variables:
836
+ - `GOOGLE_CLOUD_PROJECT`: Your Google Cloud project ID
837
+ - `VERTEX_LOCATION` (optional): The region for Vertex AI (defaults to `global`)
838
+ - Authentication (choose one):
839
+ - `GOOGLE_APPLICATION_CREDENTIALS`: Path to your service account JSON key file
840
+ - Authenticate using gcloud CLI: `gcloud auth application-default login`
841
+
842
+ Set them while running neocode.
843
+
844
+ ```bash
845
+ GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json GOOGLE_CLOUD_PROJECT=your-project-id neocode
846
+ ```
847
+
848
+ Or add them to your bash profile.
849
+
850
+ ```bash title="~/.bash_profile"
851
+ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
852
+ export GOOGLE_CLOUD_PROJECT=your-project-id
853
+ export VERTEX_LOCATION=global
854
+ ```
855
+
856
+ :::tip
857
+ The `global` region improves availability and reduces errors at no extra cost. Use regional endpoints (e.g., `us-central1`) for data residency requirements. [Learn more](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-partner-models#regional_and_global_endpoints)
858
+ :::
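+
+ You can likely set the project and location through provider options instead of environment variables. A sketch, assuming the `google-vertex` provider ID and that options are passed to the AI SDK Vertex provider:
+
+ ```json title="neocode.json"
+ {
+   "$schema": "https://neo.khulnasoft.com/config.json",
+   "provider": {
+     "google-vertex": {
+       "options": {
+         "project": "your-project-id",
+         "location": "global"
+       }
+     }
+   }
+ }
+ ```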
859
+
860
+ 3. Run the `/models` command to select the model you want.
861
+
862
+ ```txt
863
+ /models
864
+ ```
865
+
866
+ ---
867
+
868
+ ### Groq
869
+
870
+ 1. Head over to the [Groq console](https://console.groq.com/), click **Create API Key**, and copy the key.
871
+
872
+ 2. Run the `/connect` command and search for Groq.
873
+
874
+ ```txt
875
+ /connect
876
+ ```
877
+
878
+ 3. Enter the API key for the provider.
879
+
880
+ ```txt
881
+ ┌ API key
882
+
883
+
884
+ └ enter
885
+ ```
886
+
887
+ 4. Run the `/models` command to select the one you want.
888
+
889
+ ```txt
890
+ /models
891
+ ```
892
+
893
+ ---
894
+
895
+ ### Hugging Face
896
+
897
+ [Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers) provides access to open models served by 17+ providers.
898
+
899
+ 1. Head over to [Hugging Face settings](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) to create a token with permission to make calls to Inference Providers.
900
+
901
+ 2. Run the `/connect` command and search for **Hugging Face**.
902
+
903
+ ```txt
904
+ /connect
905
+ ```
906
+
907
+ 3. Enter your Hugging Face token.
908
+
909
+ ```txt
910
+ ┌ API key
911
+
912
+
913
+ └ enter
914
+ ```
915
+
916
+ 4. Run the `/models` command to select a model like _Kimi-K2-Instruct_ or _GLM-4.6_.
917
+
918
+ ```txt
919
+ /models
920
+ ```
921
+
922
+ ---
923
+
924
+ ### Helicone
925
+
926
+ [Helicone](https://helicone.ai) is an LLM observability platform that provides logging, monitoring, and analytics for your AI applications. The Helicone AI Gateway routes your requests to the appropriate provider automatically based on the model.
927
+
928
+ 1. Head over to [Helicone](https://helicone.ai), create an account, and generate an API key from your dashboard.
929
+
930
+ 2. Run the `/connect` command and search for **Helicone**.
931
+
932
+ ```txt
933
+ /connect
934
+ ```
935
+
936
+ 3. Enter your Helicone API key.
937
+
938
+ ```txt
939
+ ┌ API key
940
+
941
+
942
+ └ enter
943
+ ```
944
+
945
+ 4. Run the `/models` command to select a model.
946
+
947
+ ```txt
948
+ /models
949
+ ```
950
+
951
+ For more providers and advanced features like caching and rate limiting, check the [Helicone documentation](https://docs.helicone.ai).
952
+
953
+ #### Optional Configs
954
+
955
+ If you see a feature or model from Helicone that isn't configured automatically through neocode, you can configure it yourself.
956
+
957
+ You'll need [Helicone's Model Directory](https://helicone.ai/models) to grab the IDs of the models you want to add.
958
+
959
+ ```jsonc title="~/.config/neocode/neocode.jsonc"
960
+ {
961
+ "$schema": "https://neo.khulnasoft.com/config.json",
962
+ "provider": {
963
+ "helicone": {
964
+ "npm": "@ai-sdk/openai-compatible",
965
+ "name": "Helicone",
966
+ "options": {
967
+ "baseURL": "https://ai-gateway.helicone.ai",
968
+ },
969
+ "models": {
970
+ "gpt-4o": {
971
+ // Model ID (from Helicone's model directory page)
972
+ "name": "GPT-4o", // Your own custom name for the model
973
+ },
974
+ "claude-sonnet-4-20250514": {
975
+ "name": "Claude Sonnet 4",
976
+ },
977
+ },
978
+ },
979
+ },
980
+ }
981
+ ```
982
+
983
+ #### Custom Headers
984
+
985
+ Helicone supports custom headers for features like caching, user tracking, and session management. Add them to your provider config using `options.headers`:
986
+
987
+ ```jsonc title="~/.config/neocode/neocode.jsonc"
988
+ {
989
+ "$schema": "https://neo.khulnasoft.com/config.json",
990
+ "provider": {
991
+ "helicone": {
992
+ "npm": "@ai-sdk/openai-compatible",
993
+ "name": "Helicone",
994
+ "options": {
995
+ "baseURL": "https://ai-gateway.helicone.ai",
996
+ "headers": {
997
+ "Helicone-Cache-Enabled": "true",
998
+ "Helicone-User-Id": "neocode",
999
+ },
1000
+ },
1001
+ },
1002
+ },
1003
+ }
1004
+ ```
1005
+
1006
+ ##### Session tracking
1007
+
1008
+ Helicone's [Sessions](https://docs.helicone.ai/features/sessions) feature lets you group related LLM requests together. Use the [neocode-helicone-session](https://github.com/H2Shami/neocode-helicone-session) plugin to automatically log each NeoCode conversation as a session in Helicone.
1009
+
1010
+ ```bash
1011
+ npm install -g neocode-helicone-session
1012
+ ```
1013
+
1014
+ Add it to your config.
1015
+
1016
+ ```json title="neocode.json"
1017
+ {
1018
+ "plugin": ["neocode-helicone-session"]
1019
+ }
1020
+ ```
1021
+
1022
+ The plugin injects `Helicone-Session-Id` and `Helicone-Session-Name` headers into your requests. In Helicone's Sessions page, you'll see each NeoCode conversation listed as a separate session.
1023
+
1024
+ ##### Common Helicone headers
1025
+
1026
+ | Header | Description |
1027
+ | -------------------------- | ------------------------------------------------------------- |
1028
+ | `Helicone-Cache-Enabled` | Enable response caching (`true`/`false`) |
1029
+ | `Helicone-User-Id` | Track metrics by user |
1030
+ | `Helicone-Property-[Name]` | Add custom properties (e.g., `Helicone-Property-Environment`) |
1031
+ | `Helicone-Prompt-Id` | Associate requests with prompt versions |
1032
+
1033
+ See the [Helicone Header Directory](https://docs.helicone.ai/helicone-headers/header-directory) for all available headers.
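+
+ For example, custom properties let you segment requests in Helicone's dashboard. A minimal sketch, where the `Environment` property name is only an illustration:
+
+ ```jsonc title="~/.config/neocode/neocode.jsonc"
+ {
+   "$schema": "https://neo.khulnasoft.com/config.json",
+   "provider": {
+     "helicone": {
+       "npm": "@ai-sdk/openai-compatible",
+       "options": {
+         "baseURL": "https://ai-gateway.helicone.ai",
+         "headers": {
+           // Tags every request with a custom "Environment" property
+           "Helicone-Property-Environment": "development",
+         },
+       },
+     },
+   },
+ }
+ ```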
1034
+
1035
+ ---
1036
+
1037
+ ### llama.cpp
1038
+
1039
+ You can configure neocode to use local models through [llama.cpp's](https://github.com/ggml-org/llama.cpp) `llama-server` utility.
1040
+
1041
+ ```json title="neocode.json" "llama.cpp" {5, 6, 8, 10-15}
1042
+ {
1043
+ "$schema": "https://neo.khulnasoft.com/config.json",
1044
+ "provider": {
1045
+ "llama.cpp": {
1046
+ "npm": "@ai-sdk/openai-compatible",
1047
+ "name": "llama-server (local)",
1048
+ "options": {
1049
+ "baseURL": "http://127.0.0.1:8080/v1"
1050
+ },
1051
+ "models": {
1052
+ "qwen3-coder:a3b": {
1053
+ "name": "Qwen3-Coder: a3b-30b (local)",
1054
+ "limit": {
1055
+ "context": 128000,
1056
+ "output": 65536
1057
+ }
1058
+ }
1059
+ }
1060
+ }
1061
+ }
1062
+ }
1063
+ ```
1064
+
1065
+ In this example:
1066
+
1067
+ - `llama.cpp` is the custom provider ID. This can be any string you want.
1068
+ - `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
1069
+ - `name` is the display name for the provider in the UI.
1070
+ - `options.baseURL` is the endpoint for the local server.
1071
+ - `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
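+
+ To pair with this config, `llama-server` needs to be listening on the same port. A sketch, where the model path is a placeholder:
+
+ ```bash
+ # Serve a local GGUF model over an OpenAI-compatible API on port 8080
+ llama-server -m ./qwen3-coder-30b-a3b.gguf --port 8080 -c 128000
+ ```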
1072
+
1073
+ ---
1074
+
1075
+ ### IO.NET
1076
+
1077
+ IO.NET offers 17 models optimized for various use cases.
1078
+
1079
+ 1. Head over to the [IO.NET console](https://ai.io.net/), create an account, and generate an API key.
1080
+
1081
+ 2. Run the `/connect` command and search for **IO.NET**.
1082
+
1083
+ ```txt
1084
+ /connect
1085
+ ```
1086
+
1087
+ 3. Enter your IO.NET API key.
1088
+
1089
+ ```txt
1090
+ ┌ API key
1091
+
1092
+
1093
+ └ enter
1094
+ ```
1095
+
1096
+ 4. Run the `/models` command to select a model.
1097
+
1098
+ ```txt
1099
+ /models
1100
+ ```
1101
+
1102
+ ---
1103
+
1104
+ ### LM Studio
1105
+
1106
+ You can configure neocode to use local models through LM Studio.
1107
+
1108
+ ```json title="neocode.json" "lmstudio" {5, 6, 8, 10-14}
1109
+ {
1110
+ "$schema": "https://neo.khulnasoft.com/config.json",
1111
+ "provider": {
1112
+ "lmstudio": {
1113
+ "npm": "@ai-sdk/openai-compatible",
1114
+ "name": "LM Studio (local)",
1115
+ "options": {
1116
+ "baseURL": "http://127.0.0.1:1234/v1"
1117
+ },
1118
+ "models": {
1119
+ "google/gemma-3n-e4b": {
1120
+ "name": "Gemma 3n-e4b (local)"
1121
+ }
1122
+ }
1123
+ }
1124
+ }
1125
+ }
1126
+ ```
1127
+
1128
+ In this example:
1129
+
1130
+ - `lmstudio` is the custom provider ID. This can be any string you want.
1131
+ - `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
1132
+ - `name` is the display name for the provider in the UI.
1133
+ - `options.baseURL` is the endpoint for the local server.
1134
+ - `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
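+
+ The local server needs to be running before NeoCode can reach it. A sketch, assuming the `lms` CLI that ships with LM Studio:
+
+ ```bash
+ # Start LM Studio's OpenAI-compatible server (defaults to port 1234)
+ lms server start
+ ```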
1135
+
1136
+ ---
1137
+
1138
+ ### Moonshot AI
1139
+
1140
+ To use Kimi K2 from Moonshot AI:
1141
+
1142
+ 1. Head over to the [Moonshot AI console](https://platform.moonshot.ai/console), create an account, and click **Create API key**.
1143
+
1144
+ 2. Run the `/connect` command and search for **Moonshot AI**.
1145
+
1146
+ ```txt
1147
+ /connect
1148
+ ```
1149
+
1150
+ 3. Enter your Moonshot API key.
1151
+
1152
+ ```txt
1153
+ ┌ API key
1154
+
1155
+
1156
+ └ enter
1157
+ ```
1158
+
1159
+ 4. Run the `/models` command to select _Kimi K2_.
1160
+
1161
+ ```txt
1162
+ /models
1163
+ ```
1164
+
1165
+ ---
1166
+
1167
+ ### MiniMax
1168
+
1169
+ 1. Head over to the [MiniMax API Console](https://platform.minimax.io/login), create an account, and generate an API key.
1170
+
1171
+ 2. Run the `/connect` command and search for **MiniMax**.
1172
+
1173
+ ```txt
1174
+ /connect
1175
+ ```
1176
+
1177
+ 3. Enter your MiniMax API key.
1178
+
1179
+ ```txt
1180
+ ┌ API key
1181
+
1182
+
1183
+ └ enter
1184
+ ```
1185
+
1186
+ 4. Run the `/models` command to select a model like _M2.1_.
1187
+
1188
+ ```txt
1189
+ /models
1190
+ ```
1191
+
1192
+ ---
1193
+
1194
+ ### Nebius Token Factory
1195
+
1196
+ 1. Head over to the [Nebius Token Factory console](https://tokenfactory.nebius.com/), create an account, and click **Add Key**.
1197
+
1198
+ 2. Run the `/connect` command and search for **Nebius Token Factory**.
1199
+
1200
+ ```txt
1201
+ /connect
1202
+ ```
1203
+
1204
+ 3. Enter your Nebius Token Factory API key.
1205
+
1206
+ ```txt
1207
+ ┌ API key
1208
+
1209
+
1210
+ └ enter
1211
+ ```
1212
+
1213
+ 4. Run the `/models` command to select a model like _Kimi K2 Instruct_.
1214
+
1215
+ ```txt
1216
+ /models
1217
+ ```
1218
+
1219
+ ---
1220
+
1221
+ ### Ollama
1222
+
1223
+ You can configure neocode to use local models through Ollama.
1224
+
1225
+ :::tip
1226
+ Ollama can automatically configure itself for NeoCode. See the [Ollama integration docs](https://docs.ollama.com/integrations/neocode) for details.
1227
+ :::
1228
+
1229
+ ```json title="neocode.json" "ollama" {5, 6, 8, 10-14}
1230
+ {
1231
+ "$schema": "https://neo.khulnasoft.com/config.json",
1232
+ "provider": {
1233
+ "ollama": {
1234
+ "npm": "@ai-sdk/openai-compatible",
1235
+ "name": "Ollama (local)",
1236
+ "options": {
1237
+ "baseURL": "http://localhost:11434/v1"
1238
+ },
1239
+ "models": {
1240
+ "llama2": {
1241
+ "name": "Llama 2"
1242
+ }
1243
+ }
1244
+ }
1245
+ }
1246
+ }
1247
+ ```
1248
+
1249
+ In this example:
1250
+
1251
+ - `ollama` is the custom provider ID. This can be any string you want.
1252
+ - `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
1253
+ - `name` is the display name for the provider in the UI.
1254
+ - `options.baseURL` is the endpoint for the local server.
1255
+ - `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
1256
+
1257
+ :::tip
1258
+ If tool calls aren't working, try increasing `num_ctx` in Ollama. Start around 16k - 32k.
1259
+ :::
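+
+ One way to raise `num_ctx` is a Modelfile that extends the base model. A sketch, assuming standard Ollama Modelfile syntax:
+
+ ```bash
+ # Build a model variant with a 32k context window
+ cat > Modelfile <<'EOF'
+ FROM llama2
+ PARAMETER num_ctx 32768
+ EOF
+ ollama create llama2-32k -f Modelfile
+ ```
+
+ You'd then use `llama2-32k` as the model ID in the `models` map above.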
1260
+
1261
+ ---
1262
+
1263
+ ### Ollama Cloud
1264
+
1265
+ To use Ollama Cloud with NeoCode:
1266
+
1267
+ 1. Head over to [https://ollama.com/](https://ollama.com/) and sign in or create an account.
1268
+
1269
+ 2. Navigate to **Settings** > **Keys** and click **Add API Key** to generate a new API key.
1270
+
1271
+ 3. Copy the API key for use in NeoCode.
1272
+
1273
+ 4. Run the `/connect` command and search for **Ollama Cloud**.
1274
+
1275
+ ```txt
1276
+ /connect
1277
+ ```
1278
+
1279
+ 5. Enter your Ollama Cloud API key.
1280
+
1281
+ ```txt
1282
+ ┌ API key
1283
+
1284
+
1285
+ └ enter
1286
+ ```
1287
+
1288
+ 6. **Important**: Before using cloud models in NeoCode, you must pull the model information locally:
1289
+
1290
+ ```bash
1291
+ ollama pull gpt-oss:20b-cloud
1292
+ ```
1293
+
1294
+ 7. Run the `/models` command to select your Ollama Cloud model.
1295
+
1296
+ ```txt
1297
+ /models
1298
+ ```
1299
+
1300
+ ---
1301
+
1302
+ ### OpenAI
1303
+
1304
+ We recommend signing up for [ChatGPT Plus or Pro](https://chatgpt.com/pricing).
1305
+
1306
+ 1. Once you've signed up, run the `/connect` command and select OpenAI.
1307
+
1308
+ ```txt
1309
+ /connect
1310
+ ```
1311
+
1312
+ 2. Here you can select the **ChatGPT Plus/Pro** option and it'll open your browser
1313
+ and ask you to authenticate.
1314
+
1315
+ ```txt
1316
+ ┌ Select auth method
1317
+
1318
+ │ ChatGPT Plus/Pro
1319
+ │ Manually enter API Key
1320
+
1321
+ ```
1322
+
1323
+ 3. Now all the OpenAI models should be available when you use the `/models` command.
1324
+
1325
+ ```txt
1326
+ /models
1327
+ ```
1328
+
1329
+ ##### Using API keys
1330
+
1331
+ If you already have an API key, you can select **Manually enter API Key** and paste it in your terminal.
1332
+
1333
+ ---
1334
+
1335
+ ### NeoCode Zen
1336
+
1337
+ NeoCode Zen is a list of tested and verified models provided by the NeoCode team. [Learn more](/docs/zen).
1338
+
1339
+ 1. Sign in to **<a href={console}>NeoCode Zen</a>** and click **Create API Key**.
1340
+
1341
+ 2. Run the `/connect` command and search for **NeoCode Zen**.
1342
+
1343
+ ```txt
1344
+ /connect
1345
+ ```
1346
+
1347
+ 3. Enter your NeoCode API key.
1348
+
1349
+ ```txt
1350
+ ┌ API key
1351
+
1352
+
1353
+ └ enter
1354
+ ```
1355
+
1356
+ 4. Run the `/models` command to select a model like _Qwen 3 Coder 480B_.
1357
+
1358
+ ```txt
1359
+ /models
1360
+ ```
1361
+
1362
+ ---
1363
+
1364
+ ### OpenRouter
1365
+
1366
+ 1. Head over to the [OpenRouter dashboard](https://openrouter.ai/settings/keys), click **Create API Key**, and copy the key.
1367
+
1368
+ 2. Run the `/connect` command and search for OpenRouter.
1369
+
1370
+ ```txt
1371
+ /connect
1372
+ ```
1373
+
1374
+ 3. Enter the API key for the provider.
1375
+
1376
+ ```txt
1377
+ ┌ API key
1378
+
1379
+
1380
+ └ enter
1381
+ ```
1382
+
1383
+ 4. Many OpenRouter models are preloaded by default. Run the `/models` command to select the one you want.
1384
+
1385
+ ```txt
1386
+ /models
1387
+ ```
1388
+
1389
+ You can also add additional models through your neocode config.
1390
+
1391
+ ```json title="neocode.json" {6}
1392
+ {
1393
+ "$schema": "https://neo.khulnasoft.com/config.json",
1394
+ "provider": {
1395
+ "openrouter": {
1396
+ "models": {
1397
+ "somecoolnewmodel": {}
1398
+ }
1399
+ }
1400
+ }
1401
+ }
1402
+ ```
1403
+
1404
+ 5. You can also customize them through your neocode config. Here's an example of specifying provider routing:
1405
+
1406
+ ```json title="neocode.json"
1407
+ {
1408
+ "$schema": "https://neo.khulnasoft.com/config.json",
1409
+ "provider": {
1410
+ "openrouter": {
1411
+ "models": {
1412
+ "moonshotai/kimi-k2": {
1413
+ "options": {
1414
+ "provider": {
1415
+ "order": ["baseten"],
1416
+ "allow_fallbacks": false
1417
+ }
1418
+ }
1419
+ }
1420
+ }
1421
+ }
1422
+ }
1423
+ }
1424
+ ```
1425
+
1426
+ ---
1427
+
1428
+ ### SAP AI Core
1429
+
1430
+ SAP AI Core provides access to 40+ models from OpenAI, Anthropic, Google, Amazon, Meta, Mistral, and AI21 through a unified platform.
1431
+
1432
+ 1. Go to your [SAP BTP Cockpit](https://account.hana.ondemand.com/), navigate to your SAP AI Core service instance, and create a service key.
1433
+
1434
+ :::tip
1435
+ The service key is a JSON object containing `clientid`, `clientsecret`, `url`, and `serviceurls.AI_API_URL`. You can find your AI Core instance under **Services** > **Instances and Subscriptions** in the BTP Cockpit.
1436
+ :::
1437
+
1438
+ 2. Run the `/connect` command and search for **SAP AI Core**.
1439
+
1440
+ ```txt
1441
+ /connect
1442
+ ```
1443
+
1444
+ 3. Enter your service key JSON.
1445
+
1446
+ ```txt
1447
+ ┌ Service key
1448
+
1449
+
1450
+ └ enter
1451
+ ```
1452
+
1453
+ Or set the `AICORE_SERVICE_KEY` environment variable:
1454
+
1455
+ ```bash
1456
+ AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}' neocode
1457
+ ```
1458
+
1459
+ Or add it to your bash profile:
1460
+
1461
+ ```bash title="~/.bash_profile"
1462
+ export AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}'
1463
+ ```
1464
+
1465
+ 4. Optionally set deployment ID and resource group:
1466
+
1467
+ ```bash
1468
+ AICORE_DEPLOYMENT_ID=your-deployment-id AICORE_RESOURCE_GROUP=your-resource-group neocode
1469
+ ```
1470
+
1471
+ :::note
1472
+ These settings are optional and should be configured according to your SAP AI Core setup.
1473
+ :::
1474
+
1475
+ 5. Run the `/models` command to select from 40+ available models.
1476
+
1477
+ ```txt
1478
+ /models
1479
+ ```
1480
+
1481
+ ---
1482
+
1483
+ ### OVHcloud AI Endpoints
1484
+
1485
+ 1. Head over to the [OVHcloud panel](https://ovh.com/manager). Navigate to the `Public Cloud` section, then `AI & Machine Learning` > `AI Endpoints`, and in the `API Keys` tab, click **Create a new API key**.
1486
+
1487
+ 2. Run the `/connect` command and search for **OVHcloud AI Endpoints**.
1488
+
1489
+ ```txt
1490
+ /connect
1491
+ ```
1492
+
1493
+ 3. Enter your OVHcloud AI Endpoints API key.
1494
+
1495
+ ```txt
1496
+ ┌ API key
1497
+
1498
+
1499
+ └ enter
1500
+ ```
1501
+
1502
+ 4. Run the `/models` command to select a model like _gpt-oss-120b_.
1503
+
1504
+ ```txt
1505
+ /models
1506
+ ```
1507
+
1508
+ ---
1509
+
1510
+ ### Scaleway
1511
+
1512
+ To use [Scaleway Generative APIs](https://www.scaleway.com/en/docs/generative-apis/) with NeoCode:
1513
+
1514
+ 1. Head over to the [Scaleway Console IAM settings](https://console.scaleway.com/iam/api-keys) to generate a new API key.
1515
+
1516
+ 2. Run the `/connect` command and search for **Scaleway**.
1517
+
1518
+ ```txt
1519
+ /connect
1520
+ ```
1521
+
1522
+ 3. Enter your Scaleway API key.
1523
+
1524
+ ```txt
1525
+ ┌ API key
1526
+
1527
+
1528
+ └ enter
1529
+ ```
1530
+
1531
+ 4. Run the `/models` command to select a model like _devstral-2-123b-instruct-2512_ or _gpt-oss-120b_.
1532
+
1533
+ ```txt
1534
+ /models
1535
+ ```
1536
+
1537
+ ---
1538
+
1539
+ ### Together AI
1540
+
1541
+ 1. Head over to the [Together AI console](https://api.together.ai), create an account, and click **Add Key**.
1542
+
1543
+ 2. Run the `/connect` command and search for **Together AI**.
1544
+
1545
+ ```txt
1546
+ /connect
1547
+ ```
1548
+
1549
+ 3. Enter your Together AI API key.
1550
+
1551
+ ```txt
1552
+ ┌ API key
1553
+
1554
+
1555
+ └ enter
1556
+ ```
1557
+
1558
+ 4. Run the `/models` command to select a model like _Kimi K2 Instruct_.
1559
+
1560
+ ```txt
1561
+ /models
1562
+ ```
1563
+
1564
+ ---
1565
+
1566
+ ### Venice AI
1567
+
1568
+ 1. Head over to the [Venice AI console](https://venice.ai), create an account, and generate an API key.
1569
+
1570
+ 2. Run the `/connect` command and search for **Venice AI**.
1571
+
1572
+ ```txt
1573
+ /connect
1574
+ ```
1575
+
1576
+ 3. Enter your Venice AI API key.
1577
+
1578
+ ```txt
1579
+ ┌ API key
1580
+
1581
+
1582
+ └ enter
1583
+ ```
1584
+
1585
+ 4. Run the `/models` command to select a model like _Llama 3.3 70B_.
1586
+
1587
+ ```txt
1588
+ /models
1589
+ ```
1590
+
1591
+ ---
1592
+
1593
+ ### Vercel AI Gateway
1594
+
1595
+ Vercel AI Gateway lets you access models from OpenAI, Anthropic, Google, xAI, and more through a unified endpoint. Models are offered at list price with no markup.
1596
+
1597
+ 1. Head over to the [Vercel dashboard](https://vercel.com/), navigate to the **AI Gateway** tab, and click **API keys** to create a new API key.
1598
+
1599
+ 2. Run the `/connect` command and search for **Vercel AI Gateway**.
1600
+
1601
+ ```txt
1602
+ /connect
1603
+ ```
1604
+
1605
+ 3. Enter your Vercel AI Gateway API key.
1606
+
1607
+ ```txt
1608
+ ┌ API key
1609
+
1610
+
1611
+ └ enter
1612
+ ```
1613
+
1614
+ 4. Run the `/models` command to select a model.
1615
+
1616
+ ```txt
1617
+ /models
1618
+ ```
1619
+
1620
+ You can also customize models through your neocode config. Here's an example of specifying provider routing order.
1621
+
1622
+ ```json title="neocode.json"
1623
+ {
1624
+ "$schema": "https://neo.khulnasoft.com/config.json",
1625
+ "provider": {
1626
+ "vercel": {
1627
+ "models": {
1628
+ "anthropic/claude-sonnet-4": {
1629
+ "options": {
1630
+ "order": ["anthropic", "vertex"]
1631
+ }
1632
+ }
1633
+ }
1634
+ }
1635
+ }
1636
+ }
1637
+ ```
1638
+
1639
+ Some useful routing options:
1640
+
1641
+ | Option | Description |
1642
+ | ------------------- | ---------------------------------------------------- |
1643
+ | `order` | Provider sequence to try |
1644
+ | `only` | Restrict to specific providers |
1645
+ | `zeroDataRetention` | Only use providers with zero data retention policies |
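+
+ For instance, to route a model only through zero-retention providers, you'd combine the table above with the earlier example. A sketch:
+
+ ```json title="neocode.json"
+ {
+   "$schema": "https://neo.khulnasoft.com/config.json",
+   "provider": {
+     "vercel": {
+       "models": {
+         "anthropic/claude-sonnet-4": {
+           "options": {
+             "zeroDataRetention": true
+           }
+         }
+       }
+     }
+   }
+ }
+ ```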
1646
+
1647
+ ---
1648
+
1649
+ ### xAI
1650
+
1651
+ 1. Head over to the [xAI console](https://console.x.ai/), create an account, and generate an API key.
1652
+
1653
+ 2. Run the `/connect` command and search for **xAI**.
1654
+
1655
+ ```txt
1656
+ /connect
1657
+ ```
1658
+
1659
+ 3. Enter your xAI API key.
1660
+
1661
+ ```txt
1662
+ ┌ API key
1663
+
1664
+
1665
+ └ enter
1666
+ ```
1667
+
1668
+ 4. Run the `/models` command to select a model like _Grok Beta_.
1669
+
1670
+ ```txt
1671
+ /models
1672
+ ```
1673
+
1674
+ ---
1675
+
1676
+ ### Z.AI
1677
+
1678
+ 1. Head over to the [Z.AI API console](https://z.ai/manage-apikey/apikey-list), create an account, and click **Create a new API key**.
1679
+
1680
+ 2. Run the `/connect` command and search for **Z.AI**.
1681
+
1682
+ ```txt
1683
+ /connect
1684
+ ```
1685
+
1686
+ If you are subscribed to the **GLM Coding Plan**, select **Z.AI Coding Plan**.
1687
+
1688
+ 3. Enter your Z.AI API key.
1689
+
1690
+ ```txt
1691
+ ┌ API key
1692
+
1693
+
1694
+ └ enter
1695
+ ```
1696
+
1697
+ 4. Run the `/models` command to select a model like _GLM-4.7_.
1698
+
1699
+ ```txt
1700
+ /models
1701
+ ```
1702
+
1703
+ ---
1704
+
1705
+ ### ZenMux
1706
+
1707
+ 1. Head over to the [ZenMux dashboard](https://zenmux.ai/settings/keys), click **Create API Key**, and copy the key.
1708
+
1709
+ 2. Run the `/connect` command and search for ZenMux.
1710
+
1711
+ ```txt
1712
+ /connect
1713
+ ```
1714
+
1715
+ 3. Enter the API key for the provider.
1716
+
1717
+ ```txt
1718
+ ┌ API key
1719
+
1720
+
1721
+ └ enter
1722
+ ```
1723
+
1724
+ 4. Many ZenMux models are preloaded by default. Run the `/models` command to select the one you want.
1725
+
1726
+ ```txt
1727
+ /models
1728
+ ```
1729
+
1730
+ You can also add additional models through your neocode config.
1731
+
1732
+ ```json title="neocode.json" {6}
1733
+ {
1734
+ "$schema": "https://neo.khulnasoft.com/config.json",
1735
+ "provider": {
1736
+ "zenmux": {
1737
+ "models": {
1738
+ "somecoolnewmodel": {}
1739
+ }
1740
+ }
1741
+ }
1742
+ }
1743
+ ```
1744
+
1745
+ ---
1746
+
1747
+ ## Custom provider
1748
+
1749
+ To add any **OpenAI-compatible** provider that's not listed in the `/connect` command:
1750
+
1751
+ :::tip
1752
+ You can use any OpenAI-compatible provider with neocode. Most modern AI providers offer OpenAI-compatible APIs.
1753
+ :::
1754
+
1755
+ 1. Run the `/connect` command and scroll down to **Other**.
1756
+
1757
+ ```bash
1758
+ $ /connect
1759
+
1760
+ ┌ Add credential
1761
+
1762
+ ◆ Select provider
1763
+ │ ...
1764
+ │ ● Other
1765
+
1766
+ ```
1767
+
1768
+ 2. Enter a unique ID for the provider.
1769
+
1770
+ ```bash
1771
+ $ /connect
1772
+
1773
+ ┌ Add credential
1774
+
1775
+ ◇ Enter provider id
1776
+ │ myprovider
1777
+
1778
+ ```
1779
+
1780
+ :::note
1781
+ Choose a memorable ID; you'll use this in your config file.
1782
+ :::
1783
+
1784
+ 3. Enter your API key for the provider.
1785
+
1786
+ ```bash
1787
+ $ /connect
1788
+
1789
+ ┌ Add credential
1790
+
1791
+ ▲ This only stores a credential for myprovider - you will need to configure it in neocode.json, check the docs for examples.
1792
+
1793
+ ◇ Enter your API key
1794
+ │ sk-...
1795
+
1796
+ ```
1797
+
1798
+ 4. Create or update your `neocode.json` file in your project directory:
1799
+
1800
+ ```json title="neocode.json" ""myprovider"" {5-15}
1801
+ {
1802
+ "$schema": "https://neo.khulnasoft.com/config.json",
1803
+ "provider": {
1804
+ "myprovider": {
1805
+ "npm": "@ai-sdk/openai-compatible",
1806
+ "name": "My AI ProviderDisplay Name",
1807
+ "options": {
1808
+ "baseURL": "https://api.myprovider.com/v1"
1809
+ },
1810
+ "models": {
1811
+ "my-model-name": {
1812
+ "name": "My Model Display Name"
1813
+ }
1814
+ }
1815
+ }
1816
+ }
1817
+ }
1818
+ ```
1819
+
1820
+ Here are the configuration options:
1821
+ - **npm**: AI SDK package to use, `@ai-sdk/openai-compatible` for OpenAI-compatible providers.
1822
+ - **name**: Display name in UI.
1823
+ - **models**: Available models.
1824
+ - **options.baseURL**: API endpoint URL.
1825
+ - **options.apiKey**: Optionally set the API key, if not using auth.
1826
+ - **options.headers**: Optionally set custom headers.
1827
+
1828
+ More on the advanced options in the example below.
1829
+
1830
+ 5. Run the `/models` command and your custom provider and models will appear in the selection list.
1831
+
1832
+ ---
1833
+
1834
+ ##### Example
1835
+
1836
+ Here's an example setting the `apiKey`, `headers`, and model `limit` options.
1837
+
1838
+ ```json title="neocode.json" {9,11,17-20}
1839
+ {
1840
+ "$schema": "https://neo.khulnasoft.com/config.json",
1841
+ "provider": {
1842
+ "myprovider": {
1843
+ "npm": "@ai-sdk/openai-compatible",
1844
+ "name": "My AI ProviderDisplay Name",
1845
+ "options": {
1846
+ "baseURL": "https://api.myprovider.com/v1",
1847
+ "apiKey": "{env:ANTHROPIC_API_KEY}",
1848
+ "headers": {
1849
+ "Authorization": "Bearer custom-token"
1850
+ }
1851
+ },
1852
+ "models": {
1853
+ "my-model-name": {
1854
+ "name": "My Model Display Name",
1855
+ "limit": {
1856
+ "context": 200000,
1857
+ "output": 65536
1858
+ }
1859
+ }
1860
+ }
1861
+ }
1862
+ }
1863
+ }
1864
+ ```
1865
+
1866
+ Configuration details:
1867
+
1868
+ - **apiKey**: Set using `env` variable syntax, [learn more](/docs/config#env-vars).
1869
+ - **headers**: Custom headers sent with each request.
1870
+ - **limit.context**: Maximum input tokens the model accepts.
1871
+ - **limit.output**: Maximum tokens the model can generate.
1872
+
1873
+ The `limit` fields allow NeoCode to understand how much context you have left. Standard providers pull these from models.dev automatically.
1874
+
1875
+ ---
1876
+
1877
+ ## Troubleshooting
1878
+
1879
+ If you are having trouble configuring a provider, check the following:
1880
+
1881
+ 1. **Check the auth setup**: Run `neocode auth list` to see if the credentials
1882
+ for the provider are added to your config.
1883
+
1884
+ This doesn't apply to providers like Amazon Bedrock, which rely on environment variables for their auth.
1885
+
1886
+ 2. For custom providers, check the neocode config and make sure:
1887
+ - The provider ID used in the `/connect` command matches the ID in your neocode config.
1888
+ - The right npm package is used for the provider. For example, use `@ai-sdk/cerebras` for Cerebras. For all other OpenAI-compatible providers, use `@ai-sdk/openai-compatible`.
1889
+ - The correct API endpoint is set in the `options.baseURL` field.
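+
+ For example, a custom Cerebras entry would pair the provider ID you used in `/connect` with the matching package. A sketch; the model ID is only illustrative:
+
+ ```json title="neocode.json"
+ {
+   "$schema": "https://neo.khulnasoft.com/config.json",
+   "provider": {
+     "cerebras": {
+       "npm": "@ai-sdk/cerebras",
+       "models": {
+         "qwen-3-coder-480b": {}
+       }
+     }
+   }
+ }
+ ```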