@theclawlab/pai 1.0.0 → 1.0.2
- package/README.md +8 -0
- package/USAGE.md +920 -0
- package/package.json +3 -2
package/README.md (CHANGED)
package/USAGE.md (ADDED)
# PAI Usage Guide

PAI is a Unix-style command-line tool for interacting with Large Language Models (LLMs).

## Installation

```bash
npm install
npm run build
npm link # or use: node dist/index.js
```

## Quick Start

### 1. Configure a Provider

PAI supports many providers via `@mariozechner/pi-ai`. Different providers use different authentication methods.

```bash
# API Key providers (e.g. OpenAI)
pai model config --add --name openai --provider openai --set apiKey=sk-...

# OAuth providers (e.g. GitHub Copilot) — interactive browser login
pai model login --name github-copilot
```

### 2. List Available Providers

```bash
# List configured providers
pai model list

# List all supported providers
pai model list --all
```

### 3. Set a Default Provider

```bash
# Set default provider (so you don't need --provider every time)
pai model default --name openai

# Or set it when adding a provider
pai model config --add --name openai --provider openai --set apiKey=sk-... --default

# View current default
pai model default
```

### 4. Chat with an LLM

```bash
# Simple chat
pai chat "Hello, how are you?" --provider openai --model gpt-4o-mini

# Using stdin
echo "Explain quantum computing" | pai chat --provider openai --model gpt-4o-mini

# With streaming output
pai chat "Write a story" --stream --provider openai --model gpt-4o
```

## Provider Configuration

PAI providers fall into several authentication categories. All configuration is done through `pai` CLI commands — no external config files needed.

### Category 1: API Key Providers

These providers require a simple API key. Configure with `--set apiKey=<key>`.

#### OpenAI

```bash
pai model config --add --name openai --provider openai \
  --set apiKey=sk-... \
  --set defaultModel=gpt-4o-mini
```

Environment variable alternative: `OPENAI_API_KEY`

#### Anthropic (API Key)

```bash
pai model config --add --name anthropic --provider anthropic \
  --set apiKey=sk-ant-... \
  --set defaultModel=claude-sonnet-4-20250514
```

Environment variable alternative: `ANTHROPIC_API_KEY`

#### Google Gemini (API Key)

```bash
pai model config --add --name google --provider google \
  --set apiKey=AIza... \
  --set defaultModel=gemini-2.5-flash
```

Environment variable alternative: `GEMINI_API_KEY`

#### Groq

```bash
pai model config --add --name groq --provider groq \
  --set apiKey=gsk_... \
  --set defaultModel=llama-3.3-70b-versatile
```

Environment variable alternative: `GROQ_API_KEY`

#### xAI (Grok)

```bash
pai model config --add --name xai --provider xai \
  --set apiKey=xai-... \
  --set defaultModel=grok-3-mini
```

Environment variable alternative: `XAI_API_KEY`

#### Cerebras

```bash
pai model config --add --name cerebras --provider cerebras \
  --set apiKey=csk-...
```

Environment variable alternative: `CEREBRAS_API_KEY`

#### Mistral

```bash
pai model config --add --name mistral --provider mistral \
  --set apiKey=... \
  --set defaultModel=mistral-large-latest
```

Environment variable alternative: `MISTRAL_API_KEY`

#### OpenRouter

```bash
pai model config --add --name openrouter --provider openrouter \
  --set apiKey=sk-or-... \
  --set defaultModel=anthropic/claude-sonnet-4
```

Environment variable alternative: `OPENROUTER_API_KEY`

#### HuggingFace

```bash
pai model config --add --name huggingface --provider huggingface \
  --set apiKey=hf_...
```

Environment variable alternative: `HF_TOKEN`

#### MiniMax / MiniMax CN

```bash
pai model config --add --name minimax --provider minimax \
  --set apiKey=...

pai model config --add --name minimax-cn --provider minimax-cn \
  --set apiKey=...
```

Environment variable alternatives: `MINIMAX_API_KEY` / `MINIMAX_CN_API_KEY`

#### Kimi Coding

```bash
pai model config --add --name kimi-coding --provider kimi-coding \
  --set apiKey=...
```

Environment variable alternative: `KIMI_API_KEY`

---

### Category 2: OAuth Providers (Interactive Login)

These providers use OAuth device code or browser-based login flows. Use `pai model login` to authenticate interactively. Credentials (refresh token, access token, expiry) are stored in the PAI config file and automatically refreshed when expired.

#### GitHub Copilot

Requires a GitHub Copilot subscription. Uses OAuth device code flow — a browser window opens for GitHub authorization.

```bash
# Step 1: Login (interactive — opens browser)
pai model login --name github-copilot

# The flow will:
# 1. Ask for GitHub Enterprise URL (press Enter for github.com)
# 2. Open a browser URL for device code authorization
# 3. Display a user code to enter in the browser
# 4. After authorization, save credentials to PAI config

# Step 2: Chat
pai chat "Hello" --provider github-copilot --model gpt-4o
```

After login, the config file will contain the OAuth credentials under the `github-copilot` provider:

```json
{
  "schema_version": "1.0.0",
  "providers": [
    {
      "name": "github-copilot",
      "oauth": {
        "refresh": "ghu_...",
        "access": "tid=...;exp=...;proxy-ep=...",
        "expires": 1773219099000
      }
    }
  ]
}
```

For GitHub Enterprise:

```bash
pai model login --name github-copilot
# When prompted for "GitHub Enterprise URL/domain", enter: company.ghe.com
```

Environment variable alternative: `COPILOT_GITHUB_TOKEN` or `GH_TOKEN` or `GITHUB_TOKEN`

#### Anthropic (Claude Pro/Max via OAuth)

For Claude Pro/Max subscription users. Uses PKCE OAuth flow — opens a browser for Anthropic authorization, then you paste the authorization code back.

```bash
# Step 1: Login (interactive — opens browser)
pai model login --name anthropic

# The flow will:
# 1. Open a browser URL for Anthropic OAuth authorization
# 2. After authorizing, you'll get a code (format: code#state)
# 3. Paste the code back into the terminal
# 4. Credentials saved to PAI config

# Step 2: Chat
pai chat "Hello" --provider anthropic --model claude-sonnet-4-20250514
```

Note: If you have an Anthropic API key (not a subscription), use the API Key method instead:

```bash
pai model config --add --name anthropic --provider anthropic --set apiKey=sk-ant-...
```

Environment variable alternative: `ANTHROPIC_OAUTH_TOKEN` (takes precedence over `ANTHROPIC_API_KEY`)

#### Google Gemini CLI (Google Cloud Code Assist)

Free tier available. Uses Google OAuth with a local callback server — a browser window opens for Google account authorization.

```bash
# Step 1: Login (interactive — opens browser)
pai model login --name google-gemini-cli

# The flow will:
# 1. Start a local server on port 8085 for OAuth callback
# 2. Open a browser URL for Google account authorization
# 3. After authorization, automatically discover/provision a Cloud project
# 4. Credentials (including projectId) saved to PAI config

# Step 2: Chat
pai chat "Hello" --provider google-gemini-cli --model gemini-2.5-flash
```

For workspace/enterprise users, set the project ID via environment variable before login:

```bash
export GOOGLE_CLOUD_PROJECT=my-project-id
pai model login --name google-gemini-cli
```

#### Google Antigravity (Gemini 3, Claude, GPT-OSS via Google Cloud)

Access to additional models (Gemini 3, Claude, GPT-OSS) through Google Cloud. Uses a different OAuth flow than `google-gemini-cli`.

```bash
# Step 1: Login (interactive — opens browser)
pai model login --name google-antigravity

# The flow will:
# 1. Start a local server on port 51121 for OAuth callback
# 2. Open a browser URL for Google account authorization
# 3. Discover/provision a Cloud project
# 4. Credentials saved to PAI config

# Step 2: Chat
pai chat "Hello" --provider google-antigravity --model <model-id>
```

#### OpenAI Codex (ChatGPT Plus/Pro Subscription)

For ChatGPT Plus/Pro subscribers. Uses PKCE OAuth with a local callback server.

```bash
# Step 1: Login (interactive — opens browser)
pai model login --name openai-codex

# The flow will:
# 1. Start a local server on port 1455 for OAuth callback
# 2. Open a browser URL for OpenAI authorization
# 3. After authorization, extract accountId from JWT token
# 4. Credentials saved to PAI config

# Step 2: Chat
pai chat "Hello" --provider openai-codex --model codex-mini
```

#### List All OAuth Providers

```bash
# See which providers support OAuth login
pai model login --name help
# Supported: github-copilot, anthropic, google-gemini-cli, google-antigravity, openai-codex
```

---

### Category 3: Azure OpenAI (API Key + Endpoint Configuration)

Azure OpenAI requires an API key plus Azure-specific endpoint configuration (resource name or base URL, deployment name, API version).

```bash
# Method 1: Using baseUrl (recommended)
pai model config --add --name my-azure --provider azure-openai-responses \
  --set apiKey=your-azure-api-key \
  --set defaultModel=gpt-4o \
  --set api=azure-openai-responses \
  --set baseUrl=https://my-resource.openai.azure.com/openai/v1 \
  --set "providerOptions.azureApiVersion=v1" \
  --set "providerOptions.azureDeploymentName=gpt-4o"

# Method 2: Using environment variables
export AZURE_OPENAI_API_KEY=your-azure-api-key
export AZURE_OPENAI_BASE_URL=https://my-resource.openai.azure.com/openai/v1
export AZURE_OPENAI_API_VERSION=v1

pai model config --add --name my-azure --provider azure-openai-responses \
  --set defaultModel=gpt-4o \
  --set api=azure-openai-responses \
  --set "providerOptions.azureDeploymentName=gpt-4o"

# Chat
pai chat "Hello" --provider my-azure --model gpt-4o
```

Full Azure config file example:

```json
{
  "schema_version": "1.0.0",
  "defaultProvider": "my-azure",
  "providers": [
    {
      "name": "my-azure",
      "defaultModel": "gpt-4o",
      "apiKey": "your-azure-api-key",
      "api": "azure-openai-responses",
      "baseUrl": "https://my-resource.openai.azure.com/openai/v1",
      "reasoning": false,
      "input": ["text", "image"],
      "contextWindow": 128000,
      "maxTokens": 16384,
      "providerOptions": {
        "azureApiVersion": "v1",
        "azureDeploymentName": "gpt-4o"
      }
    }
  ]
}
```

Azure-specific environment variables:

- `AZURE_OPENAI_API_KEY` — API key
- `AZURE_OPENAI_BASE_URL` — Full base URL (e.g. `https://my-resource.openai.azure.com/openai/v1`)
- `AZURE_OPENAI_RESOURCE_NAME` — Resource name (alternative to base URL, constructs `https://<name>.openai.azure.com/openai/v1`)
- `AZURE_OPENAI_API_VERSION` — API version (default: `v1`)
- `AZURE_OPENAI_DEPLOYMENT_NAME_MAP` — Comma-separated model-to-deployment mapping (e.g. `gpt-4o=my-gpt4o-deployment,gpt-4o-mini=my-mini-deployment`)

---

### Category 4: Amazon Bedrock (AWS Credentials)

Amazon Bedrock uses AWS IAM credentials instead of API keys. No `apiKey` is needed — authentication is handled through standard AWS credential mechanisms.

```bash
# Method 1: AWS Profile
export AWS_PROFILE=my-profile
export AWS_REGION=us-east-1
pai model config --add --name bedrock --provider amazon-bedrock \
  --set defaultModel=anthropic.claude-sonnet-4-20250514-v1:0

# Method 2: IAM Access Keys
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=us-east-1
pai model config --add --name bedrock --provider amazon-bedrock \
  --set defaultModel=anthropic.claude-sonnet-4-20250514-v1:0

# Method 3: Bedrock API Keys (Bearer Token)
export AWS_BEARER_TOKEN_BEDROCK=...
pai model config --add --name bedrock --provider amazon-bedrock \
  --set defaultModel=anthropic.claude-sonnet-4-20250514-v1:0

# Chat
pai chat "Hello" --provider bedrock --model anthropic.claude-sonnet-4-20250514-v1:0
```

Supported AWS credential sources (checked in order):
1. `AWS_PROFILE` — Named profile from `~/.aws/credentials`
2. `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY` — Standard IAM keys
3. `AWS_BEARER_TOKEN_BEDROCK` — Bedrock API keys (bearer token)
4. `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` — ECS task roles
5. `AWS_CONTAINER_CREDENTIALS_FULL_URI` — ECS task roles (full URI)
6. `AWS_WEB_IDENTITY_TOKEN_FILE` — IRSA (IAM Roles for Service Accounts)

---

### Category 5: Google Vertex AI (Application Default Credentials)

Google Vertex AI uses Google Cloud Application Default Credentials (ADC) instead of API keys.

```bash
# Step 1: Set up ADC (one-time)
gcloud auth application-default login

# Step 2: Set required environment variables
export GOOGLE_CLOUD_PROJECT=my-project-id
export GOOGLE_CLOUD_LOCATION=us-central1

# Step 3: Configure provider
pai model config --add --name vertex --provider google-vertex \
  --set defaultModel=gemini-2.5-flash

# Chat
pai chat "Hello" --provider vertex --model gemini-2.5-flash
```

Alternatively, use a service account key file:

```bash
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
export GOOGLE_CLOUD_PROJECT=my-project-id
export GOOGLE_CLOUD_LOCATION=us-central1
pai model config --add --name vertex --provider google-vertex
```

Required environment variables:
- `GOOGLE_CLOUD_PROJECT` or `GCLOUD_PROJECT` — GCP project ID
- `GOOGLE_CLOUD_LOCATION` — GCP region (e.g. `us-central1`)
- ADC credentials via `gcloud auth application-default login` or `GOOGLE_APPLICATION_CREDENTIALS`

---

### Provider Authentication Summary

| Provider | Auth Type | Config Method |
|---|---|---|
| `openai` | API Key | `--set apiKey=sk-...` |
| `anthropic` | API Key or OAuth | `--set apiKey=...` or `pai model login` |
| `google` | API Key | `--set apiKey=AIza...` |
| `github-copilot` | OAuth | `pai model login --name github-copilot` |
| `google-gemini-cli` | OAuth | `pai model login --name google-gemini-cli` |
| `google-antigravity` | OAuth | `pai model login --name google-antigravity` |
| `openai-codex` | OAuth | `pai model login --name openai-codex` |
| `azure-openai-responses` | API Key + Endpoint | `--set apiKey=... --set baseUrl=...` |
| `amazon-bedrock` | AWS Credentials | AWS env vars (`AWS_PROFILE`, etc.) |
| `google-vertex` | ADC | `gcloud auth` + env vars |
| `groq` | API Key | `--set apiKey=gsk_...` |
| `cerebras` | API Key | `--set apiKey=csk-...` |
| `xai` | API Key | `--set apiKey=xai-...` |
| `mistral` | API Key | `--set apiKey=...` |
| `openrouter` | API Key | `--set apiKey=sk-or-...` |
| `huggingface` | API Key | `--set apiKey=hf_...` |
| `minimax` | API Key | `--set apiKey=...` |
| `minimax-cn` | API Key | `--set apiKey=...` |
| `kimi-coding` | API Key | `--set apiKey=...` |
| `opencode` | API Key | `--set apiKey=...` |
| `opencode-go` | API Key | `--set apiKey=...` |
| `vercel-ai-gateway` | API Key | `--set apiKey=...` |
| `zai` | API Key | `--set apiKey=...` |

## Commands

### `pai chat`

Chat with an LLM. Supports tool calling (`bash_exec` is built-in).

**Options:**
- `--config <path>` - Config file path (default: `~/.config/pai/default.json`)
- `--session <path>` - Session file for conversation history (JSONL format)
- `--system <text>` - System instruction
- `--system-file <path>` - System instruction from file
- `--input-file <path>` - User input from file
- `--image <path...>` - Image file(s) to include
- `--provider <name>` - Provider name
- `--model <name>` - Model name
- `--temperature <number>` - Temperature (0-2)
- `--max-tokens <number>` - Max tokens
- `--stream` - Enable streaming output
- `--no-append` - Do not append to session file
- `--json` - Output progress as NDJSON
- `--quiet` - Suppress progress output
- `--log <path>` - Log file path (Markdown)
- `--dry-run` - Show resolved config (provider, model, etc.) without calling the LLM

### `pai embed`

Generate text embeddings using a provider's embedding API. Supports single and batch input, with plain text or JSON output.

**Options:**
- `--provider <name>` - Provider name
- `--model <name>` - Embedding model name
- `--config <path>` - Config file path
- `--json` - Output as JSON (includes model and usage metadata)
- `--quiet` - Suppress progress output
- `--batch` - Batch mode (input is a JSON string array)
- `--input-file <path>` - Read input from file

**Input sources** (mutually exclusive — use only one):
1. Positional argument: `pai embed "hello world"`
2. stdin: `echo "hello" | pai embed`
3. File: `pai embed --input-file document.txt`

**Examples:**

```bash
# Single text embedding
pai embed "hello world" --provider openai --model text-embedding-3-small

# From stdin
echo "hello world" | pai embed --provider openai --model text-embedding-3-small

# From file
pai embed --input-file document.txt --provider openai --model text-embedding-3-small

# Batch mode (JSON string array)
pai embed --batch '["hello","world","foo"]' --provider openai --model text-embedding-3-small

# JSON output (includes model and usage info)
pai embed "hello" --json --provider openai --model text-embedding-3-small

# Using default embed provider/model (after configuring)
pai model default --embed-provider openai --embed-model text-embedding-3-small
pai embed "hello world"
```
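
Batch mode expects the input to already be a JSON string array. When the texts live in a newline-delimited file, one way to build that array is with a small `python3` one-liner (a sketch; the file name `texts.txt` is just an example, not something `pai` requires):

```bash
# Build the JSON string array that --batch expects from a newline-delimited file
printf 'hello\nworld\nfoo\n' > texts.txt
python3 -c 'import json, sys; print(json.dumps(sys.stdin.read().splitlines()))' < texts.txt
# → ["hello", "world", "foo"]
```

The printed array can then be passed as the `--batch` argument.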

**Output formats:**

Plain text (default) — one JSON array per line:
```
[0.0023064255,-0.009327292,0.015797347,...]
```

JSON mode (`--json`) — single:
```json
{"embedding":[0.0023,-0.0094,...],"model":"text-embedding-3-small","usage":{"prompt_tokens":2,"total_tokens":2}}
```

JSON mode (`--json`) — batch:
```json
{"embeddings":[[0.0023,...],[0.0112,...]],"model":"text-embedding-3-small","usage":{"prompt_tokens":4,"total_tokens":4}}
```

**Text truncation:** If input text exceeds the model's token limit, it is automatically truncated with a warning on stderr.

### `pai model list`

List providers and models.

**Options:**
- `--all` - Show all supported providers
- `--json` - Output as JSON

### `pai model config`

Configure providers (add/update/delete/show).

**Options:**
- `--add` - Add or replace a provider (requires `--provider`)
- `--update` - Update fields on an existing provider (no `--provider` needed)
- `--delete` - Delete provider
- `--show` - Show provider configuration (sensitive fields masked)
- `--name <name>` - Provider name
- `--provider <type>` - Provider type (required for `--add`)
- `--set <key=value...>` - Set configuration values (known keys: apiKey, defaultModel, models, temperature, maxTokens, api, baseUrl, reasoning, input, contextWindow, providerOptions)
- `--default` - Set as default provider (used with `--add` or `--update`)
- `--json` - Output as JSON (for `--show`)

**Examples:**

```bash
# Add a new provider
pai model config --add --name openai --provider openai --set apiKey=sk-...

# Update an existing provider's default model
pai model config --update --name openai --set defaultModel=gpt-4o

# Update multiple fields at once
pai model config --update --name openai --set defaultModel=gpt-4o --set temperature=0.5

# Update and set as default provider
pai model config --update --name openai --set defaultModel=gpt-4o --default
```

### `pai model default`

View or set the default provider and embedding model. When no options are given, shows the current defaults. When `--name` is provided, sets that provider as the default chat provider.

```bash
# View current defaults (provider + embed)
pai model default

# Set default provider
pai model default --name openai

# Set default embedding provider and model
pai model default --embed-provider openai --embed-model text-embedding-3-small

# Set both at once
pai model default --name openai --embed-provider openai --embed-model text-embedding-3-small

# Output as JSON
pai model default --json
```

**Options:**
- `--name <name>` - Provider name to set as default (must already be configured)
- `--embed-provider <name>` - Set default embedding provider
- `--embed-model <model>` - Set default embedding model
- `--json` - Output as JSON
- `--config <path>` - Config file path

### `pai model login`

Interactive OAuth login for providers that require browser-based authentication.

**Options:**
- `--name <name>` - Provider name (required)
- `--config <path>` - Config file path

**Supported providers:** `github-copilot`, `anthropic`, `google-gemini-cli`, `google-antigravity`, `openai-codex`

## Configuration

### Config File

Default location: `~/.config/pai/default.json`

```json
{
  "schema_version": "1.0.0",
  "defaultProvider": "openai",
  "defaultEmbedProvider": "openai",
  "defaultEmbedModel": "text-embedding-3-small",
  "providers": [
    {
      "name": "openai",
      "apiKey": "sk-...",
      "models": ["gpt-4o", "gpt-4o-mini"],
      "defaultModel": "gpt-4o-mini",
      "temperature": 0.7,
      "maxTokens": 2000
    }
  ]
}
```
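
Since the config is plain JSON, a hand-edited file can be sanity-checked with standard tools before use. A minimal sketch (writing a sample config to a temporary path rather than touching `~/.config/pai/default.json`):

```bash
# Write a sample config and verify it parses as valid JSON
cat > /tmp/pai-sample.json <<'EOF'
{
  "schema_version": "1.0.0",
  "defaultProvider": "openai",
  "providers": [{"name": "openai", "apiKey": "sk-...", "defaultModel": "gpt-4o-mini"}]
}
EOF
python3 -m json.tool /tmp/pai-sample.json > /dev/null && echo "config OK"
# → config OK
```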

### Provider Config Fields

| Field | Type | Description |
|---|---|---|
| `name` | string | Provider name (used with `--provider`) |
| `apiKey` | string | API key (for API key providers) |
| `oauth` | object | OAuth credentials (for OAuth providers, managed by `pai model login`) |
| `defaultModel` | string | Default model when `--model` is not specified |
| `models` | string[] | List of available models |
| `temperature` | number | Default temperature (0-2) |
| `maxTokens` | number | Default max tokens |
| `api` | string | pi-ai API type (e.g. `azure-openai-responses`) |
| `baseUrl` | string | Base URL for custom/self-hosted endpoints |
| `reasoning` | boolean | Whether the model supports reasoning/thinking |
| `input` | string[] | Input modalities: `["text"]` or `["text", "image"]` |
| `contextWindow` | number | Context window size in tokens |
| `providerOptions` | object | Provider-specific options (e.g. Azure deployment config) |

### Authentication Priority

Credentials are resolved in this order (highest priority first):

1. CLI parameters
2. Environment variables (`PAI_<PROVIDER>_API_KEY` or provider-specific env vars)
3. Config file (`apiKey` field or `oauth` credentials)

Provider-specific environment variables:

| Provider | Environment Variable |
|---|---|
| `openai` | `OPENAI_API_KEY` |
| `anthropic` | `ANTHROPIC_OAUTH_TOKEN` or `ANTHROPIC_API_KEY` |
| `google` | `GEMINI_API_KEY` |
| `github-copilot` | `COPILOT_GITHUB_TOKEN` or `GH_TOKEN` or `GITHUB_TOKEN` |
| `azure-openai-responses` | `AZURE_OPENAI_API_KEY` |
| `groq` | `GROQ_API_KEY` |
| `cerebras` | `CEREBRAS_API_KEY` |
| `xai` | `XAI_API_KEY` |
| `openrouter` | `OPENROUTER_API_KEY` |
| `mistral` | `MISTRAL_API_KEY` |
| `huggingface` | `HF_TOKEN` |
| `vercel-ai-gateway` | `AI_GATEWAY_API_KEY` |
| `zai` | `ZAI_API_KEY` |
| `minimax` | `MINIMAX_API_KEY` |
| `minimax-cn` | `MINIMAX_CN_API_KEY` |
| `kimi-coding` | `KIMI_API_KEY` |
| `opencode` / `opencode-go` | `OPENCODE_API_KEY` |

### Session Files

Session files use JSONL format (one JSON object per line):

```jsonl
{"role":"system","content":"You are helpful","timestamp":"2024-01-01T00:00:00Z"}
{"role":"user","content":"Hello","timestamp":"2024-01-01T00:00:01Z"}
{"role":"assistant","content":"Hi there!","timestamp":"2024-01-01T00:00:02Z"}
```
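
Because sessions are plain JSONL, ordinary line-oriented Unix tools work on them. For example, counting the user turns in a session with `grep` (a sketch against a hand-written sample file; real session files are produced by `pai chat --session`):

```bash
# Create a small sample session and count the user turns in it
printf '%s\n' \
  '{"role":"user","content":"Hello","timestamp":"2024-01-01T00:00:01Z"}' \
  '{"role":"assistant","content":"Hi there!","timestamp":"2024-01-01T00:00:02Z"}' \
  '{"role":"user","content":"Bye","timestamp":"2024-01-01T00:00:03Z"}' > sample-session.jsonl
grep -c '"role":"user"' sample-session.jsonl
# → 2
```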

## Built-in Tools

### bash_exec

The LLM can execute shell commands using the `bash_exec` tool.

```bash
pai chat "What files are in the current directory?" --provider openai --model gpt-4o
```

The LLM will use `bash_exec` to run `ls` and return the results.

**Security Note:** `bash_exec` has no security restrictions. The LLM can execute any command. Use with caution.

## Output Modes

### Human-Readable (Default)

```bash
pai chat "Hello"
# Output: Hi there! How can I help you today?
```

### JSON Mode

```bash
pai chat "Hello" --json
# stderr: {"type":"start","data":{},"timestamp":1234567890}
# stdout: Hi there! How can I help you today?
# stderr: {"type":"complete","data":{},"timestamp":1234567891}
```

### Streaming Mode

```bash
pai chat "Write a story" --stream
# Output appears incrementally as the model generates it
```
|
|
774
|
+
|
|
775
|
+
## Examples

### Simple Q&A

```bash
pai chat "What is the capital of France?" --provider openai --model gpt-4o-mini
```

### Multi-turn Conversation

```bash
pai chat "My name is Alice" --session chat.jsonl --provider openai --model gpt-4o
pai chat "What's my name?" --session chat.jsonl --provider openai --model gpt-4o
```

### Code Generation with Execution

```bash
pai chat "Write a Python script to calculate fibonacci numbers and run it" \
  --provider openai --model gpt-4o
```

### Image Analysis

```bash
pai chat "Describe this image in detail" \
  --image photo.jpg \
  --provider openai --model gpt-4o
```

### Piping Input

```bash
cat document.txt | pai chat "Summarize this document" \
  --provider openai --model gpt-4o-mini
```

### System Instructions

```bash
pai chat "What is 2+2?" \
  --system "You are a math tutor. Explain your reasoning step by step." \
  --provider openai --model gpt-4o
```
## Exit Codes

- `0` - Success
- `1` - Parameter or usage error
- `2` - Local runtime error
- `3` - External API/provider error
- `4` - IO/file error
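In scripts, these codes let you treat a transient provider failure differently from a local mistake. A sketch of the pattern, where `run_pai` is a hypothetical stand-in for a real invocation:

```shell
# Branch on the documented exit codes. `run_pai` is a placeholder that
# simulates an external API/provider error (exit code 3); substitute a real
# `pai chat ...` call in practice.
run_pai() { return 3; }

rc=0
run_pai || rc=$?
case $rc in
  0) status="success" ;;
  1) status="usage error" ;;
  3) status="provider error, worth a retry" ;;
  *) status="local, runtime, or IO failure" ;;
esac
echo "$status"
# prints: provider error, worth a retry
```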
## Troubleshooting

### "No credentials found"

Make sure you've configured authentication for your provider:

```bash
# For API Key providers
pai model config --add --name openai --provider openai --set apiKey=sk-...

# For OAuth providers
pai model login --name github-copilot

# Or use environment variables
export OPENAI_API_KEY=sk-...
```

### "Provider not found"

List available providers and configure one:

```bash
pai model list --all
pai model config --add --name <provider> --provider <provider> --set apiKey=...

# Set as default to avoid specifying --provider every time
pai model default --name <provider>
```

### "Model not specified"

Specify the model explicitly or set a default:

```bash
pai chat "Hello" --provider openai --model gpt-4o-mini

# Or set a default model on an existing provider
pai model config --update --name openai --set defaultModel=gpt-4o-mini
```

Note: If no `defaultModel` is configured, PAI will automatically use the first model known for that provider.

### OAuth token expired

Re-login to refresh credentials:

```bash
pai model login --name github-copilot
```

PAI will also attempt to automatically refresh expired OAuth tokens using the stored refresh token.

## Advanced Usage

### Custom Config Location

```bash
pai chat "Hello" --config /path/to/config.json --provider openai --model gpt-4o
```

Or use an environment variable:

```bash
export PAI_CONFIG=/path/to/config.json
```

### Logging

```bash
pai chat "Hello" --log conversation.md --provider openai --model gpt-4o
```

### Quiet Mode

```bash
pai chat "Hello" --quiet --provider openai --model gpt-4o
```

### JSON Output for Scripting

```bash
pai chat "Hello" --json --provider openai --model gpt-4o 2>progress.jsonl 1>response.txt
```
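With stderr and stdout split like this, the progress stream can be post-processed on its own. A sketch, assuming events keep the `{"type":...}` shape from the JSON Mode example (real events may carry more fields):

```shell
# Simulate a captured --json progress stream (event shape taken from the
# JSON Mode example above), then count how many requests ran to completion.
printf '%s\n' \
  '{"type":"start","data":{},"timestamp":1234567890}' \
  '{"type":"complete","data":{},"timestamp":1234567891}' \
  > progress.jsonl
grep -c '"type":"complete"' progress.jsonl
# prints: 1
```

The response itself stays clean in `response.txt`, so it can be piped straight into the next stage of a pipeline.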
## Supported Providers

Use `pai model list --all` to see the complete list. PAI supports all providers from `@mariozechner/pi-ai`:

- OpenAI, Anthropic, Google (Gemini), GitHub Copilot, Azure OpenAI
- Amazon Bedrock, Google Vertex AI, Google Gemini CLI, Google Antigravity
- OpenAI Codex (ChatGPT Plus/Pro)
- Groq, Cerebras, xAI, Mistral, OpenRouter, Vercel AI Gateway
- MiniMax, HuggingFace, OpenCode, Kimi Coding, ZAI
package/package.json
CHANGED

```diff
@@ -1,7 +1,7 @@
 {
   "name": "@theclawlab/pai",
   "type": "module",
-  "version": "1.0.0",
+  "version": "1.0.2",
   "description": "pai is a linux command to interact with LLMs (with only one bash_exec tool and basic session support).",
   "bin": {
     "pai": "dist/index.js"
@@ -21,7 +21,8 @@
   },
   "files": [
     "dist",
-    "README.md"
+    "README.md",
+    "USAGE.md"
   ],
   "keywords": [],
   "author": "",
```