primellm 0.1.0 → 1.0.0

package/README.md CHANGED
@@ -1,198 +1,97 @@
- # PrimeLLM JavaScript SDK
+ # PrimeLLM CLI
 
- Official JavaScript/TypeScript SDK for the PrimeLLM unified AI API.
+ > Configure Claude Code and Codex to use PrimeLLM as the backend
 
- PrimeLLM lets you access multiple AI models (GPT-5.1, Claude, Gemini) through a single, simple API. This SDK makes it easy to call PrimeLLM from JavaScript or TypeScript.
+ A production-grade CLI installer that configures AI coding tools to use PrimeLLM's unified API.
 
- ## Installation
-
- ### Local Development (Not on npm yet)
+ ## Quick Start
 
 ```bash
- cd js-sdk
- npm install
- npm run build
+ npx primellm
 ```
 
- ### Future npm Usage (Coming Soon)
+ ## Features
 
- ```bash
- npm install primellm
- ```
-
- ## Quick Start
+ - 🎨 **Beautiful UI** - ASCII art banner and progress indicators
+ - 🔍 **System Detection** - Automatically detects OS, shell, and Node version
+ - 🛠 **Tool Selection** - Choose between Claude Code or Codex
+ - 📦 **Smart Installation** - Only installs tools if not already present
+ - 🔑 **Secure API Key Flow** - Masked input with validation
+ - 📁 **Flexible Scope** - System-level or project-level configuration
 
- ```javascript
- import PrimeLLMClient from "primellm";
-
- // Create a client with your API key
- const client = new PrimeLLMClient({
- apiKey: "primellm_live_XXX", // Get from https://primellm.in/dashboard
- });
-
- // Send a chat message
- const response = await client.chat({
- model: "gpt-5.1",
- messages: [
- { role: "system", content: "You are a helpful assistant." },
- { role: "user", content: "What is TypeScript?" },
- ],
- });
-
- // Access the response
- console.log(response.choices[0].message.content);
- console.log("Tokens used:", response.usage.total_tokens);
- console.log("Credits left:", response.credits.remaining);
- ```
+ ## Supported Tools
 
- ## Available Models
+ | Tool | Package | Command |
+ |------|---------|---------|
+ | Claude Code | `@anthropic-ai/claude-code` | `claude` |
+ | Codex | `@openai/codex` | `codex` |
 
- | Model | Description |
- |-------|-------------|
- | `gpt-5.1` | Latest GPT model (default) |
- | `claude-sonnet-4.5` | Claude Sonnet 4.5 |
- | `gemini-3.0` | Gemini 3.0 |
+ ## Usage
 
- ## API Reference
+ ### Interactive Mode
 
- ### Creating a Client
+ Simply run the CLI and follow the prompts:
 
- ```typescript
- const client = new PrimeLLMClient({
- apiKey: "primellm_live_XXX", // Required
- baseURL: "https://api.primellm.in", // Optional, this is the default
- timeoutMs: 60000, // Optional, 60 seconds default
- });
+ ```bash
+ npx primellm
 ```
 
- ### client.chat(request)
+ ### What it Does
 
- Send a chat completion request to `/v1/chat`. This is the recommended method.
-
- ```javascript
- const response = await client.chat({
- model: "gpt-5.1",
- messages: [
- { role: "user", content: "Hello!" }
- ],
- temperature: 0.7, // Optional
- max_tokens: 1000, // Optional
- });
- ```
+ 1. **Detects your system** - Shows OS, shell, and Node version
+ 2. **Asks which tool** - Claude Code or Codex
+ 3. **Checks installation** - Skips install if already present
+ 4. **Gets your API key** - Opens browser if you need to create one
+ 5. **Configures the tool** - Writes config with PrimeLLM backend
 
- **Response:**
- ```javascript
- {
- id: "chatcmpl_xxx",
- model: "gpt-5.1",
- choices: [{
- index: 0,
- message: { role: "assistant", content: "..." },
- finish_reason: "stop"
- }],
- usage: {
- prompt_tokens: 10,
- completion_tokens: 20,
- total_tokens: 30
- },
- credits: {
- remaining: 149.99,
- cost: 0.00006
- }
- }
- ```
+ ## Configuration
 
- ### client.completions(request)
+ ### System-level (recommended)
 
- Same as `chat()`, but uses the `/v1/chat/completions` endpoint.
- Use this for OpenAI API path compatibility.
+ Applies to all projects. Config stored in:
+ - Claude Code: `~/.claude/config.json`
+ - Codex: `~/.codex/config.json`
 
- ```javascript
- const response = await client.completions({
- model: "claude-sonnet-4.5",
- messages: [{ role: "user", content: "Hello!" }],
- });
- ```
+ ### Project-level
 
- ### client.generate(request)
+ Applies to current project only. Config stored in:
+ - Claude Code: `./.claude/config.json`
+ - Codex: `./.codex/config.json`
 
- Legacy endpoint using `/generate`. Returns a simpler response format.
+ ## API Key
 
- ```javascript
- const response = await client.generate({
- model: "gpt-5.1",
- messages: [{ role: "user", content: "Hello!" }],
- });
+ Your PrimeLLM API key:
+ - Must start with `primellm_`
+ - Can be created at: https://primellm.in/dashboard/api-keys
 
- // Response format is different:
- console.log(response.reply); // The AI's response
- console.log(response.tokens_used); // Total tokens
- console.log(response.credits_remaining); // Credits left
- ```
+ ## Requirements
 
- ## Examples
+ - Node.js >= 18.0.0
+ - npm or npx
 
- Run the included examples:
+ ## Development
 
 ```bash
- cd js-sdk
- npm install
- npm run build
-
- # Edit examples to add your API key, then:
- node ./examples/chat-basic.mjs
- node ./examples/completions-basic.mjs
- node ./examples/generate-basic.mjs
- ```
-
- ## Understanding the Response
-
- - **model**: Which AI model generated the response
- - **messages**: The conversation, including the AI's reply
- - **usage**: Token counts (how much "text" was processed)
- - `prompt_tokens`: Your input
- - `completion_tokens`: AI's output
- - `total_tokens`: Total
- - **credits**: Your PrimeLLM account balance
- - `remaining`: Credits left
- - `cost`: Cost of this request
-
- ## TypeScript Support
-
- This SDK is written in TypeScript and includes full type definitions.
-
- ```typescript
- import { PrimeLLMClient, ChatRequest, ChatResponse } from "primellm";
+ # Clone the repo
+ git clone https://github.com/rishuuu-codesss/primellm-backend.git
+ cd primellm-backend/primellm-cli
 
- const client = new PrimeLLMClient({ apiKey: "..." });
-
- const request: ChatRequest = {
- model: "gpt-5.1",
- messages: [{ role: "user", content: "Hello!" }],
- };
+ # Install dependencies
+ npm install
 
- const response: ChatResponse = await client.chat(request);
- ```
+ # Run in development
+ npm run dev
 
- ## Error Handling
-
- ```javascript
- try {
- const response = await client.chat({
- model: "gpt-5.1",
- messages: [{ role: "user", content: "Hello!" }],
- });
- } catch (error) {
- console.error("API Error:", error.message);
- // Example: "PrimeLLM API error: 401 Unauthorized - Invalid API key"
- }
+ # Build
+ npm run build
 ```
 
- ## Notes
-
- - **Streaming**: Not yet supported. Calling `streamChat()` will throw an error.
- - **Publishing**: This SDK will be published to npm as `primellm` in a future release.
-
 ## License
 
 MIT
+
+ ## Links
+
+ - [PrimeLLM Website](https://primellm.in)
+ - [API Documentation](https://primellm.in/docs)
+ - [Dashboard](https://primellm.in/dashboard)
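The new README states the key-prefix rule and the config locations but not the schema the installer writes. A minimal TypeScript sketch of the validate-and-write step that the "What it Does", "Configuration", and "API Key" sections describe; `isValidKey`, `configDir`, `writeConfig`, and the `{ apiKey }` payload are hypothetical illustrations, since the actual config contents do not appear anywhere in this diff:

```typescript
import { mkdirSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

type Tool = "claude" | "codex";    // the two supported tools
type Scope = "system" | "project"; // system-level vs project-level config

// README rule: keys must start with "primellm_" (hypothetical helper).
function isValidKey(key: string): boolean {
  return key.startsWith("primellm_");
}

// README paths: ~/.claude or ~/.codex for system scope,
// ./.claude or ./.codex for project scope.
function configDir(tool: Tool, scope: Scope): string {
  return join(scope === "system" ? homedir() : process.cwd(), `.${tool}`);
}

// Hypothetical write step; the real CLI's config schema is not shown in the diff.
function writeConfig(tool: Tool, scope: Scope, apiKey: string): void {
  if (!isValidKey(apiKey)) {
    throw new Error("API key must start with primellm_");
  }
  const dir = configDir(tool, scope);
  mkdirSync(dir, { recursive: true });
  writeFileSync(join(dir, "config.json"), JSON.stringify({ apiKey }, null, 2) + "\n");
}

writeConfig("claude", "system", "primellm_live_XXX");
```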
package/dist/index.d.ts CHANGED
@@ -1,124 +1,3 @@
- /**
- * PrimeLLM JavaScript SDK - Main Client
- *
- * This is the main SDK file. Developers import this to talk to PrimeLLM
- * from JavaScript or TypeScript.
- *
- * Example usage:
- *
- * import { PrimeLLMClient } from "primellm";
- *
- * const client = new PrimeLLMClient({ apiKey: "primellm_live_XXX" });
- *
- * const response = await client.chat({
- * model: "gpt-5.1",
- * messages: [{ role: "user", content: "Hello!" }],
- * });
- *
- * console.log(response.choices[0].message.content);
- */
- import { ChatRequest, ChatResponse, GenerateRequest, GenerateResponse, PrimeLLMClientOptions } from "./types.js";
- export * from "./types.js";
- /**
- * PrimeLLM API Client
- *
- * This class handles all communication with the PrimeLLM API.
- * It provides methods for chat, completions, and the legacy generate endpoint.
- */
- export declare class PrimeLLMClient {
- private apiKey;
- private baseURL;
- private timeoutMs;
- /**
- * Create a new PrimeLLM client.
- *
- * @param options - Configuration options
- * @param options.apiKey - Your PrimeLLM API key (required)
- * @param options.baseURL - API base URL (default: "https://api.primellm.in")
- * @param options.timeoutMs - Request timeout in ms (default: 60000)
- *
- * @example
- * const client = new PrimeLLMClient({
- * apiKey: "primellm_live_XXX",
- * });
- */
- constructor(options: PrimeLLMClientOptions);
- /**
- * Internal helper to make API requests.
- * Handles authentication, JSON parsing, and error handling.
- */
- private request;
- /**
- * Send a chat completion request using /v1/chat endpoint.
- *
- * This is the recommended method for most use cases.
- * Returns an OpenAI-compatible response format.
- *
- * @param request - The chat request with model and messages
- * @returns The chat response with choices, usage, and credits
- *
- * @example
- * const response = await client.chat({
- * model: "gpt-5.1",
- * messages: [
- * { role: "system", content: "You are a helpful assistant." },
- * { role: "user", content: "What is TypeScript?" },
- * ],
- * });
- * console.log(response.choices[0].message.content);
- */
- chat(request: ChatRequest): Promise<ChatResponse>;
- /**
- * Send a chat completion request using /v1/chat/completions endpoint.
- *
- * This is an alternative endpoint that also returns OpenAI-compatible format.
- * Use this if you need compatibility with OpenAI's exact endpoint path.
- *
- * @param request - The chat request with model and messages
- * @returns The chat response with choices, usage, and credits
- */
- completions(request: ChatRequest): Promise<ChatResponse>;
- /**
- * Send a request to the legacy /generate endpoint.
- *
- * This endpoint returns a different response format than chat().
- * Use chat() for new projects; this is for backwards compatibility.
- *
- * @param request - The generate request with model and messages
- * @returns The generate response with reply, tokens_used, cost
- *
- * @example
- * const response = await client.generate({
- * model: "gpt-5.1",
- * messages: [{ role: "user", content: "Hello!" }],
- * });
- * console.log(response.reply);
- */
- generate(request: GenerateRequest): Promise<GenerateResponse>;
- /**
- * Stream a chat completion response.
- *
- * ⚠️ NOT IMPLEMENTED YET - Backend streaming support coming soon.
- *
- * @throws Error always - streaming not supported in this version
- */
- streamChat(_request: ChatRequest): AsyncGenerator<ChatResponse, void, unknown>;
- /**
- * Stream a completions response.
- *
- * ⚠️ NOT IMPLEMENTED YET - Backend streaming support coming soon.
- *
- * @throws Error always - streaming not supported in this version
- */
- streamCompletions(_request: ChatRequest): AsyncGenerator<ChatResponse, void, unknown>;
- /**
- * Stream a generate response.
- *
- * ⚠️ NOT IMPLEMENTED YET - Backend streaming support coming soon.
- *
- * @throws Error always - streaming not supported in this version
- */
- streamGenerate(_request: GenerateRequest): AsyncGenerator<GenerateResponse, void, unknown>;
- }
- export default PrimeLLMClient;
+ #!/usr/bin/env node
+ export {};
 //# sourceMappingURL=index.d.ts.map
package/dist/index.d.ts.map CHANGED
@@ -1 +1 @@
- {"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":"AAAA;;;;;;;;;;;;;;;;;;GAkBG;AAEH,OAAO,EACH,WAAW,EACX,YAAY,EACZ,eAAe,EACf,gBAAgB,EAChB,qBAAqB,EACxB,MAAM,YAAY,CAAC;AAGpB,cAAc,YAAY,CAAC;AAE3B;;;;;GAKG;AACH,qBAAa,cAAc;IACvB,OAAO,CAAC,MAAM,CAAS;IACvB,OAAO,CAAC,OAAO,CAAS;IACxB,OAAO,CAAC,SAAS,CAAS;IAE1B;;;;;;;;;;;;OAYG;gBACS,OAAO,EAAE,qBAAqB;IAS1C;;;OAGG;YACW,OAAO;IAsCrB;;;;;;;;;;;;;;;;;;OAkBG;IACG,IAAI,CAAC,OAAO,EAAE,WAAW,GAAG,OAAO,CAAC,YAAY,CAAC;IAIvD;;;;;;;;OAQG;IACG,WAAW,CAAC,OAAO,EAAE,WAAW,GAAG,OAAO,CAAC,YAAY,CAAC;IAI9D;;;;;;;;;;;;;;;OAeG;IACG,QAAQ,CAAC,OAAO,EAAE,eAAe,GAAG,OAAO,CAAC,gBAAgB,CAAC;IAQnE;;;;;;OAMG;IACI,UAAU,CACb,QAAQ,EAAE,WAAW,GACtB,cAAc,CAAC,YAAY,EAAE,IAAI,EAAE,OAAO,CAAC;IAQ9C;;;;;;OAMG;IACI,iBAAiB,CACpB,QAAQ,EAAE,WAAW,GACtB,cAAc,CAAC,YAAY,EAAE,IAAI,EAAE,OAAO,CAAC;IAO9C;;;;;;OAMG;IACI,cAAc,CACjB,QAAQ,EAAE,eAAe,GAC1B,cAAc,CAAC,gBAAgB,EAAE,IAAI,EAAE,OAAO,CAAC;CAMrD;AAGD,eAAe,cAAc,CAAC"}
+ {"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":""}