viho 0.1.0 → 0.2.0

package/README.md CHANGED
@@ -13,11 +13,14 @@

  ## Features

- - Multiple AI model management
+ - Multiple AI model management (OpenAI, Google Gemini)
+ - Support for OpenAI-compatible APIs
+ - Support for Google Gemini AI (API and Vertex AI)
  - Interactive Q&A with streaming responses
  - Continuous chat sessions for multi-turn conversations
- - Support for thinking mode (enabled/disabled/auto)
- - Configurable API endpoints (OpenAI, Anthropic, custom providers)
+ - Support for thinking mode (for compatible models)
+ - Expert mode with domain-specific knowledge
+ - Configurable API endpoints
  - Default model configuration
  - Simple and intuitive CLI interface
  - Persistent configuration storage
@@ -62,19 +65,40 @@ viho ask

  Add a new AI model configuration interactively.

- You'll be prompted to enter:
+ First, you'll select the model type:
+
+ - **openai** - For OpenAI and OpenAI-compatible APIs
+ - **gemini api** - For Google Gemini AI Studio
+ - **gemini vertex** - For Google Gemini Vertex AI
+
+ Then you'll be prompted to enter the required information based on the model type:
+
+ **For OpenAI:**

  - Model name (a custom identifier)
  - API key
  - Base URL (e.g., https://api.openai.com/v1)
- - Model ID (e.g., gpt-4, claude-3-5-sonnet-20241022)
- - Thinking mode (enabled/disabled/auto)
+ - Model ID (e.g., gpt-4, gpt-4o)
+ - Thinking mode (enabled/disabled)
+
+ **For Gemini API:**
+
+ - Model name (a custom identifier)
+ - API key (from Google AI Studio)
+ - Model ID (e.g., gemini-pro, gemini-1.5-flash, gemini-1.5-pro)
+
+ **For Gemini Vertex:**
+
+ - Model name (a custom identifier)
+ - Project ID (your GCP project)
+ - Location (e.g., us-east1, us-central1)
+ - Model ID (e.g., gemini-1.5-flash-002, gemini-1.5-pro-002)

  ```bash
  viho model add
  ```

- After adding a model, it will be available for use with `viho ask` and `viho chat` commands.
+ After adding a model, it will be available for use with the `viho ask`, `viho chat`, and `viho expert` commands.

  #### `viho model list`

@@ -87,24 +111,30 @@ viho model list
  This command displays:

  - Model name with a `(default)` tag for the default model
- - Model ID
- - Base URL
- - Thinking mode setting
+ - Model type (openai, gemini api, or gemini vertex)
+ - Type-specific configuration details

  **Example output:**

  ```
  Configured models:

- • deepseek (default)
-   Model ID: ep-20250822181529-2gg27
-   Base URL: https://ark.cn-beijing.volces.com/api/v3
-   Thinking: auto
-
- • kimi
-   Model ID: kimi-k2-thinking
-   Base URL: https://api.moonshot.cn/v1
+ • gpt4 (default)
+   Type: openai
+   Model ID: gpt-4o
+   Base URL: https://api.openai.com/v1
    Thinking: enabled
+
+ • gemini
+   Type: gemini api
+   Model ID: gemini-1.5-flash
+   API Key: ***
+
+ • gemini-pro
+   Type: gemini vertex
+   Model ID: gemini-1.5-pro-002
+   Project ID: my-project-123
+   Location: us-east1
  ```

  #### `viho model remove`
@@ -177,6 +207,49 @@ The chat session runs in a loop, allowing you to ask multiple questions continuo
  - `viho ask` - Single question, exits after receiving the answer
  - `viho chat` - Continuous loop, keeps asking for new questions until manually stopped (Ctrl+C)

+ ### Expert Mode
+
+ Expert mode allows you to chat with an AI model that has access to domain-specific documentation, making it more knowledgeable about particular libraries or frameworks.
+
+ #### `viho expert list`
+
+ List all available expert resources:
+
+ ```bash
+ viho expert list
+ ```
+
+ This displays available experts like:
+
+ - `antd` - Ant Design documentation
+ - `daisyui` - DaisyUI documentation
+
+ #### `viho expert <name> [modelName]`
+
+ Start an expert chat session with domain-specific knowledge:
+
+ ```bash
+ viho expert antd
+ ```
+
+ Or specify a model explicitly:
+
+ ```bash
+ viho expert daisyui mymodel
+ ```
+
+ Expert mode works similarly to `viho chat` but includes the relevant documentation as context, making the AI more accurate when answering questions about that specific library or framework.
+
+ **Example:**
+
+ ```bash
+ # Get help with Ant Design
+ viho expert antd
+
+ # Ask: "How do I create a responsive table with sorting?"
+ # The AI will use Ant Design documentation to provide accurate answers
+ ```
+
  ## Configuration

  Configuration is stored in `~/viho.json`. You can manage all settings through the CLI commands.
@@ -200,34 +273,58 @@ Configuration is stored in `~/viho.json`. You can manage all settings through th

  ## Supported Providers

- viho works with any OpenAI-compatible API, including:
+ viho supports multiple AI providers:
+
+ ### OpenAI-Compatible APIs

- - OpenAI (GPT-4, GPT-3.5, etc.)
- - Anthropic Claude (via compatible endpoints)
+ - OpenAI (GPT-4, GPT-4o, GPT-3.5, etc.)
+ - Any OpenAI-compatible API endpoint
  - Custom LLM providers with OpenAI-compatible APIs

+ ### Google Gemini
+
+ - **Gemini API** (via Google AI Studio)
+   - Ideal for personal development and prototyping
+   - Get an API key from [Google AI Studio](https://makersuite.google.com/app/apikey)
+
+ - **Gemini Vertex AI** (via Google Cloud)
+   - Enterprise-grade with advanced features
+   - Requires a Google Cloud project with Vertex AI enabled
+   - Supports context caching for cost optimization
+
  ## Examples

  ### Adding an OpenAI Model

  ```bash
  viho model add
+ # Select model type: openai
  # Enter model name: gpt4
  # Enter API key: sk-...
  # Enter base URL: https://api.openai.com/v1
- # Enter model ID: gpt-4
+ # Enter model ID: gpt-4o
  # Thinking mode: disabled
  ```

- ### Adding a Claude Model
+ ### Adding a Gemini API Model
+
+ ```bash
+ viho model add
+ # Select model type: gemini api
+ # Enter model name: gemini
+ # Enter API key: your-google-ai-api-key
+ # Enter model ID: gemini-1.5-flash
+ ```
+
+ ### Adding a Gemini Vertex AI Model

  ```bash
  viho model add
- # Enter model name: claude
- # Enter API key: your-anthropic-key
- # Enter base URL: https://api.anthropic.com
- # Enter model ID: claude-3-5-sonnet-20241022
- # Thinking mode: auto
+ # Select model type: gemini vertex
+ # Enter model name: gemini-pro
+ # Enter projectId: my-gcp-project
+ # Enter location: us-east1
+ # Enter model ID: gemini-1.5-pro-002
  ```

  ### Setting Up for First Use
@@ -247,7 +344,8 @@ viho ask

  - [qiao-cli](https://www.npmjs.com/package/qiao-cli) - CLI utilities
  - [qiao-config](https://www.npmjs.com/package/qiao-config) - Configuration management
- - [qiao-llm](https://www.npmjs.com/package/qiao-llm) - LLM integration
+ - [qiao-file](https://www.npmjs.com/package/qiao-file) - File utilities
+ - [viho-llm](https://www.npmjs.com/package/viho-llm) - Multi-provider LLM integration (OpenAI, Gemini)

  ## License

package/bin/viho-ask.js CHANGED
@@ -1,12 +1,9 @@
  // qiao
  const cli = require('qiao-cli');

- // llm
- const LLM = require('qiao-llm');
-
  // util
  const { ask } = require('../src/llm.js');
- const { getDB, preLLMAsk } = require('../src/util.js');
+ const { getDB, preLLMAsk, initLLM } = require('../src/util.js');
  const db = getDB();

  // cmd
@@ -18,10 +15,7 @@ cli.cmd
    const model = await preLLMAsk('ask', db, modelName);

    // llm
-   const llm = LLM({
-     apiKey: model.apiKey,
-     baseURL: model.baseURL,
-   });
+   const llm = initLLM(model);

    // ask
    await ask(llm, model);
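The repeated `LLM({ apiKey, baseURL })` construction in the bin scripts is replaced by a single `initLLM(model)` call whose body lives in `src/util.js` and is not shown in this diff. A plausible shape for that dispatch is sketched below; the per-type factories are injected stubs standing in for `viho-llm` (whose real API this diff does not reveal), and the fallback to `openai` for records without a `modelType` mirrors the `model.modelType || 'openai'` default seen in `modelList`:

```javascript
// Hypothetical sketch of the initLLM() dispatch added in src/util.js.
// The real implementation delegates to viho-llm; here the per-type
// factories are injected so the routing logic stands alone.
function makeInitLLM(factories) {
  return function initLLM(model) {
    const type = model.modelType || 'openai';
    if (type === 'openai') return factories.openai({ apiKey: model.apiKey, baseURL: model.baseURL });
    if (type === 'gemini api') return factories.geminiAPI({ apiKey: model.apiKey });
    if (type === 'gemini vertex') return factories.geminiVertex({ projectId: model.projectId, location: model.location });
    throw new Error(`Unknown model type: ${type}`);
  };
}

// Stub factories standing in for viho-llm clients:
const initLLM = makeInitLLM({
  openai: (opts) => ({ provider: 'openai', ...opts }),
  geminiAPI: (opts) => ({ provider: 'gemini api', ...opts }),
  geminiVertex: (opts) => ({ provider: 'gemini vertex', ...opts }),
});

console.log(initLLM({ modelType: 'gemini api', apiKey: 'k' }).provider); // prints "gemini api"
```

Centralizing the construction this way is what lets viho-ask.js, viho-chat.js, and viho-expert.js drop their `qiao-llm` imports in this release.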
package/bin/viho-chat.js CHANGED
@@ -1,12 +1,9 @@
  // qiao
  const cli = require('qiao-cli');

- // llm
- const LLM = require('qiao-llm');
-
  // util
  const { ask } = require('../src/llm.js');
- const { getDB, preLLMAsk } = require('../src/util.js');
+ const { getDB, preLLMAsk, initLLM } = require('../src/util.js');
  const db = getDB();

  // cmd
@@ -18,10 +15,7 @@ cli.cmd
    const model = await preLLMAsk('chat', db, modelName);

    // init
-   const llm = LLM({
-     apiKey: model.apiKey,
-     baseURL: model.baseURL,
-   });
+   const llm = initLLM(model);

    // chat
    let keepChatting = true;
package/bin/viho-expert.js CHANGED
@@ -1,13 +1,10 @@
  // qiao
  const cli = require('qiao-cli');

- // llm
- const LLM = require('qiao-llm');
-
  // util
  const { expertAsk } = require('../src/llm.js');
  const { experts } = require('../src/experts/experts.js');
- const { getDB, printLogo, preLLMAsk } = require('../src/util.js');
+ const { getDB, printLogo, preLLMAsk, initLLM } = require('../src/util.js');
  const db = getDB();

  // actions
@@ -65,10 +62,7 @@ async function expertAskByName(expertName) {
    if (!model) return;

    // init
-   const llm = LLM({
-     apiKey: model.apiKey,
-     baseURL: model.baseURL,
-   });
+   const llm = initLLM(model);

    // chat
    let keepChatting = true;
package/bin/viho-model.js CHANGED
@@ -32,8 +32,19 @@ cli.cmd
   */
  async function modelAdd() {
    try {
-     // q a
-     const questions = [
+     // model type
+     const modelTypeQuestion = [
+       {
+         type: 'list',
+         name: 'modelType',
+         message: 'openai vs gemini',
+         choices: ['openai', 'gemini api', 'gemini vertex'],
+       },
+     ];
+     const modelTypeAnswer = await cli.ask(modelTypeQuestion);
+
+     // questions
+     const openAIQuestion = [
        {
          type: 'input',
          name: 'modelName',
@@ -61,7 +72,55 @@ async function modelAdd() {
          choices: ['enabled', 'disabled'],
        },
      ];
-     const answers = await cli.ask(questions);
+     const geminiAPIQuestion = [
+       {
+         type: 'input',
+         name: 'modelName',
+         message: 'Enter model name:',
+       },
+       {
+         type: 'input',
+         name: 'apiKey',
+         message: 'Enter API key:',
+       },
+       {
+         type: 'input',
+         name: 'modelID',
+         message: 'Enter model ID (e.g., gemini-pro, gemini-1.5-flash):',
+       },
+     ];
+     const geminiVertexQuestion = [
+       {
+         type: 'input',
+         name: 'modelName',
+         message: 'Enter model name:',
+       },
+       {
+         type: 'input',
+         name: 'projectId',
+         message: 'Enter projectId:',
+       },
+       {
+         type: 'input',
+         name: 'location',
+         message: 'Enter location:',
+       },
+       {
+         type: 'input',
+         name: 'modelID',
+         message: 'Enter model ID (e.g., gemini-1.5-flash-002, gemini-1.5-pro-002):',
+       },
+     ];
+
+     // final question
+     let finalQuestion;
+     if (modelTypeAnswer.modelType === 'openai') finalQuestion = openAIQuestion;
+     if (modelTypeAnswer.modelType === 'gemini api') finalQuestion = geminiAPIQuestion;
+     if (modelTypeAnswer.modelType === 'gemini vertex') finalQuestion = geminiVertexQuestion;
+
+     // answers
+     const answers = await cli.ask(finalQuestion);
+     answers.modelType = modelTypeAnswer.modelType;
      console.log();

      // check
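The type-to-question-set dispatch added in `modelAdd()` above can be factored into a small pure helper that is easy to test in isolation. A sketch — the `pickQuestions` name and the lookup-table shape are mine, not part of the package, and the stand-in question lists below are placeholders:

```javascript
// Hypothetical helper mirroring the branching in modelAdd():
// map the chosen model type to its prompt list, failing fast on an
// unknown type instead of passing `undefined` on to cli.ask().
function pickQuestions(modelType, { openAIQuestion, geminiAPIQuestion, geminiVertexQuestion }) {
  const byType = {
    'openai': openAIQuestion,
    'gemini api': geminiAPIQuestion,
    'gemini vertex': geminiVertexQuestion,
  };
  const questions = byType[modelType];
  if (!questions) throw new Error(`Unknown model type: ${modelType}`);
  return questions;
}

// Minimal check with stand-in question lists:
const sets = {
  openAIQuestion: [{ name: 'baseURL' }],
  geminiAPIQuestion: [{ name: 'apiKey' }],
  geminiVertexQuestion: [{ name: 'projectId' }],
};
console.log(pickQuestions('gemini api', sets)[0].name); // prints "apiKey"
```

A table lookup also scales better than the chain of `if` statements as further providers are added.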
@@ -110,9 +169,21 @@ async function modelList() {
      const isDefault = model.modelName === defaultModel;
      const defaultTag = isDefault ? cli.colors.green(' (default)') : '';
      console.log(cli.colors.white(`  • ${model.modelName}${defaultTag}`));
-     console.log(cli.colors.gray(`    Model ID: ${model.modelID}`));
-     console.log(cli.colors.gray(`    Base URL: ${model.baseURL}`));
-     console.log(cli.colors.gray(`    Thinking: ${model.modelThinking}`));
+     console.log(cli.colors.gray(`    Type: ${model.modelType || 'openai'}`));
+
+     if (model.modelType === 'openai') {
+       console.log(cli.colors.gray(`    Model ID: ${model.modelID}`));
+       console.log(cli.colors.gray(`    Base URL: ${model.baseURL}`));
+       console.log(cli.colors.gray(`    Thinking: ${model.modelThinking}`));
+     } else if (model.modelType === 'gemini api') {
+       console.log(cli.colors.gray(`    Model ID: ${model.modelID}`));
+       console.log(cli.colors.gray(`    API Key: ${model.apiKey ? '***' : 'Not set'}`));
+     } else if (model.modelType === 'gemini vertex') {
+       console.log(cli.colors.gray(`    Model ID: ${model.modelID}`));
+       console.log(cli.colors.gray(`    Project ID: ${model.projectId}`));
+       console.log(cli.colors.gray(`    Location: ${model.location}`));
+     }
+
      console.log();
    });

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "viho",
-   "version": "0.1.0",
+   "version": "0.2.0",
    "description": "A lightweight CLI tool for managing and interacting with AI models",
    "keywords": [
      "ai",
@@ -42,7 +42,7 @@
      "qiao-cli": "^5.0.0",
      "qiao-config": "^5.0.1",
      "qiao-file": "^5.0.1",
-     "qiao-llm": "^0.2.5"
+     "viho-llm": "^0.2.0"
    },
-   "gitHead": "277a99826d4354b4fdbb96f48395a7673a6ab042"
+   "gitHead": "38f9238d5c59fe322424ee30e32037b44f6be7d8"
  }