english-optimizer-cli 1.1.1 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/API_SETUP.md ADDED
@@ -0,0 +1,186 @@
+ # Cloud API Setup Guide
+
+ ## Method 1: Use the Config File
+
+ Create or edit the config file in your home directory:
+
+ ```bash
+ # Create the config directory
+ mkdir -p ~/.english-optimizer
+
+ # Create or edit the config file
+ code ~/.english-optimizer/config.yaml
+ # Or use any other editor
+ ```
+
+ ### OpenAI Configuration
+
+ ```yaml
+ ai:
+   provider: 'api'
+   api:
+     provider: 'openai'
+     apiKey: 'sk-your-openai-api-key-here'
+     baseUrl: 'https://api.openai.com/v1'
+     model: 'gpt-3.5-turbo'
+     # Or use a newer model:
+     # model: 'gpt-4o'
+     # model: 'gpt-4o-mini'
+ ```
+
+ ### GLM (Zhipu AI) Configuration
+
+ ```yaml
+ ai:
+   provider: 'api'
+   api:
+     provider: 'glm'
+     apiKey: 'your-glm-api-key-here'
+     baseUrl: 'https://open.bigmodel.cn/api/paas/v4'
+     model: 'glm-4'
+     # Or use a cheaper model:
+     # model: 'glm-4-flash'
+ ```
+
+ ### DeepSeek Configuration
+
+ ```yaml
+ ai:
+   provider: 'api'
+   api:
+     provider: 'custom'
+     apiKey: 'your-deepseek-api-key-here'
+     baseUrl: 'https://api.deepseek.com/v1'
+     model: 'deepseek-chat'
+ ```
+
+ ### Claude (Anthropic) Configuration
+
+ Note: Claude uses a different API format, so code changes may be required.
+
+ ```yaml
+ ai:
+   provider: 'api'
+   api:
+     provider: 'custom'
+     apiKey: 'your-anthropic-api-key-here'
+     baseUrl: 'https://api.anthropic.com/v1'
+     model: 'claude-3-5-sonnet-20241022'
+ ```
+
+ ### Other OpenAI-Compatible APIs
+
+ If the service uses an OpenAI-compatible format:
+
+ ```yaml
+ ai:
+   provider: 'api'
+   api:
+     provider: 'custom'
+     apiKey: 'your-api-key-here'
+     baseUrl: 'https://your-api-endpoint.com/v1'
+     model: 'your-model-name'
+ ```
+
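Every provider above except Claude speaks the OpenAI-compatible `/chat/completions` protocol, which is why only `baseUrl`, `apiKey`, and `model` change between the examples. A minimal TypeScript sketch of the request those three settings drive, assuming Node 18+ `fetch`; `ApiSettings` and `optimizeText` are illustrative names, not the CLI's actual internals:

```typescript
// Minimal sketch of the call an OpenAI-compatible provider expects.
// baseUrl, apiKey, and model correspond to the YAML fields above.
interface ApiSettings {
  baseUrl: string; // e.g. 'https://api.deepseek.com/v1'
  apiKey: string;
  model: string;   // e.g. 'deepseek-chat'
}

async function optimizeText(settings: ApiSettings, text: string): Promise<string> {
  const res = await fetch(`${settings.baseUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${settings.apiKey}`,
    },
    body: JSON.stringify({
      model: settings.model,
      messages: [
        { role: 'system', content: 'Improve the English of the user message.' },
        { role: 'user', content: text },
      ],
    }),
  });
  if (!res.ok) throw new Error(`API request failed: HTTP ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible responses return the reply in choices[0].message.content.
  return data.choices[0].message.content;
}
```

Anthropic's native Messages API uses a different request and response shape, which is why the Claude section above warns that code changes may be needed.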
+ ## Method 2: Use Environment Variables
+
+ Create a `.env` file in the project directory:
+
+ ```bash
+ cp .env.example .env
+ code .env
+ ```
+
+ ### OpenAI Environment Variables
+
+ ```env
+ AI_PROVIDER=api
+ API_PROVIDER=openai
+ API_KEY=sk-your-openai-api-key-here
+ API_BASE_URL=https://api.openai.com/v1
+ API_MODEL=gpt-4o
+ ```
+
+ ### GLM Environment Variables
+
+ ```env
+ AI_PROVIDER=api
+ API_PROVIDER=glm
+ API_KEY=your-glm-api-key-here
+ API_BASE_URL=https://open.bigmodel.cn/api/paas/v4
+ API_MODEL=glm-4
+ ```
+
+ ### DeepSeek Environment Variables
+
+ ```env
+ AI_PROVIDER=api
+ API_PROVIDER=custom
+ API_KEY=your-deepseek-api-key-here
+ API_BASE_URL=https://api.deepseek.com/v1
+ API_MODEL=deepseek-chat
+ ```
+
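These variables mirror the fields in `config.yaml`. A sketch of how they would typically be read and mapped onto that shape, assuming the standard `dotenv` package; the variable names come from the examples above, while the mapping function itself is illustrative rather than the CLI's actual loader:

```typescript
import 'dotenv/config'; // loads the .env file into process.env

// Illustrative mapping from the environment variables above onto the
// structure used in config.yaml (ai.provider, ai.api.*).
function configFromEnv() {
  const provider = process.env.AI_PROVIDER ?? 'ollama';
  return {
    ai: {
      provider,
      api:
        provider === 'api'
          ? {
              provider: process.env.API_PROVIDER ?? 'openai',
              apiKey: process.env.API_KEY ?? '',
              baseUrl: process.env.API_BASE_URL ?? 'https://api.openai.com/v1',
              model: process.env.API_MODEL ?? 'gpt-3.5-turbo',
            }
          : undefined,
    },
  };
}
```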
+ ## Common LLM API Settings
+
+ ### OpenAI (GPT series)
+
+ - API Key: https://platform.openai.com/api-keys
+ - Base URL: `https://api.openai.com/v1`
+ - Models: `gpt-4o`, `gpt-4o-mini`, `gpt-3.5-turbo`
+
+ ### GLM-4 (Zhipu AI)
+
+ - API Key: https://open.bigmodel.cn/usercenter/apikeys
+ - Base URL: `https://open.bigmodel.cn/api/paas/v4`
+ - Models: `glm-4`, `glm-4-flash`, `glm-3-turbo`
+
+ ### DeepSeek
+
+ - API Key: https://platform.deepseek.com/
+ - Base URL: `https://api.deepseek.com/v1`
+ - Models: `deepseek-chat`, `deepseek-coder`
+
+ ### Alibaba Qwen (Tongyi Qianwen)
+
+ - API Key: https://dashscope.console.aliyun.com/
+ - Base URL: `https://dashscope.aliyuncs.com/compatible-mode/v1`
+ - Models: `qwen-turbo`, `qwen-plus`, `qwen-max`
+
+ ### Moonshot AI (Kimi)
+
+ - API Key: https://platform.moonshot.cn/
+ - Base URL: `https://api.moonshot.cn/v1`
+ - Models: `moonshot-v1-8k`, `moonshot-v1-32k`, `moonshot-v1-128k`
+
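These are the same defaults the new setup wizard pre-fills. Collected here as one lookup table for reference (TypeScript; it mirrors the list above and the wizard's providerDefaults/modelDefaults, and is not a copy of the CLI's source):

```typescript
// Default endpoint and model per provider, per the list above.
const PROVIDER_DEFAULTS: Record<string, { baseUrl: string; model: string }> = {
  openai:   { baseUrl: 'https://api.openai.com/v1', model: 'gpt-4o' },
  glm:      { baseUrl: 'https://open.bigmodel.cn/api/paas/v4', model: 'glm-4' },
  deepseek: { baseUrl: 'https://api.deepseek.com/v1', model: 'deepseek-chat' },
  qwen:     { baseUrl: 'https://dashscope.aliyuncs.com/compatible-mode/v1', model: 'qwen-turbo' },
  moonshot: { baseUrl: 'https://api.moonshot.cn/v1', model: 'moonshot-v1-8k' },
};
```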
+ ## Verify the Configuration
+
+ After configuring, test that it works:
+
+ ```bash
+ # Check the configuration
+ fuck-abc config
+
+ # Start the CLI to test it
+ fuck-abc
+ ```
+
+ ## Switching Back to the Local LLM
+
+ If you want to switch back to local Ollama:
+
+ ```yaml
+ ai:
+   provider: 'ollama'
+   ollama:
+     baseUrl: 'http://localhost:11434'
+     model: 'llama3.2:3b'
+ ```
+
+ Or use environment variables:
+
+ ```env
+ AI_PROVIDER=ollama
+ OLLAMA_BASE_URL=http://localhost:11434
+ OLLAMA_MODEL=llama3.2:3b
+ ```
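When the Ollama route is chosen, a quick reachability check against Ollama's HTTP API can save a confusing first run. A sketch using Node 18+ `fetch` and Ollama's `/api/tags` endpoint (which lists locally installed models); the defaults match the config above:

```typescript
// Check that a local Ollama server is reachable and that the configured
// model has been pulled. /api/tags is Ollama's "list local models" endpoint.
async function checkOllama(baseUrl = 'http://localhost:11434', model = 'llama3.2:3b') {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable at ${baseUrl} (HTTP ${res.status})`);
  const { models } = (await res.json()) as { models: Array<{ name: string }> };
  const found = models.some((m) => m.name === model || m.name.startsWith(`${model}:`));
  console.log(found ? `✓ ${model} is available` : `✗ ${model} not found; run: ollama pull ${model}`);
}

checkOllama().catch((err) => console.error(err.message));
```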
package/README.md CHANGED
@@ -215,9 +215,21 @@ fuck-abc prompts
  ### Check Configuration

  ```bash
+ # Show current configuration
  fuck-abc config
+
+ # Run interactive setup wizard
+ fuck-abc config --setup
  ```

+ The setup wizard will guide you through:
+
+ - Choosing AI provider (local Ollama or cloud API)
+ - Configuring API settings
+ - Setting up preferences
+
+ ### Using Cloud APIs (OpenAI, GLM, DeepSeek, etc.)
+
  ## YAML Prompt Configuration

  The CLI supports YAML-based prompt configuration for more advanced customization.
@@ -290,6 +302,32 @@ The CLI can be configured via:
  1. **Environment Variables** (`.env` file)
  2. **Config File** (`~/.english-optimizer/config.yaml` or `~/.english-optimizer/config.json`)

+ ### Using Cloud APIs (OpenAI, GLM, DeepSeek, etc.)
+
+ By default, the CLI uses local Ollama. To use cloud APIs, you need to configure them.
+
+ **Quick Setup:**
+
+ 1. Copy the example environment file:
+
+ ```bash
+ cp .env.example .env
+ ```
+
+ 2. Edit `.env` with your API configuration:
+
+ ```env
+ AI_PROVIDER=api
+ API_PROVIDER=openai # or 'glm', 'custom'
+ API_KEY=your_api_key_here
+ API_BASE_URL=https://api.openai.com/v1
+ API_MODEL=gpt-4o
+ ```
+
+ For detailed API configuration instructions, see [API_SETUP.md](API_SETUP.md)
+
+ ### Config File
+
  ### Environment Variables

  Copy `.env.example` to `.env`:
package/config.yaml CHANGED
@@ -13,11 +13,41 @@ ai:

    # API Configuration (when provider: 'api')
    api:
-     provider: 'openai' # Options: 'openai', 'glm', 'custom'
-     apiKey: '' # Your API key
+     # Provider type: 'openai', 'glm', or 'custom'
+     provider: 'openai'
+
+     # Your API key (get from your API provider's dashboard)
+     apiKey: ''
+
+     # API endpoint URL
      baseUrl: 'https://api.openai.com/v1'
+
+     # Model name to use
      model: 'gpt-3.5-turbo'

+     # OpenAI Examples:
+     # - apiKey: 'sk-xxx'
+     # - baseUrl: 'https://api.openai.com/v1'
+     # - model: 'gpt-4o' or 'gpt-4o-mini' or 'gpt-3.5-turbo'
+
+     # GLM (Zhipu AI) Examples:
+     # - provider: 'glm'
+     # - apiKey: 'your-glm-key'
+     # - baseUrl: 'https://open.bigmodel.cn/api/paas/v4'
+     # - model: 'glm-4' or 'glm-4-flash'
+
+     # DeepSeek Examples:
+     # - provider: 'custom'
+     # - apiKey: 'your-deepseek-key'
+     # - baseUrl: 'https://api.deepseek.com/v1'
+     # - model: 'deepseek-chat'
+
+     # Aliyun Qwen Examples:
+     # - provider: 'custom'
+     # - apiKey: 'your-qwen-key'
+     # - baseUrl: 'https://dashscope.aliyuncs.com/compatible-mode/v1'
+     # - model: 'qwen-turbo' or 'qwen-plus'
+
  # Hotkey mappings for interactive mode
  hotkeys:
    professional: 'p'
@@ -35,4 +65,3 @@ features:
    enableCustomPrompts: true
    customPromptsPath: ~/.english-optimizer/prompts.json
    useYAMLPrompt: true # Enable YAML prompt configuration from prompt.yaml
-
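Putting config.yaml and the new wizard together, the configuration object the CLI works with appears to have roughly this shape. A TypeScript sketch inferred from the YAML keys above and the `newConfig` object built in the compiled wizard below; the package's real Config type may declare more fields or different optionality:

```typescript
// Shape inferred from config.yaml and the wizard's newConfig object.
// Field names follow the YAML keys; optionality is a guess.
interface EnglishOptimizerConfig {
  ai: {
    provider: 'ollama' | 'api';
    ollama?: { baseUrl: string; model: string };
    api?: {
      provider: 'openai' | 'glm' | 'custom' | string;
      apiKey: string;
      baseUrl: string;
      model: string;
    };
  };
  hotkeys?: Record<string, string>; // e.g. { professional: 'p' }
  features: {
    enableHistory?: boolean;
    enableCustomPrompts?: boolean;
    customPromptsPath?: string;
    useYAMLPrompt?: boolean;
  };
}
```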
package/dist/config/wizard.js ADDED
@@ -0,0 +1,183 @@
+ "use strict";
+ var __importDefault = (this && this.__importDefault) || function (mod) {
+     return (mod && mod.__esModule) ? mod : { "default": mod };
+ };
+ Object.defineProperty(exports, "__esModule", { value: true });
+ exports.runConfigWizard = runConfigWizard;
+ const inquirer_1 = __importDefault(require("inquirer"));
+ const chalk_1 = __importDefault(require("chalk"));
+ const config_1 = require("./config");
+ async function runConfigWizard() {
+     console.log(chalk_1.default.cyan.bold('\n🚀 English Optimizer CLI - Configuration Wizard\n'));
+     const configManager = new config_1.ConfigManager();
+     const currentConfig = configManager.getConfig();
+     const answers = await inquirer_1.default.prompt([
+         {
+             type: 'list',
+             name: 'provider',
+             message: 'Select AI provider:',
+             choices: [
+                 { name: '🤖 Ollama (Local)', value: 'ollama' },
+                 { name: '☁️ Cloud API (OpenAI, GLM, etc.)', value: 'api' },
+             ],
+             default: currentConfig.ai.provider,
+         },
+         {
+             type: 'input',
+             name: 'ollamaBaseUrl',
+             message: 'Ollama base URL:',
+             default: currentConfig.ai.ollama?.baseUrl || 'http://localhost:11434',
+             when: (answers) => answers.provider === 'ollama',
+             validate: (input) => {
+                 try {
+                     new URL(input);
+                     return true;
+                 }
+                 catch {
+                     return 'Please enter a valid URL';
+                 }
+             },
+         },
+         {
+             type: 'input',
+             name: 'ollamaModel',
+             message: 'Ollama model name:',
+             default: currentConfig.ai.ollama?.model || 'llama3.2:3b',
+             when: (answers) => answers.provider === 'ollama',
+         },
+         {
+             type: 'list',
+             name: 'apiProvider',
+             message: 'Select API provider:',
+             choices: [
+                 { name: '🔵 OpenAI (GPT-4, GPT-3.5)', value: 'openai' },
+                 { name: '🟢 GLM (Zhipu AI)', value: 'glm' },
+                 { name: '🔷 DeepSeek', value: 'deepseek' },
+                 { name: '🟣 Aliyun Qwen', value: 'qwen' },
+                 { name: '🌙 Moonshot (Kimi)', value: 'moonshot' },
+                 { name: '🔧 Custom API', value: 'custom' },
+             ],
+             default: currentConfig.ai.api?.provider || 'openai',
+             when: (answers) => answers.provider === 'api',
+         },
+         {
+             type: 'input',
+             name: 'apiKey',
+             message: 'Enter your API key:',
+             default: currentConfig.ai.api?.apiKey || '',
+             when: (answers) => answers.provider === 'api',
+             validate: (input) => {
+                 if (!input || input.trim() === '') {
+                     return 'API key cannot be empty';
+                 }
+                 return true;
+             },
+         },
+         {
+             type: 'input',
+             name: 'apiBaseUrl',
+             message: 'API base URL:',
+             default: (answers) => {
+                 const currentUrl = currentConfig.ai.api?.baseUrl || '';
+                 const providerDefaults = {
+                     openai: 'https://api.openai.com/v1',
+                     glm: 'https://open.bigmodel.cn/api/paas/v4',
+                     deepseek: 'https://api.deepseek.com/v1',
+                     qwen: 'https://dashscope.aliyuncs.com/compatible-mode/v1',
+                     moonshot: 'https://api.moonshot.cn/v1',
+                     custom: currentUrl,
+                 };
+                 return providerDefaults[answers.apiProvider] || currentUrl;
+             },
+             when: (answers) => answers.provider === 'api',
+             validate: (input) => {
+                 try {
+                     new URL(input);
+                     return true;
+                 }
+                 catch {
+                     return 'Please enter a valid URL';
+                 }
+             },
+         },
+         {
+             type: 'input',
+             name: 'apiModel',
+             message: 'Model name:',
+             default: (answers) => {
+                 const currentModel = currentConfig.ai.api?.model || '';
+                 const modelDefaults = {
+                     openai: 'gpt-4o',
+                     glm: 'glm-4',
+                     deepseek: 'deepseek-chat',
+                     qwen: 'qwen-turbo',
+                     moonshot: 'moonshot-v1-8k',
+                     custom: currentModel,
+                 };
+                 return modelDefaults[answers.apiProvider] || currentModel;
+             },
+             when: (answers) => answers.provider === 'api',
+         },
+         {
+             type: 'confirm',
+             name: 'enableHistory',
+             message: 'Enable optimization history?',
+             default: currentConfig.features.enableHistory ?? true,
+         },
+         {
+             type: 'confirm',
+             name: 'enableYAMLPrompt',
+             message: 'Use YAML prompt configuration (conversational mode)?',
+             default: true,
+         },
+     ]);
+     // Build new config
+     const newConfig = {
+         ...currentConfig,
+         ai: {
+             provider: answers.provider,
+             ollama: answers.provider === 'ollama'
+                 ? {
+                     baseUrl: answers.ollamaBaseUrl,
+                     model: answers.ollamaModel,
+                 }
+                 : undefined,
+             api: answers.provider === 'api'
+                 ? {
+                     provider: answers.apiProvider,
+                     apiKey: answers.apiKey,
+                     baseUrl: answers.apiBaseUrl,
+                     model: answers.apiModel,
+                 }
+                 : undefined,
+         },
+         features: {
+             ...currentConfig.features,
+             enableHistory: answers.enableHistory,
+             useYAMLPrompt: answers.enableYAMLPrompt,
+         },
+     };
+     // Save config
+     console.log(chalk_1.default.cyan('\n💾 Saving configuration...\n'));
+     configManager.saveConfig(newConfig);
+     console.log(chalk_1.default.green.bold('✅ Configuration saved successfully!\n'));
+     // Show summary
+     console.log(chalk_1.default.white.bold('Current Configuration:\n'));
+     console.log(chalk_1.default.gray('─'.repeat(50)));
+     if (answers.provider === 'ollama') {
+         console.log(chalk_1.default.cyan(`Provider: ${chalk_1.default.bold('Ollama (Local)')}`));
+         console.log(chalk_1.default.cyan(` URL: ${newConfig.ai.ollama.baseUrl}`));
+         console.log(chalk_1.default.cyan(` Model: ${newConfig.ai.ollama.model}`));
+     }
+     else {
+         console.log(chalk_1.default.cyan(`Provider: ${chalk_1.default.bold(answers.apiProvider)}`));
+         console.log(chalk_1.default.cyan(` URL: ${newConfig.ai.api.baseUrl}`));
+         console.log(chalk_1.default.cyan(` Model: ${newConfig.ai.api.model}`));
+     }
+     console.log(chalk_1.default.cyan(`History: ${answers.enableHistory ? 'Enabled' : 'Disabled'}`));
+     console.log(chalk_1.default.cyan(`YAML Prompt: ${answers.enableYAMLPrompt ? 'Enabled' : 'Disabled'}`));
+     console.log(chalk_1.default.gray('─'.repeat(50)));
+     console.log(chalk_1.default.green('\n🎉 Ready to use! Run:'));
+     console.log(chalk_1.default.white.bold(' fuck-abc\n'));
+ }
+ //# sourceMappingURL=wizard.js.map
package/dist/index.js CHANGED
@@ -136,11 +136,18 @@ program
  });
  program
      .command('config')
-     .description('Show current configuration')
-     .action(() => {
+     .description('Show current configuration or run setup wizard')
+     .option('-s, --setup', 'Run interactive setup wizard', false)
+     .action(async (options) => {
      try {
-         const config = config_1.configManager.getConfig();
-         console.log(JSON.stringify(config, null, 2));
+         if (options.setup) {
+             const { runConfigWizard } = await Promise.resolve().then(() => __importStar(require('./config/wizard')));
+             await runConfigWizard();
+         }
+         else {
+             const config = config_1.configManager.getConfig();
+             console.log(JSON.stringify(config, null, 2));
+         }
      }
      catch (error) {
          (0, display_1.displayError)(error);
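For readers more comfortable with the TypeScript source than the compiled output, the change above corresponds roughly to the following commander registration. This is a reconstruction, not the package's src/ file; `configManager` and `displayError` stand in for helpers index.js already imports, whose exact module paths are not visible in this diff:

```typescript
import { Command } from 'commander';

// Stand-ins for the helpers the compiled index.js references.
declare const configManager: { getConfig(): unknown };
declare function displayError(error: unknown): void;

const program = new Command();

program
  .command('config')
  .description('Show current configuration or run setup wizard')
  .option('-s, --setup', 'Run interactive setup wizard', false)
  .action(async (options: { setup: boolean }) => {
    try {
      if (options.setup) {
        // The dynamic import keeps the wizard (and inquirer) off the startup
        // path unless --setup is actually passed.
        const { runConfigWizard } = await import('./config/wizard');
        await runConfigWizard();
      } else {
        console.log(JSON.stringify(configManager.getConfig(), null, 2));
      }
    } catch (error) {
      displayError(error);
    }
  });
```

The `Promise.resolve().then(() => __importStar(require(...)))` in the compiled output is simply how tsc downlevels this dynamic `import()` when emitting CommonJS.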
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "english-optimizer-cli",
-   "version": "1.1.1",
+   "version": "1.3.0",
    "description": "CLI tool to help non-native English speakers improve their writing using AI",
    "main": "dist/index.js",
    "bin": {