english-optimizer-cli 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,487 @@
+ # English Optimizer CLI
+
+ A powerful CLI tool to help non-native English speakers improve their writing using AI. Simply type your text and use hotkeys to instantly optimize it in different styles!
+
+ ## Features
+
+ - 🎯 **Interactive Mode** - Real-time text optimization with hotkeys
+ - 🤖 **Multiple AI Backends** - Use local Ollama or cloud APIs (OpenAI, GLM)
+ - 📝 **Optimization Modes** - Professional, Concise, Grammar fix, Senior Developer
+ - 📜 **History Tracking** - Review your past optimizations and learn
+ - 🔧 **Custom Prompts** - Create your own optimization styles
+ - 🐋 **Docker Support** - Easy setup with Ollama in Docker
+ - ⚡ **Fast & Private** - Local AI processing with Ollama
+
+ ## Prerequisites
+
+ - Node.js 18+ and npm
+ - Docker (for local Ollama setup)
+
+ ## Text-based Prompt Template
+
+ You can customize the translation/optimization behavior with a text-based prompt template. (YAML-based configuration is covered in the "YAML Prompt Configuration" section below.)
+
+ ### View Current Prompt
+
+ ```bash
+ fuck-abc prompt --show
+ ```
+
+ ### Edit Prompt Template
+
+ ```bash
+ fuck-abc prompt --edit
+ # Or edit directly
+ code ~/.english-optimizer/translation-prompt.txt
+ ```
+
+ ### Example Custom Prompts
+
+ **For formal business English:**
+
+ ```
+ You are a professional business translator.
+ Translate/optimize to formal business English with professional terminology.
+ Text: "{text}"
+ ```
+
+ **For casual developer chat:**
+
+ ```
+ You are a software developer.
+ Translate/optimize to casual English like Slack messages.
+ Text: "{text}"
+ ```
+
+ **For technical documentation:**
+
+ ```
+ You are a technical documentation translator.
+ Translate/optimize to precise technical documentation English.
+ Text: "{text}"
+ ```
+
+ See `TRANSLATION_PROMPT_EXAMPLE.md` for more examples.
+
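The `{text}` placeholder marks where your input is spliced into the template. A minimal sketch of that substitution (hypothetical — the CLI's actual loader lives in `src/prompts/custom.ts` and may differ):

```typescript
// Hypothetical sketch: fill a template's {text} placeholder with user input.
// This mirrors the documented "{text}" convention, not the CLI's real code.
function renderTemplate(template: string, text: string): string {
  // split/join replaces every occurrence of the placeholder.
  return template.split("{text}").join(text);
}

const template = 'You are a software developer.\nText: "{text}"';
console.log(renderTemplate(template, "ship it today"));
```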
+ ## Quick Start
+
+ ### 1. Clone and Install
+
+ ```bash
+ git clone <repository-url>
+ cd english-optimizer-cli
+ npm install
+ ```
+
+ ### 2. Setup Ollama (Local AI)
+
+ ```bash
+ # Make setup script executable
+ chmod +x scripts/setup-ollama.sh
+
+ # Run setup (starts Ollama and pulls the model)
+ ./scripts/setup-ollama.sh
+ ```
+
+ ### 3. Build the CLI
+
+ ```bash
+ npm run build
+ ```
+
+ ### 4. Start Using
+
+ ```bash
+ npm start
+ # or
+ fuck-abc
+ ```
+
+ ## Usage
+
+ ### Instant Mode (Default)
+
+ Just run the CLI and start typing! Press **Cmd + a hotkey** to instantly optimize:
+
+ ```bash
+ fuck-abc
+ # or
+ npm start
+ ```
+
+ **How it works:**
+
+ 1. Type your text normally
+ 2. Press `Cmd+P` for Professional tone
+ 3. Press `Cmd+C` for a Concise version
+ 4. Press `Cmd+G` for a Grammar fix
+ 5. Press `Cmd+D` for Senior developer style
+ 6. Press **Enter** to accept the optimization, or keep typing to discard it
+
+ **Controls:**
+
+ - `Ctrl+U` - Clear text
+ - `Ctrl+C` - Quit
+
+ ### Classic Mode (Optional)
+
+ If you prefer to submit text first and then choose optimizations:
+
+ ```bash
+ fuck-abc --classic
+ # or
+ npm start -- --classic
+ ```
+
+ 1. Enter your text (press Enter twice to finish)
+ 2. Press hotkeys to see different optimizations:
+    - `Ctrl+P` - Professional tone
+    - `Ctrl+C` - Concise version
+    - `Ctrl+G` - Grammar fix
+    - `Ctrl+D` - Senior developer style
+    - `Ctrl+R` - Reset to original
+    - `Ctrl+H` - View history
+    - `Ctrl+Q` - Quit
+
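Under the hood, each hotkey presumably maps to one of the optimization modes before the text is sent to the AI backend. A hypothetical sketch of that dispatch (the real logic lives in `src/core/editor.ts`; names here are illustrative):

```typescript
// Hypothetical sketch: map a pressed hotkey to an optimization mode.
// Mode names follow the hotkeys section of the config file.
const HOTKEY_MODES: Record<string, string> = {
  p: "professional",
  c: "concise",
  g: "grammar",
  d: "senior_developer",
};

function modeForHotkey(key: string): string | undefined {
  // Normalize case so Cmd+P and Cmd+Shift+P behave the same.
  return HOTKEY_MODES[key.toLowerCase()];
}

console.log(modeForHotkey("P")); // "professional"
```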
+ ### View History
+
+ ```bash
+ fuck-abc history
+ fuck-abc history -n 5   # Show last 5 entries
+ ```
+
+ ### List Custom Prompts
+
+ ```bash
+ fuck-abc prompts
+ ```
+
+ ### Check Configuration
+
+ ```bash
+ fuck-abc config
+ ```
+
+ ## YAML Prompt Configuration
+
+ The CLI supports YAML-based prompt configuration for more advanced customization.
+
+ ### Using YAML Prompts
+
+ Create a `prompt.yaml` file in your project root (or any directory where you run the CLI):
+
+ ```yaml
+ role:
+   name: Native English Full-Stack Engineer Coach
+   description: >
+     You are a senior full-stack software engineer who is a native English speaker.
+     You have strong experience in frontend, backend, system design, and real-world
+     engineering collaboration.
+
+ goals:
+   - Help the user learn natural, spoken English used by software engineers
+   - Translate the user's input into idiomatic, native-level English
+   - Optimize wording to sound natural, confident, and professional
+
+ user_profile:
+   background: Software Engineer
+   native_language: Chinese
+   learning_goal: Improve practical English for daily engineering communication
+
+ instructions:
+   - Always assume the user wants to improve their English, not just get a literal translation
+   - Rewrite the input in a way a native English-speaking engineer would naturally say it
+   - Prefer conversational, work-friendly language over academic or overly formal English
+   - Keep sentences concise and clear, like real team communication
+
+ output_format:
+   style: >
+     Natural, spoken English commonly used in engineering teams.
+     Sounds like real conversations in meetings, Slack, or code reviews.
+
+ examples:
+   - input: '这个需求我已经完成了,但是还需要再测试一下'
+     output: "I've finished this feature, but I still need to do some more testing."
+   - input: '这个问题我稍后再跟进'
+     output: "I'll follow up on this later."
+
+ constraints:
+   - Do not over-explain unless the user explicitly asks
+   - Do not change the original meaning
+   - Do not use overly complex vocabulary unless it fits real engineering conversations
+ ```
+
+ ### Managing YAML Prompts
+
+ ```bash
+ # Check if a YAML prompt is detected
+ fuck-abc prompt
+
+ # Show current YAML prompt configuration
+ fuck-abc prompt --show
+
+ # Edit YAML prompt file
+ fuck-abc prompt --edit
+ ```
+
+ The CLI automatically detects `prompt.yaml` in the current directory and uses it for translation/optimization.
+
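The detection step itself is straightforward; a minimal sketch, assuming the CLI simply checks the working directory (illustrative only — the actual behavior is governed by the `useYAMLPrompt` setting and the CLI's config code):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical sketch: look for prompt.yaml in the current working
// directory and fall back to the built-in prompts when it is absent.
function findYamlPrompt(cwd: string = process.cwd()): string | null {
  const candidate = path.join(cwd, "prompt.yaml");
  return fs.existsSync(candidate) ? candidate : null;
}

const found = findYamlPrompt();
console.log(found ?? "No prompt.yaml found; using built-in prompt templates.");
```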
+ ## Configuration
+
+ The CLI can be configured via:
+
+ 1. **Environment Variables** (`.env` file)
+ 2. **Config File** (`~/.english-optimizer/config.yaml` or `~/.english-optimizer/config.json`)
+
+ ### Environment Variables
+
+ Copy `.env.example` to `.env`:
+
+ ```bash
+ cp .env.example .env
+ ```
+
+ Edit `.env`:
+
+ ```env
+ # AI Provider: 'ollama' for local, 'api' for cloud
+ AI_PROVIDER=ollama
+
+ # Ollama Configuration (local)
+ OLLAMA_BASE_URL=http://localhost:11434
+ OLLAMA_MODEL=llama3.2:3b
+
+ # API Configuration (cloud - OpenAI/GLM)
+ # AI_PROVIDER=api
+ # API_PROVIDER=openai
+ # API_KEY=your_api_key_here
+ # API_BASE_URL=https://api.openai.com/v1
+ # API_MODEL=gpt-3.5-turbo
+
+ # For GLM (Zhipu AI)
+ # API_PROVIDER=glm
+ # API_KEY=your_glm_api_key
+ # API_BASE_URL=https://open.bigmodel.cn/api/paas/v4
+ # API_MODEL=glm-4
+ ```
+
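As a sketch, reading the Ollama settings with the defaults above as fallbacks might look like this (hypothetical — the actual parsing lives in `src/config/config.ts`):

```typescript
// Hypothetical sketch: read Ollama settings from the environment,
// falling back to the documented defaults.
interface OllamaConfig {
  baseUrl: string;
  model: string;
}

function readOllamaConfig(env: Record<string, string | undefined>): OllamaConfig {
  return {
    baseUrl: env.OLLAMA_BASE_URL ?? "http://localhost:11434",
    model: env.OLLAMA_MODEL ?? "llama3.2:3b",
  };
}

console.log(readOllamaConfig({})); // both defaults apply
```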
+ ### Config File
+
+ The config file can be in YAML or JSON format and is created automatically at `~/.english-optimizer/config.yaml` (or `config.json`):
+
+ ```yaml
+ ai:
+   provider: 'ollama'
+   ollama:
+     baseUrl: 'http://localhost:11434'
+     model: 'llama3.2:3b'
+   api:
+     provider: 'openai'
+     apiKey: ''
+     baseUrl: 'https://api.openai.com/v1'
+     model: 'gpt-3.5-turbo'
+
+ hotkeys:
+   professional: 'p'
+   concise: 'c'
+   grammar: 'g'
+   senior_developer: 'd'
+   reset: 'r'
+   history: 'h'
+   quit: 'q'
+
+ features:
+   enableHistory: true
+   historyPath: ~/.english-optimizer/history.json
+   enableCustomPrompts: true
+   customPromptsPath: ~/.english-optimizer/prompts.json
+   useYAMLPrompt: true
+ ```
+
+ ## Custom Prompts
+
+ You can create custom optimization prompts at `~/.english-optimizer/prompts.json`:
+
+ ```json
+ [
+   {
+     "name": "Academic",
+     "description": "Rewrite in an academic style suitable for research papers",
+     "prompt": "Please rewrite the following text in an academic style...",
+     "hotkey": "a"
+   },
+   {
+     "name": "Friendly",
+     "description": "Make the text more friendly and casual",
+     "prompt": "Please rewrite the following text to sound more friendly...",
+     "hotkey": "f"
+   }
+ ]
+ ```
+
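Each custom prompt needs a hotkey that does not collide with the built-in `p/c/g/d/r/h/q` bindings or with another custom prompt. A hypothetical validation sketch (not the CLI's actual code — names are illustrative):

```typescript
interface CustomPrompt {
  name: string;
  description: string;
  prompt: string;
  hotkey: string;
}

// Hotkeys already taken by the built-in modes and controls.
const RESERVED_HOTKEYS = new Set(["p", "c", "g", "d", "r", "h", "q"]);

// Hypothetical sketch: collect hotkey conflicts in a prompts.json payload.
function validatePrompts(prompts: CustomPrompt[]): string[] {
  const errors: string[] = [];
  const seen = new Set<string>();
  for (const p of prompts) {
    if (RESERVED_HOTKEYS.has(p.hotkey)) {
      errors.push(`"${p.name}": hotkey "${p.hotkey}" is reserved`);
    } else if (seen.has(p.hotkey)) {
      errors.push(`"${p.name}": hotkey "${p.hotkey}" is already in use`);
    }
    seen.add(p.hotkey);
  }
  return errors;
}
```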
+ ## Docker Commands
+
+ ```bash
+ # Start Ollama
+ docker-compose up -d
+
+ # Stop Ollama
+ docker-compose down
+
+ # View logs
+ docker-compose logs -f ollama
+
+ # Restart Ollama
+ docker-compose restart
+ ```
+
+ ## Production Deployment
+
+ To use cloud APIs instead of local Ollama:
+
+ 1. Update your `.env` file:
+
+    ```env
+    AI_PROVIDER=api
+    API_PROVIDER=openai  # or 'glm' for Zhipu AI
+    API_KEY=your_actual_api_key
+    API_MODEL=gpt-3.5-turbo  # or your preferred model
+    ```
+
+ 2. The CLI will automatically use the cloud API
+
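The switch between backends is presumably just a read of `AI_PROVIDER` inside the provider factory (`src/ai/provider.ts`). A hypothetical sketch of that selection step, with illustrative names:

```typescript
// Hypothetical sketch: choose the backend from AI_PROVIDER, defaulting to
// the local Ollama provider when the variable is unset or unrecognized.
type ProviderKind = "ollama" | "api";

function selectProvider(aiProvider: string | undefined): ProviderKind {
  return aiProvider === "api" ? "api" : "ollama";
}

console.log(selectProvider("api"));     // "api"
console.log(selectProvider(undefined)); // "ollama"
```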
+ ## Development
+
+ ```bash
+ # Watch and rebuild
+ npm run watch
+
+ # Run in development mode
+ npm run dev
+
+ # Lint
+ npm run lint
+ ```
+
+ ## Project Structure
+
+ ```
+ english-optimizer-cli/
+ ├── src/
+ │   ├── index.ts            # CLI entry point
+ │   ├── core/
+ │   │   ├── editor.ts       # Interactive editor with hotkeys
+ │   │   └── optimizer.ts    # Optimization orchestration
+ │   ├── ai/
+ │   │   ├── provider.ts     # AI provider factory
+ │   │   ├── ollama.ts       # Ollama integration
+ │   │   └── api-provider.ts # OpenAI/GLM integration
+ │   ├── prompts/
+ │   │   ├── templates.ts    # Built-in prompt templates
+ │   │   └── custom.ts       # Custom prompt loader
+ │   ├── config/
+ │   │   └── config.ts       # Configuration management
+ │   ├── history/
+ │   │   └── logger.ts       # History logging
+ │   └── utils/
+ │       └── display.ts      # Output formatting
+ ├── docker-compose.yml      # Ollama Docker setup
+ ├── scripts/
+ │   └── setup-ollama.sh     # Setup script
+ └── package.json
+ ```
+
+ ## Troubleshooting
+
+ ### Ollama Connection Failed
+
+ ```bash
+ # Check if Ollama is running
+ docker-compose ps
+
+ # Restart Ollama
+ docker-compose restart
+
+ # Check Ollama logs
+ docker-compose logs ollama
+ ```
+
+ ### Model Not Found
+
+ ```bash
+ # Manually pull the model
+ docker exec -it english-optimizer-ollama ollama pull llama3.2:3b
+ ```
+
+ ### API Key Issues
+
+ Make sure your `.env` file has the correct API key and that the API provider URL is accurate.
+
+ ## License
+
+ MIT
+
+ ## Contributing
+
+ Contributions are welcome! Please feel free to submit a Pull Request.
+
+ ## Author
+
+ Created with ❤️ for non-native English speakers everywhere.
package/config.yaml ADDED
@@ -0,0 +1,38 @@
+ # English Optimizer CLI Configuration
+ # Copy this file to ~/.english-optimizer/config.yaml to customize
+
+ # AI Provider Configuration
+ # Options: 'ollama' for local or 'api' for cloud API
+ ai:
+   provider: 'ollama' # or 'api'
+
+   # Ollama Configuration (when provider: 'ollama')
+   ollama:
+     baseUrl: 'http://localhost:11434'
+     model: 'llama3.2:3b'
+
+   # API Configuration (when provider: 'api')
+   api:
+     provider: 'openai' # Options: 'openai', 'glm', 'custom'
+     apiKey: '' # Your API key
+     baseUrl: 'https://api.openai.com/v1'
+     model: 'gpt-3.5-turbo'
+
+ # Hotkey mappings for interactive mode
+ hotkeys:
+   professional: 'p'
+   concise: 'c'
+   grammar: 'g'
+   senior_developer: 'd'
+   reset: 'r'
+   history: 'h'
+   quit: 'q'
+
+ # Feature settings
+ features:
+   enableHistory: true
+   historyPath: ~/.english-optimizer/history.json
+   enableCustomPrompts: true
+   customPromptsPath: ~/.english-optimizer/prompts.json
+   useYAMLPrompt: true # Enable YAML prompt configuration from prompt.yaml
+
@@ -0,0 +1,138 @@
+ "use strict";
+ var __importDefault = (this && this.__importDefault) || function (mod) {
+     return (mod && mod.__esModule) ? mod : { "default": mod };
+ };
+ Object.defineProperty(exports, "__esModule", { value: true });
+ exports.ApiProvider = void 0;
+ const axios_1 = __importDefault(require("axios"));
+ const templates_1 = require("../prompts/templates");
+ class ApiProvider {
+     constructor(config) {
+         this.config = config;
+     }
+     async optimize(text, mode) {
+         const prompt = (0, templates_1.getPromptTemplate)(mode, text);
+         try {
+             const response = await axios_1.default.post(`${this.config.baseUrl}/chat/completions`, {
+                 model: this.config.model,
+                 messages: [
+                     {
+                         role: 'user',
+                         content: prompt,
+                     },
+                 ],
+                 temperature: 0.7,
+                 max_tokens: 2000,
+             }, {
+                 headers: {
+                     'Authorization': `Bearer ${this.config.apiKey}`,
+                     'Content-Type': 'application/json',
+                 },
+                 timeout: 60000, // 60 seconds timeout
+             });
+             if (response.data && response.data.choices && response.data.choices.length > 0) {
+                 let result = response.data.choices[0].message.content.trim();
+                 // Remove quotes if the model wrapped the response in them
+                 if (result.startsWith('"') && result.endsWith('"')) {
+                     result = result.slice(1, -1);
+                 }
+                 // Remove common prefixes if present
+                 const prefixesToRemove = ['Rewritten text:', 'Corrected text:', 'Optimized text:'];
+                 for (const prefix of prefixesToRemove) {
+                     if (result.startsWith(prefix)) {
+                         result = result.slice(prefix.length).trim();
+                     }
+                 }
+                 return result;
+             }
+             throw new Error('Invalid response from API');
+         }
+         catch (error) {
+             if (axios_1.default.isAxiosError(error)) {
+                 const axiosError = error;
+                 if (axiosError.response) {
+                     const status = axiosError.response.status;
+                     if (status === 401) {
+                         throw new Error('Invalid API key. Please check your API credentials.');
+                     }
+                     else if (status === 429) {
+                         throw new Error('Rate limit exceeded. Please try again later.');
+                     }
+                 }
+                 throw new Error(`API error: ${axiosError.message}`);
+             }
+             throw error;
+         }
+     }
+     async generateWithPrompt(prompt) {
+         try {
+             const response = await axios_1.default.post(`${this.config.baseUrl}/chat/completions`, {
+                 model: this.config.model,
+                 messages: [
+                     {
+                         role: 'user',
+                         content: prompt,
+                     },
+                 ],
+                 temperature: 0.7,
+                 max_tokens: 2000,
+             }, {
+                 headers: {
+                     'Authorization': `Bearer ${this.config.apiKey}`,
+                     'Content-Type': 'application/json',
+                 },
+                 timeout: 60000,
+             });
+             if (response.data && response.data.choices && response.data.choices.length > 0) {
+                 let result = response.data.choices[0].message.content.trim();
+                 // Remove quotes if the model wrapped the response in them
+                 if (result.startsWith('"') && result.endsWith('"')) {
+                     result = result.slice(1, -1);
+                 }
+                 return result;
+             }
+             throw new Error('Invalid response from API');
+         }
+         catch (error) {
+             if (axios_1.default.isAxiosError(error)) {
+                 const axiosError = error;
+                 if (axiosError.response) {
+                     const status = axiosError.response.status;
+                     if (status === 401) {
+                         throw new Error('Invalid API key. Please check your API credentials.');
+                     }
+                     else if (status === 429) {
+                         throw new Error('Rate limit exceeded. Please try again later.');
+                     }
+                 }
+                 throw new Error(`API error: ${axiosError.message}`);
+             }
+             throw error;
+         }
+     }
+     async isAvailable() {
+         if (!this.config.apiKey) {
+             return false;
+         }
+         try {
+             // Try a simple API call to check if the key is valid
+             const response = await axios_1.default.post(`${this.config.baseUrl}/chat/completions`, {
+                 model: this.config.model,
+                 messages: [{ role: 'user', content: 'test' }],
+                 max_tokens: 5,
+             }, {
+                 headers: {
+                     'Authorization': `Bearer ${this.config.apiKey}`,
+                     'Content-Type': 'application/json',
+                 },
+                 timeout: 10000,
+             });
+             return response.status === 200;
+         }
+         catch {
+             return false;
+         }
+     }
+ }
+ exports.ApiProvider = ApiProvider;
+ //# sourceMappingURL=api-provider.js.map