llmshell-cli 0.0.1__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- gpt_shell/__init__.py +9 -0
- gpt_shell/cli.py +56 -0
- gpt_shell/config.py +190 -0
- gpt_shell/core.py +32 -0
- gpt_shell/llm_client.py +325 -0
- gpt_shell/llm_manager.py +225 -0
- gpt_shell/main.py +327 -0
- gpt_shell/py.typed +1 -0
- gpt_shell/utils.py +305 -0
- llmshell_cli-0.0.1.dist-info/METADATA +446 -0
- llmshell_cli-0.0.1.dist-info/RECORD +15 -0
- llmshell_cli-0.0.1.dist-info/WHEEL +5 -0
- llmshell_cli-0.0.1.dist-info/entry_points.txt +2 -0
- llmshell_cli-0.0.1.dist-info/licenses/LICENSE +21 -0
- llmshell_cli-0.0.1.dist-info/top_level.txt +1 -0
@@ -0,0 +1,446 @@
Metadata-Version: 2.4
Name: llmshell-cli
Version: 0.0.1
Summary: Convert natural language to shell commands using LLMs (GPT4All, OpenAI, Ollama)
Author-email: Naresh Reddy Gurijala <naresh.gurijala@gmail.com>
Maintainer-email: Naresh Reddy Gurijala <naresh.gurijala@gmail.com>
License: MIT
Project-URL: Homepage, https://github.com/imgnr/llmshell
Project-URL: Repository, https://github.com/imgnr/llmshell
Project-URL: Issues, https://github.com/imgnr/llmshell/issues
Keywords: gpt,shell,cli,llm,gpt4all,openai,ollama,command-line,natural-language,aishell,intelligent-shell
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: typer>=0.9.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: gpt4all>=2.0.0
Requires-Dist: requests>=2.31.0
Requires-Dist: rich>=13.0.0
Requires-Dist: openai>=1.0.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0; extra == "dev"
Requires-Dist: black>=23.0; extra == "dev"
Requires-Dist: flake8>=6.0; extra == "dev"
Requires-Dist: mypy>=1.0; extra == "dev"
Dynamic: license-file

# 🐚 llmshell-cli

A powerful Python CLI tool that converts natural language into Linux/Unix shell commands using LLMs.

## ✨ Features

- 🤖 **Multiple LLM Backends**: GPT4All (local, default), OpenAI, Ollama, or custom APIs
- 🔒 **Privacy-First**: Uses GPT4All locally by default - no data leaves your machine
- 🎯 **Smart Command Generation**: Converts natural language to accurate shell commands
- ✅ **Safe Execution**: Confirmation prompts before running commands
- 🎨 **Beautiful Output**: Colored terminal output using Rich
- ⚙️ **Flexible Configuration**: YAML-based config at `~/.llmshell/config.yaml`
- 🔧 **Easy Setup**: Auto-downloads models, handles fallbacks gracefully

## 📦 Installation

```bash
pip install llmshell-cli
```

### Development Installation

```bash
git clone https://github.com/imgnr/llmshell.git
cd llmshell
pip install -e ".[dev]"
```

## 🚀 Quick Start

### Generate a Command

```bash
llmshell run "list all docker containers"
# Output: docker ps -a
```

### Get Command with Explanation

```bash
llmshell run "find large files" --explain
```

### Dry Run (Don't Execute)

```bash
llmshell run "remove all logs" --dry-run
```

### Auto-Execute (Skip Confirmation)

```bash
llmshell run "show disk usage" --execute
# Note: Dangerous commands will still require confirmation
```

## 📖 CLI Commands

### `llmshell run`

Generate and optionally execute shell commands:

```bash
llmshell run "your natural language request"
llmshell run "list python files" --dry-run
llmshell run "check memory usage" --explain
llmshell run "restart nginx" --execute
```

**Options:**
- `--dry-run` / `-d`: Show command without executing
- `--explain` / `-x`: Include explanation with the command
- `--execute` / `-e`: Skip confirmation prompt (except for dangerous commands)
- `--backend` / `-b`: Override default backend (gpt4all, openai, ollama, custom)

**Safety Note:** Dangerous commands (like `rm -rf /`, `mkfs`, etc.) will **always** require confirmation, even with `--execute`.
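
A guard like this is typically pattern-based. For illustration only — the patterns and helper below are hypothetical, not the package's actual implementation — a minimal sketch:

```python
import re

# Hypothetical patterns for illustration; the package's real list may differ.
DANGEROUS_PATTERNS = [
    r"\brm\s+-[a-z]*r[a-z]*f",   # rm -rf and flag-order variants
    r"\bmkfs(\.\w+)?\b",         # formatting a filesystem
    r"\bdd\b.*\bof=/dev/",       # raw writes to a block device
    r":\(\)\s*\{.*\};\s*:",      # classic fork bomb
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches a known destructive pattern."""
    return any(re.search(pattern, command) for pattern in DANGEROUS_PATTERNS)

assert is_dangerous("rm -rf /") and not is_dangerous("ls -la")
```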

### `llmshell config`

Manage configuration:

```bash
# Show current configuration
llmshell config show

# Set a configuration value
llmshell config set llm_backend openai
llmshell config set backends.openai.api_key sk-xxxxx

# List available backends
llmshell config backends
```

### `llmshell model`

Manage GPT4All models:

```bash
# Show available models to download
llmshell model show-available

# Install/download the default model
llmshell model install

# Install a specific model
llmshell model install --name Meta-Llama-3-8B-Instruct.Q4_0.gguf

# List installed models
llmshell model list
```

### `llmshell doctor`

Diagnose setup and check backend availability:

```bash
llmshell doctor
```

Output shows:
- Configuration file status
- Available backends
- Model installation status
- API connectivity

## ⚙️ Configuration

Configuration is stored at `~/.llmshell/config.yaml`:

```yaml
llm_backend: gpt4all

backends:
  gpt4all:
    model: mistral-7b-instruct-v0.2.Q4_0.gguf
    model_path: null  # Auto-detected

  openai:
    api_key: sk-your-api-key-here
    model: gpt-4-turbo
    base_url: null  # Optional custom endpoint

  ollama:
    model: llama3
    api_url: http://localhost:11434

  custom:
    api_url: https://your-llm-endpoint/v1/chat/completions
    headers:
      Authorization: Bearer YOUR_TOKEN

execution:
  auto_execute: false
  confirmation_required: true

output:
  colored: true
  verbose: false
```
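
`llmshell config set` writes dotted keys such as `backends.openai.api_key` into this file. If you need the same from Python, a minimal sketch using PyYAML (a declared dependency); `set_key` is an illustrative helper, not a package API:

```python
from pathlib import Path
import yaml

CONFIG_PATH = Path.home() / ".llmshell" / "config.yaml"

def set_key(dotted_key: str, value) -> None:
    """Set a dotted key such as 'backends.ollama.model' in the config file."""
    config = yaml.safe_load(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}
    config = config or {}
    node = config
    *parents, leaf = dotted_key.split(".")
    for part in parents:
        node = node.setdefault(part, {})  # create intermediate mappings as needed
    node[leaf] = value
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    CONFIG_PATH.write_text(yaml.safe_dump(config, default_flow_style=False))

set_key("llm_backend", "ollama")
set_key("backends.ollama.model", "llama3")
```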

## 🔧 Backend Setup

### GPT4All (Default - Local)

No API key or external service is needed. To set up a local model:
```bash
# Show available models
llmshell model show-available

# Install a model (default: Meta Llama 3)
llmshell model install

# Or install a specific model
llmshell model install --name Phi-3-mini-4k-instruct.Q4_0.gguf
```

This downloads the model locally (~2-5GB depending on the model).
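
The local backend presumably builds on the `gpt4all` Python package (a declared dependency, `gpt4all>=2.0.0`). For orientation, a minimal sketch of using that library directly, independent of llmshell:

```python
from gpt4all import GPT4All

# Downloads the model to the local cache on first use.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "Reply with only a shell command: list all docker containers",
        max_tokens=64,
    )
print(reply)
```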

### OpenAI

1. Get an API key from [OpenAI](https://platform.openai.com)
2. Configure:
```bash
llmshell config set backends.openai.api_key sk-xxxxx
llmshell config set llm_backend openai
```

### Ollama

1. Install [Ollama](https://ollama.ai)
2. Pull a model:
```bash
ollama pull llama3
```
3. Configure:
```bash
llmshell config set llm_backend ollama
```
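
The Ollama backend talks to the local server over its REST API at the configured `api_url`. As a rough sketch of such a call against Ollama's standard `/api/generate` endpoint (the exact request llmshell builds may differ):

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # backends.ollama.api_url + /api/generate
    json={
        "model": "llama3",
        "prompt": "Reply with only a shell command: show disk usage",
        "stream": False,  # one JSON object instead of a token stream
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
```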

### Custom API

For any OpenAI-compatible API:
```bash
llmshell config set llm_backend custom
llmshell config set backends.custom.api_url https://your-endpoint
llmshell config set backends.custom.headers.Authorization "Bearer TOKEN"
```
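
"OpenAI-compatible" means the endpoint accepts the standard chat-completions request shape. A sketch of the kind of request such a backend would send (simplified; the URL, model name, and prompt are placeholders, and llmshell's actual payload may differ):

```python
import requests

resp = requests.post(
    "https://your-endpoint/v1/chat/completions",  # backends.custom.api_url
    headers={"Authorization": "Bearer TOKEN"},    # backends.custom.headers
    json={
        "model": "your-model",
        "messages": [
            {"role": "system", "content": "Convert the request into one shell command."},
            {"role": "user", "content": "check if port 8080 is open"},
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```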

## 💡 Usage Examples

```bash
# Docker commands
llmshell run "stop all running containers"
llmshell run "remove unused images"

# File operations
llmshell run "find files modified in last 24 hours"
llmshell run "compress all logs to archive"

# System monitoring
llmshell run "show top 10 memory-consuming processes"
llmshell run "check disk space on all mounts"

# Git operations
llmshell run "show commits from last week"
llmshell run "list branches sorted by recent activity"

# Network operations
llmshell run "check if port 8080 is open"
llmshell run "show active network connections"
```

## 🐍 Python API

You can also use llmshell programmatically:

```python
from gpt_shell.config import Config
from gpt_shell.llm_manager import LLMManager

# Initialize
config = Config()
manager = LLMManager(config)

# Generate command
command = manager.generate_command("list all docker containers")
print(f"Generated: {command}")

# With explanation
result = manager.generate_command("find large files", explain=True)
print(result)
```
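
LLM output is untrusted, so review a generated command before running it. Continuing the example above, a minimal sketch of confirming and executing it with the standard library (not a package API):

```python
import subprocess

command = manager.generate_command("show disk usage")
print(f"About to run: {command}")
if input("Proceed? [y/N] ").strip().lower() == "y":
    # shell=True because generated commands may use pipes, globs, or redirection
    subprocess.run(command, shell=True, check=False)
```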

## 🐳 Docker Support

Run llmshell in a Docker container for isolated environments.

### Quick Start

```bash
# Build the image
docker build -t llmshell:latest .

# Run a command
docker run --rm llmshell:latest run "list files"

# With persistent config
docker run -it --rm \
  -v llmshell-data:/root/.llmshell \
  llmshell:latest model install
```

### Using Docker Compose

```bash
# Interactive mode
docker-compose run --rm llmshell

# Inside container
llmshell run "show disk usage"
```

For detailed Docker instructions, see [DOCKER.md](DOCKER.md)

## 🧪 Testing

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=gpt_shell --cov-report=html

# Run specific test file
pytest tests/test_config.py
```

## 🛠️ Development

### Setup

```bash
# Clone and install
git clone https://github.com/imgnr/llmshell.git
cd llmshell
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black src tests

# Type checking
mypy src

# Linting
flake8 src tests
```

## 🤝 Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Add tests for new features
4. Ensure all tests pass
5. Submit a pull request

## 📋 Requirements

- Python 3.8+
- ~4GB disk space for a GPT4All model (optional)
- Internet connection (for the OpenAI and custom backends, and for model downloads)

## 🔒 Privacy

- **GPT4All**: All processing happens locally; no data is sent anywhere
- **OpenAI/Custom APIs**: Commands are sent to external services
- **Ollama**: Runs locally; no data is sent to external servers

## 🐛 Troubleshooting

### GPT4All model not found
```bash
llmshell model install
```

### OpenAI API errors
```bash
llmshell config set backends.openai.api_key sk-xxxxx
llmshell doctor
```

### Ollama not connecting
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama
ollama serve
```

### Configuration issues
```bash
# Reset to defaults
rm ~/.llmshell/config.yaml
llmshell config show
```

## 📝 License

MIT License - see the LICENSE file for details.

## 🙏 Acknowledgments

- [GPT4All](https://gpt4all.io/) - Local LLM runtime
- [Typer](https://typer.tiangolo.com/) - CLI framework
- [Rich](https://rich.readthedocs.io/) - Terminal formatting
- [OpenAI](https://openai.com/) - API integration
- [Ollama](https://ollama.ai/) - Local LLM platform

## 📚 More Examples

### System Administration
```bash
llmshell run "create a backup of /etc directory"
llmshell run "find processes using more than 1GB RAM"
llmshell run "schedule a cron job for midnight"
```

### Development
```bash
llmshell run "count lines of code in this project"
llmshell run "find all TODO comments in python files"
llmshell run "generate requirements.txt from imports"
```

### Data Processing
```bash
llmshell run "extract column 2 from CSV file"
llmshell run "convert all PNG images to JPG"
llmshell run "merge all text files into one"
```

---

**Made with ❤️ for developers who prefer typing naturally**
@@ -0,0 +1,15 @@
gpt_shell/__init__.py,sha256=xT5DLKjjNA7Pr_6jA4UmMXiNfK5ew8Y3DvqoWGW5PLw,277
gpt_shell/cli.py,sha256=YijKlpO7BKbvSfypfUHRRVDV5H8QCCYFp1wJSVCNe9g,1401
gpt_shell/config.py,sha256=2TWEoN9Ksnz35ynVHlEZTQT6wUEeXTdxvi3KS_ornfE,5555
gpt_shell/core.py,sha256=0HQ3Mq-QO3se0f-ZjBqVkNL4xsjxeWT-Mm7yTI4vdmg,730
gpt_shell/llm_client.py,sha256=NStv2CZ6QuFmloCs7kijL3dkruarHoDHSLJaulak1VU,10658
gpt_shell/llm_manager.py,sha256=PzN1YDXGhIXCM2N_3IqDAspY13zl8wJyaYs9w60jJA0,7483
gpt_shell/main.py,sha256=LKA3PBACxnIKhfDb_fWWwIBWeFz14E1MGg1z2unOcsM,12114
gpt_shell/py.typed,sha256=8ZJUsxZiuOy1oJeVhsTWQhTG_6pTVHVXk5hJL79ebTk,25
gpt_shell/utils.py,sha256=MlDwyIepO-AqL2wYpIs44C7p7nQ3lDl7ASP2CrIz5NE,7814
llmshell_cli-0.0.1.dist-info/licenses/LICENSE,sha256=guCN7x8oCcd_T2PuWYUCZZTNJ80ucNg9FnmoLPTm0h4,1078
llmshell_cli-0.0.1.dist-info/METADATA,sha256=Qln-9X9ixtdSijrUJzC2p4j2ev8QHKyTKCbeGSFFJIc,9993
llmshell_cli-0.0.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
llmshell_cli-0.0.1.dist-info/entry_points.txt,sha256=_8cjOs3vMpC6bPX5IcscnbkuzE0gOyWzVB8K4UHpa_8,48
llmshell_cli-0.0.1.dist-info/top_level.txt,sha256=LuICifXQ9goP9C4V-AOiiVe8GhdQJfKdRS5KPIircOg,10
llmshell_cli-0.0.1.dist-info/RECORD,,
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Naresh Reddy Gurijala

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1 @@
gpt_shell