@0xprathamesh/why-cli 1.1.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +436 -0
- package/dist/cli/index.js +101 -0
- package/dist/cli/index.js.map +1 -0
- package/dist/core/ai.js +195 -0
- package/dist/core/ai.js.map +1 -0
- package/dist/core/args.js +221 -0
- package/dist/core/args.js.map +1 -0
- package/dist/core/codebase-context.js +74 -0
- package/dist/core/codebase-context.js.map +1 -0
- package/dist/core/command-intelligence.js +196 -0
- package/dist/core/command-intelligence.js.map +1 -0
- package/dist/core/env.js +117 -0
- package/dist/core/env.js.map +1 -0
- package/dist/core/error-parser.js +172 -0
- package/dist/core/error-parser.js.map +1 -0
- package/dist/core/process.js +118 -0
- package/dist/core/process.js.map +1 -0
- package/dist/core/prompts.js +64 -0
- package/dist/core/prompts.js.map +1 -0
- package/dist/core/provider-health.js +71 -0
- package/dist/core/provider-health.js.map +1 -0
- package/dist/core/runner.js +266 -0
- package/dist/core/runner.js.map +1 -0
- package/dist/core/setup.js +82 -0
- package/dist/core/setup.js.map +1 -0
- package/dist/core/simulation.js +330 -0
- package/dist/core/simulation.js.map +1 -0
- package/dist/core/skills.js +57 -0
- package/dist/core/skills.js.map +1 -0
- package/dist/utils/logger.js +147 -0
- package/dist/utils/logger.js.map +1 -0
- package/package.json +43 -0
package/README.md
ADDED
@@ -0,0 +1,436 @@
# why

`why` helps you understand what a terminal command will do, what went wrong, and what to try next.

It can:

- run a command and explain failures
- simulate risky commands before they change anything
- stream live logs for long-running commands
- use OpenAI or Ollama for AI explanations
- read local code context when an error points into your project

## Install

Global install:

```bash
npm install -g @0xprathamesh/why-cli
```

Local development:

```bash
npm install
npm run build
npm link
```

After that, the `why` command is available from any folder.

Package name on npm:

```text
@0xprathamesh/why-cli
```

Repository name on GitHub:

```text
why
```

The install package is scoped, but the CLI command is still just:

```bash
why
```

## Docker

Build the image:

```bash
docker build -t why-cli .
```

Show help from the container:

```bash
docker run --rm why-cli --help
```

Run a command through `why` inside the container:

```bash
docker run --rm why-cli --simulate -- git push origin main
```

If you want to use your current project inside the container:

```bash
docker run --rm -it -v "$PWD:/workspace" -w /workspace why-cli -- npm run build
```

If you want AI config inside Docker, pass env values or an env file:

```bash
docker run --rm --env-file .env why-cli --doctor
```

## Quick Start

Run a normal command:

```bash
why -- npm run build
```

Simulate a risky command:

```bash
why --simulate -- git push origin main
```

Run a command for real even if `auto` mode would simulate it:

```bash
why --run -- git init
```

Check AI setup:

```bash
why --doctor
```

Interactive setup:

```bash
why --setup
```

## How It Works

For every command, `why-cli` goes through this pipeline:

`command -> classify -> risk -> simulate/run -> analyze -> explain`

That means:

- safe read-only commands usually run in `auto` mode
- risky state-changing commands are usually simulated in `auto` mode
- failures are summarized in plain language
- if AI is configured, `why-cli` adds an AI explanation on top

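The classify/decide step above can be sketched as follows. This is an illustrative assumption, not the package's actual API: the real classifier lives in `dist/core/command-intelligence.js`, and the function and set names here are invented for the sketch.

```javascript
// Minimal sketch of the auto-mode decision: run read-only commands,
// simulate state-changing ones. The READ_ONLY list is a stand-in for
// the real risk classification in dist/core/command-intelligence.js.
const READ_ONLY = new Set(["git status", "node -v", "npm -v", "ls", "cat"]);

function classify(command) {
  // Check the first two words, then the first word, against the safe list.
  const base = command.split(" ").slice(0, 2).join(" ");
  if (READ_ONLY.has(base) || READ_ONLY.has(command.split(" ")[0])) {
    return "read-only";
  }
  return "state-changing";
}

function decideAction(command, mode) {
  // --run and --simulate force the choice; auto decides from the class.
  if (mode === "run") return "run";
  if (mode === "simulate") return "simulate";
  return classify(command) === "read-only" ? "run" : "simulate";
}

console.log(decideAction("git status", "auto")); // "run"
console.log(decideAction("git push origin main", "auto")); // "simulate"
```
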
## Modes

`why-cli` has three execution modes.

### `auto`

Default mode.

- runs read-only commands
- simulates risky commands

Example:

```bash
why npm run build
why git init
why rm test.txt
```

### `run`

Runs the command for real.

```bash
why --run -- git init
why --run -- rm test.txt
```

### `simulate`

Never runs the real command. It only does a safe preview when supported.

```bash
why --simulate -- git add .
why --simulate -- npm install express
why --simulate -- mkdir demo-folder
```

## Important Behavior

If you run:

```bash
why git init
```

you may see simulation output instead of actual execution. That is expected in `auto` mode.

If you want the real command to run, use:

```bash
why --run -- git init
```

## AI Setup

You can configure AI once and stop passing keys or model flags every time.

Recommended:

```bash
why --setup
```

This writes config to:

```bash
~/.config/why-cli/.env
```

You can also create that file manually.

Example:

```env
WHY_PROVIDER=ollama

OPENAI_API_KEY=
OPENAI_MODEL=gpt-4.1
OPENAI_BASE_URL=https://api.openai.com/v1

OLLAMA_HOST=http://127.0.0.1:11434
OLLAMA_MODEL=gemma3:4b

WHY_SKILL=debug,fix
```

Supported config locations:

- `.env.local` in the current folder
- `.env` in the current folder
- `.env.local` in parent folders
- `.env` in parent folders
- `~/.config/why-cli/.env`
- `~/.env`

Check provider health:

```bash
why --doctor
```

## Providers

Supported providers:

- `auto`
- `openai`
- `ollama`
- `none`

Examples:

```bash
why --provider openai --explain -- npm test
why --provider ollama --model gemma3:4b -- npm run build
why --provider none -- npm start
```

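The `auto` provider defers to configuration: in the shipped `dist/cli/index.js`, when the flag is left at `auto`, a valid `WHY_PROVIDER` env value overrides it. That resolution boils down to:

```javascript
// Mirrors the provider resolution in dist/cli/index.js: a WHY_PROVIDER
// env value only applies when the CLI flag was left at "auto", and only
// when it names a known provider.
const VALID_PROVIDERS = ["auto", "none", "openai", "ollama"];

function resolveProvider(flagValue, envValue) {
  if (flagValue === "auto" && VALID_PROVIDERS.includes(envValue)) {
    return envValue;
  }
  return flagValue;
}

console.log(resolveProvider("auto", "ollama")); // "ollama"
console.log(resolveProvider("openai", "ollama")); // flag wins: "openai"
```
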
## Common Commands

Run commands:

```bash
why -- npm run build
why -- npm start
why -- python3 script.py
why node -v
why npm -v
why cat README.md
```

Simulate commands:

```bash
why --simulate -- git add .
why --simulate -- git commit -m "test"
why --simulate -- git push
why --simulate -- npm install
why --simulate -- npm install express
why --simulate -- npm publish
why --simulate -- rm test.txt
why --simulate -- mkdir test-folder
why --simulate -- touch demo.txt
```

Run risky commands for real:

```bash
why --run -- git init
why --run -- git commit -m "ship"
why --run -- npm publish
```

## Failure Examples

Missing package:

```bash
why --simulate -- npm install some-invalid-package-xyz
```

Missing file:

```bash
why --simulate -- rm non-existing-file
why --simulate -- git add non-existing-file
```

Existing directory:

```bash
mkdir existing-folder
why --simulate -- mkdir existing-folder
```

Wrong Git push target:

```bash
why --simulate -- git push origin wrong-branch
```

Bad build:

```bash
why --explain -- npm run build
```

## Long-Running Commands

`why-cli` can stream logs for servers, watchers, and dev processes.

Examples:

```bash
why -- npm start
why --stream -- npm run dev
why --no-stream -- npm run build
```

Use:

- `--stream` to force live logs
- `--no-stream` to wait until the command exits
- `Ctrl+C` to stop the child process

## Code-Aware Explanations

When an error points to files in your project, `why-cli` can read local code context and include it in the explanation.

That helps with cases like:

- TypeScript build errors
- import or module resolution failures
- stack traces with file paths
- runtime failures pointing into your app code

This is most useful when you run `why` inside the project that failed.

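Finding those file references usually means scanning error output for `path:line:column` patterns. A minimal sketch, assuming a regex-based approach (the real logic lives in `dist/core/error-parser.js` and `dist/core/codebase-context.js`, and `extractFileRefs` is an invented name):

```javascript
// Pull file:line references like "src/app.ts:12:5" or
// "/proj/index.js:3:1" out of error text, so the surrounding source
// can be read and fed to the AI prompt.
function extractFileRefs(errorText) {
  const pattern = /([\w./\\-]+\.(?:ts|tsx|js|jsx|mjs|cjs|py)):(\d+)(?::(\d+))?/g;
  const refs = [];
  let match;
  while ((match = pattern.exec(errorText)) !== null) {
    refs.push({ file: match[1], line: Number(match[2]) });
  }
  return refs;
}

console.log(extractFileRefs("Error at src/app.ts:12:5"));
// [ { file: 'src/app.ts', line: 12 } ]
```
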
## Skills

Skills shape how the AI explains the result.

Built-in skills:

- `debug`
- `teach`
- `fix`
- `tests`
- `security`
- `perf`

Examples:

```bash
why --skill debug --skill fix -- npm run build
why --provider ollama --skill teach -- python3 script.py
why --list-skills
```

You can also set default skills in your config:

```env
WHY_SKILL=debug,fix
```

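The `WHY_SKILL` value is a comma-separated list. This mirrors the parsing visible in the shipped `dist/cli/index.js`: split on commas, trim each entry, and drop empties.

```javascript
// Same parsing the CLI applies to WHY_SKILL when no --skill flags
// were passed: "debug, fix," -> ["debug", "fix"].
function parseSkills(value) {
  return value.split(",").map((v) => v.trim()).filter(Boolean);
}

console.log(parseSkills("debug, fix,")); // [ 'debug', 'fix' ]
```

So stray spaces and trailing commas in the config value are harmless.
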
## Flags

```text
-h, --help
-v, --version
-s, --silent
--json
--no-color
-r, --raw
-e, --explain
--mode <auto|run|simulate>
--simulate
--run
--execute
--provider <auto|none|openai|ollama>
--model <name>
--cwd <path>
--timeout <ms>
--skill <name>
--list-skills
--doctor
--setup
--stream
--no-stream
--api-key <key>
--api-key-env <name>
--openai-base-url <url>
--ollama-host <url>
```

## Notes

- Shell builtins like `cd` cannot change your parent shell session through `why-cli`.
- In `auto` mode, risky commands are often simulated instead of executed.
- If a command starts with flags that confuse parsing, use `--` before the command.

Example:

```bash
why -- node -v
why --simulate -- git status
```

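The reason `--` helps can be sketched as follows. This is illustrative, not the package's actual parser (`dist/core/args.js`): everything after the first bare `--` is taken verbatim as the command, so its flags never reach `why-cli`'s own flag parsing. Without `--`, the real parser does smarter flag detection than this sketch.

```javascript
// Split argv at the first bare "--": flags before it belong to why-cli,
// everything after it is the command to run, untouched.
function splitArgs(argv) {
  const index = argv.indexOf("--");
  if (index === -1) return { ownFlags: [], command: argv };
  return { ownFlags: argv.slice(0, index), command: argv.slice(index + 1) };
}

console.log(splitArgs(["--simulate", "--", "git", "status"]));
// { ownFlags: [ '--simulate' ], command: [ 'git', 'status' ] }
```
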
## CI/CD

GitHub Actions workflows are included.

CI workflow:

- file: `.github/workflows/ci.yml`
- runs on pushes to `main` and on pull requests
- tests Node.js `18` and `20`
- runs `npm ci`
- runs `npm run build`
- runs `npm pack --dry-run`

Release workflow:

- file: `.github/workflows/release.yml`
- runs on tags like `v1.1.0`
- builds the project
- publishes to npm

package/dist/cli/index.js
ADDED
@@ -0,0 +1,101 @@
#!/usr/bin/env node
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
    return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const package_json_1 = __importDefault(require("../../package.json"));
const args_1 = require("../core/args");
const env_1 = require("../core/env");
const provider_health_1 = require("../core/provider-health");
const runner_1 = require("../core/runner");
const setup_1 = require("../core/setup");
const skills_1 = require("../core/skills");
const logger_1 = require("../utils/logger");
async function main() {
    let options;
    try {
        options = (0, args_1.parseArgs)(process.argv.slice(2));
    }
    catch (error) {
        console.error(error.message);
        console.error("\nRun `why --help` to see available flags.");
        process.exit(1);
    }
    if (options.version) {
        console.log(package_json_1.default.version);
        return;
    }
    if (options.help) {
        console.log((0, args_1.renderHelp)(package_json_1.default.version));
        return;
    }
    if (options.listSkills) {
        console.log((0, skills_1.formatSkillList)());
        return;
    }
    const cwd = options.cwd ? options.cwd : process.cwd();
    const loadedEnv = (0, env_1.loadEnv)(cwd);
    if (options.provider === "auto") {
        const configuredProvider = (0, env_1.resolveEnvAlias)(["WHY_PROVIDER"]);
        if (configuredProvider === "openai" || configuredProvider === "ollama" || configuredProvider === "none" || configuredProvider === "auto") {
            options.provider = configuredProvider;
        }
    }
    if (!options.model) {
        if (options.provider === "openai") {
            options.model = (0, env_1.resolveEnvAlias)(["OPENAI_MODEL", "openaimodel"]);
        }
        else if (options.provider === "ollama") {
            options.model = (0, env_1.resolveEnvAlias)(["OLLAMA_MODEL", "ollamamodel"]);
        }
    }
    if (options.skills.length === 0) {
        const configuredSkills = (0, env_1.resolveEnvAlias)(["WHY_SKILL"]);
        if (configuredSkills) {
            options.skills = configuredSkills.split(",").map((value) => value.trim()).filter(Boolean);
        }
    }
    const logger = new logger_1.Logger({
        color: options.color,
        silent: options.silent,
        json: options.json,
    });
    if (options.setup) {
        await (0, setup_1.runSetup)(options, logger);
        return;
    }
    if (options.doctor) {
        const [openai, ollama] = await Promise.all([(0, provider_health_1.checkOpenAIHealth)(options), (0, provider_health_1.checkOllamaHealth)(options)]);
        if (options.json) {
            logger.printJson({
                envFile: loadedEnv.filePath ?? null,
                openai,
                ollama,
            });
            return;
        }
        logger.heading("why-cli doctor", loadedEnv.filePath ? `Loaded ${loadedEnv.filePath}` : "No .env file found");
        logger.list([
            `openai: ${openai.configured ? "configured" : "missing config"}, ${openai.reachable ? "reachable" : "not reachable"} - ${openai.message}`,
            `ollama: ${ollama.configured ? "configured" : "missing config"}, ${ollama.reachable ? "reachable" : "not reachable"} - ${ollama.message}`,
        ], "info");
        return;
    }
    if (!options.command) {
        console.error("Please provide a command. Run `why --help` for usage.");
        process.exit(1);
    }
    try {
        const session = await (0, runner_1.runCommand)(options, logger);
        session.envFilePath = loadedEnv.filePath;
        (0, runner_1.printCommandReport)(session, options, logger);
        process.exit(session.ok ? 0 : 1);
    }
    catch (error) {
        console.error(`why-cli failed to start the command: ${error.message}`);
        process.exit(1);
    }
}
void main();
//# sourceMappingURL=index.js.map
package/dist/cli/index.js.map
ADDED
@@ -0,0 +1 @@
{"version":3,"file":"index.js","sourceRoot":"","sources":["../../src/cli/index.ts"],"names":[],"mappings":";;;;;;AAEA,sEAA6C;AAC7C,uCAAqD;AACrD,qCAAuD;AACvD,6DAA+E;AAC/E,2CAAgE;AAChE,yCAAyC;AACzC,2CAAiD;AACjD,4CAAyC;AAEzC,KAAK,UAAU,IAAI;IACjB,IAAI,OAAO,CAAC;IAEZ,IAAI,CAAC;QACH,OAAO,GAAG,IAAA,gBAAS,EAAC,OAAO,CAAC,IAAI,CAAC,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC;IAC7C,CAAC;IAAC,OAAO,KAAK,EAAE,CAAC;QACf,OAAO,CAAC,KAAK,CAAE,KAAe,CAAC,OAAO,CAAC,CAAC;QACxC,OAAO,CAAC,KAAK,CAAC,4CAA4C,CAAC,CAAC;QAC5D,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;IAED,IAAI,OAAO,CAAC,OAAO,EAAE,CAAC;QACpB,OAAO,CAAC,GAAG,CAAC,sBAAW,CAAC,OAAO,CAAC,CAAC;QACjC,OAAO;IACT,CAAC;IAED,IAAI,OAAO,CAAC,IAAI,EAAE,CAAC;QACjB,OAAO,CAAC,GAAG,CAAC,IAAA,iBAAU,EAAC,sBAAW,CAAC,OAAO,CAAC,CAAC,CAAC;QAC7C,OAAO;IACT,CAAC;IAED,IAAI,OAAO,CAAC,UAAU,EAAE,CAAC;QACvB,OAAO,CAAC,GAAG,CAAC,IAAA,wBAAe,GAAE,CAAC,CAAC;QAC/B,OAAO;IACT,CAAC;IAED,MAAM,GAAG,GAAG,OAAO,CAAC,GAAG,CAAC,CAAC,CAAC,OAAO,CAAC,GAAG,CAAC,CAAC,CAAC,OAAO,CAAC,GAAG,EAAE,CAAC;IACtD,MAAM,SAAS,GAAG,IAAA,aAAO,EAAC,GAAG,CAAC,CAAC;IAE/B,IAAI,OAAO,CAAC,QAAQ,KAAK,MAAM,EAAE,CAAC;QAChC,MAAM,kBAAkB,GAAG,IAAA,qBAAe,EAAC,CAAC,cAAc,CAAC,CAAC,CAAC;QAC7D,IAAI,kBAAkB,KAAK,QAAQ,IAAI,kBAAkB,KAAK,QAAQ,IAAI,kBAAkB,KAAK,MAAM,IAAI,kBAAkB,KAAK,MAAM,EAAE,CAAC;YACzI,OAAO,CAAC,QAAQ,GAAG,kBAAkB,CAAC;QACxC,CAAC;IACH,CAAC;IAED,IAAI,CAAC,OAAO,CAAC,KAAK,EAAE,CAAC;QACnB,IAAI,OAAO,CAAC,QAAQ,KAAK,QAAQ,EAAE,CAAC;YAClC,OAAO,CAAC,KAAK,GAAG,IAAA,qBAAe,EAAC,CAAC,cAAc,EAAE,aAAa,CAAC,CAAC,CAAC;QACnE,CAAC;aAAM,IAAI,OAAO,CAAC,QAAQ,KAAK,QAAQ,EAAE,CAAC;YACzC,OAAO,CAAC,KAAK,GAAG,IAAA,qBAAe,EAAC,CAAC,cAAc,EAAE,aAAa,CAAC,CAAC,CAAC;QACnE,CAAC;IACH,CAAC;IAED,IAAI,OAAO,CAAC,MAAM,CAAC,MAAM,KAAK,CAAC,EAAE,CAAC;QAChC,MAAM,gBAAgB,GAAG,IAAA,qBAAe,EAAC,CAAC,WAAW,CAAC,CAAC,CAAC;QACxD,IAAI,gBAAgB,EAAE,CAAC;YACrB,OAAO,CAAC,MAAM,GAAG,gBAAgB,CAAC,KAAK,CAAC,GAAG,CAAC,CAAC,GAAG,CAAC,CAAC,KAAK,EAAE,EAAE,CAAC,KAAK,CAAC,IAAI,EAAE,CAAC,CAAC,MAAM,CAAC,OAAO,CAAC,CAAC;QAC5F,CAAC;IACH,CAAC;IAED,MAAM,MAAM,GAAG,IAAI,eAAM,CAAC;QACxB,KAAK,EAAE,OA
AO,CAAC,KAAK;QACpB,MAAM,EAAE,OAAO,CAAC,MAAM;QACtB,IAAI,EAAE,OAAO,CAAC,IAAI;KACnB,CAAC,CAAC;IAEH,IAAI,OAAO,CAAC,KAAK,EAAE,CAAC;QAClB,MAAM,IAAA,gBAAQ,EAAC,OAAO,EAAE,MAAM,CAAC,CAAC;QAChC,OAAO;IACT,CAAC;IAED,IAAI,OAAO,CAAC,MAAM,EAAE,CAAC;QACnB,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,GAAG,MAAM,OAAO,CAAC,GAAG,CAAC,CAAC,IAAA,mCAAiB,EAAC,OAAO,CAAC,EAAE,IAAA,mCAAiB,EAAC,OAAO,CAAC,CAAC,CAAC,CAAC;QACrG,IAAI,OAAO,CAAC,IAAI,EAAE,CAAC;YACjB,MAAM,CAAC,SAAS,CAAC;gBACf,OAAO,EAAE,SAAS,CAAC,QAAQ,IAAI,IAAI;gBACnC,MAAM;gBACN,MAAM;aACP,CAAC,CAAC;YACH,OAAO;QACT,CAAC;QAED,MAAM,CAAC,OAAO,CAAC,gBAAgB,EAAE,SAAS,CAAC,QAAQ,CAAC,CAAC,CAAC,UAAU,SAAS,CAAC,QAAQ,EAAE,CAAC,CAAC,CAAC,oBAAoB,CAAC,CAAC;QAC7G,MAAM,CAAC,IAAI,CACT;YACE,WAAW,MAAM,CAAC,UAAU,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC,CAAC,gBAAgB,KAAK,MAAM,CAAC,SAAS,CAAC,CAAC,CAAC,WAAW,CAAC,CAAC,CAAC,eAAe,MAAM,MAAM,CAAC,OAAO,EAAE;YACzI,WAAW,MAAM,CAAC,UAAU,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC,CAAC,gBAAgB,KAAK,MAAM,CAAC,SAAS,CAAC,CAAC,CAAC,WAAW,CAAC,CAAC,CAAC,eAAe,MAAM,MAAM,CAAC,OAAO,EAAE;SAC1I,EACD,MAAM,CACP,CAAC;QACF,OAAO;IACT,CAAC;IAED,IAAI,CAAC,OAAO,CAAC,OAAO,EAAE,CAAC;QACrB,OAAO,CAAC,KAAK,CAAC,uDAAuD,CAAC,CAAC;QACvE,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;IAED,IAAI,CAAC;QACH,MAAM,OAAO,GAAG,MAAM,IAAA,mBAAU,EAAC,OAAO,EAAE,MAAM,CAAC,CAAC;QAClD,OAAO,CAAC,WAAW,GAAG,SAAS,CAAC,QAAQ,CAAC;QACzC,IAAA,2BAAkB,EAAC,OAAO,EAAE,OAAO,EAAE,MAAM,CAAC,CAAC;QAC7C,OAAO,CAAC,IAAI,CAAC,OAAO,CAAC,EAAE,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC;IACnC,CAAC;IAAC,OAAO,KAAK,EAAE,CAAC;QACf,OAAO,CAAC,KAAK,CAAC,wCAAyC,KAAe,CAAC,OAAO,EAAE,CAAC,CAAC;QAClF,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;AACH,CAAC;AAED,KAAK,IAAI,EAAE,CAAC"}