tf-ai 0.1.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +104 -0
- package/dist/index.d.ts +1 -0
- package/dist/index.js +641 -0
- package/dist/index.js.map +1 -0
- package/package.json +56 -0
package/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 AspireOne

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md
ADDED
@@ -0,0 +1,104 @@
# tf-ai 🤖

> *The AI-powered successor to `thefuck` — explains errors, fixes commands, and troubleshoots your terminal.*

**tf-ai** is a smart CLI tool that uses Large Language Models (LLMs) to analyze your terminal commands and their output. It understands your environment (Shell, OS, Project Type) to provide tailored explanations and fixes.

Run a command, fail, type `fuck`, and let AI handle the rest.

## ✨ Features

- **🧠 Context-Aware Analysis**: Understands not just *what* failed, but *where* (PowerShell vs Bash, Node.js vs Python project, etc.).
- **📝 Plain English Explanations**: Deciphers cryptic error codes and log spew into clear, actionable insights.
- **⚡ Real-Time Streaming**: Watch the explanation appear instantly — no waiting for the full response.
- **🛠️ Smart Suggestions**: Offers corrected commands that match your specific shell syntax.
- **🔌 Provider Agnostic**: Works with Anthropic (Claude), OpenAI (GPT-4), Google (Gemini), or any [Vercel AI SDK](https://sdk.vercel.ai) compatible provider.

## 📦 Installation

```bash
# Install globally via npm
npm install -g tf-ai

# Or run directly
npx tf-ai --help
```

## ⚙️ Configuration

1. **Set your API Key** (Required)

   ```bash
   export ANTHROPIC_API_KEY="sk-ant-..."
   # OR
   export OPENAI_API_KEY="sk-..."
   ```

2. **Advanced Config** (Optional)

   Create `~/.tf-ai/config.json`:
   ```json
   {
     "model": "claude-3-5-sonnet-latest",
     "confirmBeforeRun": true,
     "verbose": false
   }
   ```
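
Besides `model`, `confirmBeforeRun`, and `verbose`, the config loader bundled in `dist/index.js` (below) also accepts `provider` ("anthropic", "openai", or "google") and `apiKey` fields, plus the `TF_AI_MODEL` and `TF_AI_API_KEY` environment variables. As a sketch based on those fields, a config that targets OpenAI instead of the default Anthropic model could look like:

```json
{
  "provider": "openai",
  "model": "gpt-4o",
  "apiKey": "sk-...",
  "confirmBeforeRun": true
}
```
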
## 🚀 Usage

### 1. Setup Shell Integration (Recommended)

To use the `fuck` alias, run the setup command for your shell:

```bash
tf-ai --setup
```

Follow the instructions to add the function to your shell profile (PowerShell `$PROFILE`, `.bashrc`, etc.).

### 2. The "Fuck" Workflow

```powershell
# 1. Mess up a command
PS> git pussh origin main
git: 'pussh' is not a git command.

# 2. Summon the AI
PS> fuck

# 3. Get help instantly
🤖 tf-ai
It looks like you made a typo. 'pussh' is not a valid git command.

💡 Suggested command:
  ➜ git push origin main

[Enter] Run  [e] Edit  [Esc] Cancel
```

### 3. Direct Usage

You can also use it manually to analyze specific errors:

```bash
# Analyze a specific failure
tf-ai --command "npm install" --output "EBADENGINE Unsupported engine"

# Just get an explanation without fixes
tf-ai --command "ls -la" --explain
```

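The bundled CLI also exposes `--model` and `--yes` flags; as a sketch of combining them with direct usage, the following should analyze a failure with a specific model and run the suggested fix without the confirmation prompt:

```bash
tf-ai --command "git pussh origin main" \
      --output "git: 'pussh' is not a git command." \
      --model gpt-4o --yes
```
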
## 🛠️ Development

```bash
git clone https://github.com/yourusername/thefuckai.git
cd thefuckai
pnpm install

# Run locally
pnpm dev -- --command "echo test" --output "error"
```

## 📄 License

MIT
package/dist/index.d.ts
ADDED
@@ -0,0 +1 @@
#!/usr/bin/env node
package/dist/index.js
ADDED
@@ -0,0 +1,641 @@
#!/usr/bin/env node
#!/usr/bin/env node

// src/index.ts
import { Command } from "commander";

// src/config.ts
import { existsSync, readFileSync } from "fs";
import { homedir } from "os";
import { join } from "path";
var CONFIG_DIR = join(homedir(), ".tf-ai");
var CONFIG_FILE = join(CONFIG_DIR, "config.json");
function loadConfigFile() {
  if (!existsSync(CONFIG_FILE)) {
    return {};
  }
  try {
    const content = readFileSync(CONFIG_FILE, "utf-8");
    return JSON.parse(content);
  } catch {
    return {};
  }
}
function detectProvider(model) {
  if (model.startsWith("anthropic/") || model.includes("claude")) {
    return "anthropic";
  }
  if (model.startsWith("openai/") || model.includes("gpt")) {
    return "openai";
  }
  if (model.startsWith("google/") || model.includes("gemini")) {
    return "google";
  }
  return "anthropic";
}
function getApiKey(provider, configKey) {
  if (configKey) {
    return configKey;
  }
  const envVars = {
    anthropic: "ANTHROPIC_API_KEY",
    openai: "OPENAI_API_KEY",
    google: "GOOGLE_API_KEY"
  };
  const envVarName = envVars[provider];
  const envKey = process.env[envVarName];
  if (envKey) {
    return envKey;
  }
  return process.env["TF_AI_API_KEY"] ?? "";
}
function loadConfig(overrides) {
  const file = loadConfigFile();
  const model = overrides?.model ?? process.env["TF_AI_MODEL"] ?? file.model ?? "claude-sonnet-4-5-20250929";
  const provider = overrides?.provider ?? file.provider ?? detectProvider(model);
  const apiKey = getApiKey(provider, file.apiKey);
  return {
    model,
    provider,
    apiKey,
    confirmBeforeRun: overrides?.confirmBeforeRun ?? file.confirmBeforeRun ?? true,
    verbose: overrides?.verbose ?? file.verbose ?? false
  };
}
function validateConfig(config) {
  if (!config.apiKey) {
    const envVar = {
      anthropic: "ANTHROPIC_API_KEY",
      openai: "OPENAI_API_KEY",
      google: "GOOGLE_API_KEY"
    }[config.provider];
    return {
      valid: false,
      error: `No API key found. Set ${envVar} environment variable or add "apiKey" to ${CONFIG_FILE}`
    };
  }
  return { valid: true };
}

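// Config resolution summary (from loadConfig/getApiKey above): the model comes
// from the --model flag, then TF_AI_MODEL, then ~/.tf-ai/config.json, then the
// default "claude-sonnet-4-5-20250929"; the API key comes from the config file's
// "apiKey" first, then the provider-specific env var, then TF_AI_API_KEY.
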
// src/ai/client.ts
import { streamText, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";
import { google } from "@ai-sdk/google";

// src/ai/prompts.ts
var SYSTEM_PROMPT = `You are an expert terminal assistant that helps developers understand command output and troubleshoot issues.

Given a command and its output, analyze the situation and provide helpful insight. The output could be:
- An error message (syntax error, permission denied, command not found, etc.)
- A warning or unexpected behavior
- A build failure, test failure, or deployment issue
- API/network errors
- Confusing or unclear output that needs explanation
- Or even successful output that the user wants to understand better

Your response should:
1. **Explain** what happened - interpret the output in plain, helpful terms
2. **Diagnose** the root cause if there's an issue.
3. **Suggest a command** (optional) - only if there's a clear action the user can take

Guidelines:
- Be concise but thorough (if the problem is complex/deep, you have more leeway)
- If you suggest a command, explain why / what it will do
- If you're unsure or lacking enough context, say so
- Tailor suggestions to the user's shell and OS`;
function formatUserMessage(command, output, context) {
  const projectInfo = context.projectType ? `
- Project Type: ${context.projectType}` : "";
  return `Command: ${command}

Output:
${output || "(no output)"}

Environment:
- Shell: ${context.shell}
- OS: ${context.os}
- Working Directory: ${context.cwd}${projectInfo}

Analyze this command and its output. Explain what happened and suggest a follow-up action if appropriate.`;
}

// src/ai/schema.ts
import { z } from "zod";
var analysisResponseSchema = z.object({
  explanation: z.string().describe(
    "Clear, helpful explanation of what happened. Interpret the output, diagnose issues, and provide context."
  ),
  suggestedCommand: z.string().nullable().describe(
    "A follow-up command the user could run, if applicable. Could be a fix, a next step, or a diagnostic command. Pass null if no command is suggested"
  )
});

// src/ai/client.ts
function getModel(config) {
  const modelName = config.model.includes("/") ? config.model.split("/")[1] : config.model;
  switch (config.provider) {
    case "anthropic":
      return anthropic(modelName);
    case "openai":
      return openai(modelName);
    case "google":
      return google(modelName);
    default:
      return anthropic(modelName);
  }
}
async function analyzeCommandStream(command, output, config, environment, callbacks = {}) {
  const model = getModel(config);
  const { partialOutputStream } = streamText({
    model,
    system: SYSTEM_PROMPT,
    prompt: formatUserMessage(command, output, environment),
    output: Output.object({ schema: analysisResponseSchema })
  });
  let lastExplanation = "";
  let finalResponse = {};
  try {
    for await (const partial of partialOutputStream) {
      finalResponse = partial;
      if (partial.explanation && partial.explanation !== lastExplanation) {
        lastExplanation = partial.explanation;
        callbacks.onExplanationUpdate?.(partial.explanation);
      }
    }
  } catch (error) {
    callbacks.onError?.(error);
    throw error;
  }
  const result = {
    explanation: finalResponse.explanation ?? "",
    suggestion: finalResponse.suggestedCommand ? {
      command: finalResponse.suggestedCommand,
      explanation: ""
      // Explanation is in the main field
    } : void 0
  };
  callbacks.onComplete?.(result);
  return result;
}

// src/shell/powershell.ts
import { exec } from "child_process";
import { promisify } from "util";
var execAsync = promisify(exec);
var PowerShellAdapter = class {
  async getLastCommand() {
    try {
      const { stdout } = await execAsync(
        'powershell -Command "(Get-History -Count 1).CommandLine"',
        { encoding: "utf-8" }
      );
      const command = stdout.trim();
      if (!command) {
        return null;
      }
      return {
        command,
        output: "(Output not captured - use the 'fuck' shell function for full functionality)"
      };
    } catch {
      return null;
    }
  }
  async executeCommand(command) {
    try {
      const { stdout, stderr } = await execAsync(
        `powershell -Command "${command.replace(/"/g, '\\"')}"`,
        { encoding: "utf-8" }
      );
      return {
        output: stdout + stderr,
        exitCode: 0
      };
    } catch (error) {
      const execError = error;
      return {
        output: (execError.stdout ?? "") + (execError.stderr ?? ""),
        exitCode: execError.code ?? 1
      };
    }
  }
  getSetupInstructions() {
    return `
Add this to your PowerShell profile ($PROFILE):

function fuck {
  $lastCmd = (Get-History -Count 1).CommandLine
  if (-not $lastCmd) {
    Write-Host "No command in history" -ForegroundColor Red
    return
  }

  # Re-execute to capture output
  $output = try {
    Invoke-Expression $lastCmd 2>&1 | Out-String
  } catch {
    $_.Exception.Message
  }

  # Call tf-ai with captured command and output
  tf-ai --command $lastCmd --output $output
}

Then reload your profile:
. $PROFILE
`.trim();
  }
};
var powershell = new PowerShellAdapter();

// src/shell/context.ts
import { existsSync as existsSync2 } from "fs";
import { join as join2 } from "path";
import { platform } from "os";
function detectShell() {
  if (process.env["PSModulePath"]) {
    return "powershell";
  }
  const shell = process.env["SHELL"];
  if (shell) {
    if (shell.includes("bash")) return "bash";
    if (shell.includes("zsh")) return "zsh";
    if (shell.includes("fish")) return "fish";
  }
  if (process.env["ComSpec"]?.includes("cmd.exe")) {
    return "cmd";
  }
  return "unknown";
}
function detectOS() {
  const os = platform();
  switch (os) {
    case "win32":
      return "windows";
    case "darwin":
      return "macos";
    case "linux":
      return "linux";
    default:
      return "linux";
  }
}
function detectProjectType(cwd) {
  if (existsSync2(join2(cwd, ".git"))) {
    return "git";
  }
  if (existsSync2(join2(cwd, "package.json"))) {
    return "nodejs";
  }
  if (existsSync2(join2(cwd, "requirements.txt")) || existsSync2(join2(cwd, "pyproject.toml"))) {
    return "python";
  }
  if (existsSync2(join2(cwd, "Dockerfile"))) {
    return "docker";
  }
  return void 0;
}
function detectEnvironment() {
  const cwd = process.cwd();
  const projectType = detectProjectType(cwd);
  return {
    shell: detectShell(),
    os: detectOS(),
    cwd,
    ...projectType ? { projectType } : {}
  };
}

// src/ui/output.ts
import "chalk";

// src/ui/theme.ts
import chalk from "chalk";
var COLORS = {
  stone: "#F5F2E8",
  // Warmest White / Base
  warmGray: "#9E9A95",
  // Muted text / Labels
  text: "#E3DCD2",
  // Main reading text (Warm Off-White)
  coral: "#D97757",
  // Primary Accent (Anthropic-like orange/coral) - Highlights
  maple: "#C8A682",
  // Secondary Accent - Interactions
  olive: "#8E9C6D",
  // Success
  sienna: "#C1554D",
  // Error
  charcoal: "#2D2B29"
  // Dark background text if needed
};
var theme = {
  // --- Global Roles ---
  primary: chalk.hex(COLORS.coral),
  secondary: chalk.hex(COLORS.maple),
  success: chalk.hex(COLORS.olive),
  error: chalk.hex(COLORS.sienna),
  warning: chalk.hex(COLORS.maple),
  // Maple works well for warning too
  muted: chalk.hex(COLORS.warmGray),
  text: chalk.hex(COLORS.text),
  // --- Component Specifics ---
  // Header / Branding
  header: chalk.hex(COLORS.coral).bold,
  // The suggested command to run
  command: chalk.hex(COLORS.stone).bold,
  // Bright, distinct
  // The AI's explanation text
  explanation: chalk.hex(COLORS.text),
  // "Suggested fix:" label - subtle yet clear
  suggestionLabel: chalk.hex(COLORS.maple).bold,
  // Confidence levels
  confidenceHigh: chalk.hex(COLORS.olive),
  confidenceMedium: chalk.hex(COLORS.maple),
  confidenceLow: chalk.hex(COLORS.sienna),
  // --- Animation Colors ---
  // Glitch characters
  glitch1: chalk.hex(COLORS.coral),
  glitch2: chalk.hex(COLORS.maple),
  // Base text for "Analyzing..."
  glitchBase: chalk.hex(COLORS.warmGray)
};

// src/ui/output.ts
var ICONS = {
  robot: "\u{1F916}",
  lightbulb: "\u{1F4A1}",
  command: "\u279C",
  warning: "\u26A0\uFE0F",
  error: "\u274C",
  success: "\u2713",
  thinking: "\u{1F9E0}"
};
var lastPrintedLength = 0;
function printStreamingExplanation(fullText) {
  if (!fullText) return;
  const newText = fullText.slice(lastPrintedLength);
  if (newText) {
    process.stdout.write(theme.explanation(newText));
    lastPrintedLength = fullText.length;
  }
}
function finalizeStreaming() {
  console.log();
  lastPrintedLength = 0;
}
function printSuggestion(suggestion) {
  console.log(theme.suggestionLabel(`${ICONS.lightbulb} Suggested fix:`));
  console.log();
  console.log(theme.command(` ${ICONS.command} ${suggestion.command} `));
  console.log();
}
function printError(message) {
  console.error(theme.error(`${ICONS.error} ${message}`));
}
function printWarning(message) {
  console.log(theme.warning(`${ICONS.warning} ${message}`));
}
function printCommand(label, command) {
  console.log(theme.muted(`${label}: `) + theme.text(command));
}
function printVerbose(message, verbose) {
  if (verbose) {
    console.log(theme.muted(`[debug] ${message}`));
  }
}

// src/ui/confirm.ts
import * as readline from "readline";
import "chalk";
import { exec as exec2 } from "child_process";
import { promisify as promisify2 } from "util";
var execAsync2 = promisify2(exec2);
async function confirmCommand(command) {
  return new Promise((resolve) => {
    console.log(theme.muted("Press:"));
    console.log(
      theme.secondary(" [Enter]") + theme.muted(" Run command")
    );
    console.log(
      theme.secondary(" [e] ") + theme.muted(" Edit command")
    );
    console.log(
      theme.secondary(" [Esc/q]") + theme.muted(" Cancel")
    );
    console.log();
    if (process.stdin.isTTY) {
      process.stdin.setRawMode(true);
    }
    process.stdin.resume();
    process.stdin.setEncoding("utf8");
    const cleanup = () => {
      if (process.stdin.isTTY) {
        process.stdin.setRawMode(false);
      }
      process.stdin.pause();
      process.stdin.removeAllListeners("data");
    };
    process.stdin.once("data", (key) => {
      cleanup();
      if (key === "\r" || key === "\n") {
        resolve({ action: "run" });
        return;
      }
      if (key === "\x1B" || key === "q" || key === "\x03") {
        console.log(theme.warning("Cancelled"));
        resolve({ action: "cancel" });
        return;
      }
      if (key === "e" || key === "E") {
        resolve({ action: "edit" });
        return;
      }
      console.log(theme.warning("Cancelled"));
      resolve({ action: "cancel" });
    });
  });
}
async function editCommand(originalCommand) {
  return new Promise((resolve) => {
    const rl = readline.createInterface({
      input: process.stdin,
      output: process.stdout
    });
    console.log(theme.secondary("\nEdit command (press Enter to confirm, Ctrl+C to cancel):"));
    rl.question(theme.muted("> "), (answer) => {
      rl.close();
      const command = answer.trim();
      resolve(command || originalCommand);
    });
    rl.on("close", () => {
      if (!rl.terminal) {
        resolve(null);
      }
    });
  });
}
async function runCommand(command) {
  console.log(theme.secondary(`
Running: ${command}
`));
  console.log(theme.muted("\u2500".repeat(40)));
  try {
    const { stdout, stderr } = await execAsync2(
      `powershell -Command "${command.replace(/"/g, '\\"')}"`,
      { encoding: "utf-8" }
    );
    if (stdout) console.log(stdout);
    if (stderr) console.error(theme.error(stderr));
    console.log(theme.muted("\u2500".repeat(40)));
    console.log(theme.success("\u2713 Command completed"));
  } catch (error) {
    const execError = error;
    if (execError.stdout) console.log(execError.stdout);
    if (execError.stderr) console.error(theme.error(execError.stderr));
    console.log(theme.muted("\u2500".repeat(40)));
    console.log(theme.error(`\u2717 Command failed with exit code ${execError.code ?? 1}`));
  }
}

// src/ui/animation.ts
function createHackingAnimation(text) {
  const glitchChars = ["#", "$", "%", "*", "?", "~", "^", "+", "=", ":"];
  let interval = null;
  let isRunning = false;
  const originalText = text;
  const textArray = text.split("");
  function glitchText() {
    return textArray.map((char, i) => {
      if (char === " ") return " ";
      if (Math.random() < 0.05) {
        return Math.random() > 0.5 ? theme.glitch1(glitchChars[Math.floor(Math.random() * glitchChars.length)]) : theme.glitch2(glitchChars[Math.floor(Math.random() * glitchChars.length)]);
      }
      return theme.glitchBase(char);
    }).join("");
  }
  function render() {
    if (!isRunning) return;
    process.stdout.write("\r" + glitchText());
  }
  return {
    start() {
      if (isRunning) return;
      isRunning = true;
      process.stdout.write(theme.glitchBase(originalText));
      interval = setInterval(render, 120);
    },
    stop() {
      if (!isRunning) return;
      isRunning = false;
      if (interval) {
        clearInterval(interval);
        interval = null;
      }
      process.stdout.write("\r" + " ".repeat(originalText.length) + "\r");
    }
  };
}

// src/index.ts
import "chalk";
var program = new Command();
program.name("tf-ai").description("AI-powered terminal assistant that explains errors and suggests fixes").version("1.0.0");
program.option("-c, --command <command>", "The command that failed").option("-o, --output <output>", "The output/error from the failed command").option("-e, --explain", "Only explain the error, don't suggest commands").option("-m, --model <model>", "Model to use (e.g., claude-sonnet-4-20250514, gpt-4o)").option("-y, --yes", "Auto-run the suggested command without confirmation").option("-v, --verbose", "Show verbose output").option("--setup", "Show shell setup instructions").action(async (options) => {
  if (options.setup) {
    console.log(powershell.getSetupInstructions());
    return;
  }
  const configOverrides = {};
  if (options.model !== void 0) configOverrides.model = options.model;
  if (options.verbose !== void 0) configOverrides.verbose = options.verbose;
  if (options.yes !== void 0) configOverrides.confirmBeforeRun = !options.yes;
  const config = loadConfig(configOverrides);
  printVerbose(`Config: ${JSON.stringify(config, null, 2)}`, config.verbose);
  const validation = validateConfig(config);
  if (!validation.valid) {
    printError(validation.error);
    console.log(`
Create a config file at ${CONFIG_FILE} or set the appropriate environment variable.`);
    console.log("\nExample config.json:");
    console.log(JSON.stringify({
      model: "claude-sonnet-4-20250514",
      apiKey: "your-api-key-here"
    }, null, 2));
    process.exit(1);
  }
  let command = options.command;
  let output = options.output ?? "";
  if (!command) {
    printVerbose("No command provided, checking shell history...", config.verbose);
    const lastCmd = await powershell.getLastCommand();
    if (lastCmd) {
      command = lastCmd.command;
      output = lastCmd.output;
      printCommand("Last command", command);
    } else {
      printWarning("No command provided and couldn't get last command from history.");
      console.log("\nUsage:");
      console.log(" tf-ai --command 'git pussh' --output 'error message'");
      console.log("\nOr set up the shell integration:");
      console.log(" tf-ai --setup");
      process.exit(1);
    }
  }
  if (output && config.verbose) {
    console.log("\nOutput:");
    console.log(output);
  }
  console.log();
  const animation = createHackingAnimation("Analyzing what happened...");
  animation.start();
  const envContext = detectEnvironment();
  if (config.verbose) {
    printVerbose(`Environment: ${JSON.stringify(envContext, null, 2)}`, true);
  }
  try {
    let isFirstChunk = true;
    const result = await analyzeCommandStream(command, output, config, envContext, {
      onExplanationUpdate: (text) => {
        if (isFirstChunk) {
          animation.stop();
          isFirstChunk = false;
        }
        printStreamingExplanation(text);
      }
    });
    if (isFirstChunk) {
      animation.stop();
    }
    finalizeStreaming();
    console.log();
    if (result.suggestion && !options.explain) {
      printSuggestion(result.suggestion);
      if (config.confirmBeforeRun) {
        const confirmation = await confirmCommand(result.suggestion.command);
        if (confirmation.action === "run") {
          await runCommand(result.suggestion.command);
        } else if (confirmation.action === "edit") {
          const edited = await editCommand(result.suggestion.command);
          if (edited) {
            await runCommand(edited);
          }
        }
      } else {
        await runCommand(result.suggestion.command);
      }
    }
  } catch (error) {
    animation.stop();
    finalizeStreaming();
    const err = error;
    printError(`Failed to analyze command: ${err.message}`);
    if (config.verbose && err.stack) {
      console.log(err.stack);
    }
    process.exit(1);
  }
});
program.parse();
//# sourceMappingURL=index.js.map
package/dist/index.js.map
ADDED
@@ -0,0 +1 @@
{"version":3,"sources":["../src/index.ts","../src/config.ts","../src/ai/client.ts","../src/ai/prompts.ts","../src/ai/schema.ts","../src/shell/powershell.ts","../src/shell/context.ts","../src/ui/output.ts","../src/ui/theme.ts","../src/ui/confirm.ts","../src/ui/animation.ts"],"sourcesContent":["#!/usr/bin/env node\r\nimport { Command } from \"commander\";\r\nimport { loadConfig, validateConfig, CONFIG_FILE } from \"./config.js\";\r\nimport { analyzeCommandStream } from \"./ai/index.js\";\r\nimport { powershell, detectEnvironment } from \"./shell/index.js\";\r\nimport {\r\n printHeader,\r\n printSuggestion,\r\n printError,\r\n printWarning,\r\n printCommand,\r\n printVerbose,\r\n printStreamingExplanation,\r\n finalizeStreaming,\r\n confirmCommand,\r\n editCommand,\r\n runCommand,\r\n createHackingAnimation,\r\n} from \"./ui/index.js\";\r\nimport chalk from \"chalk\";\r\n\r\nconst program = new Command();\r\n\r\nprogram\r\n .name(\"tf-ai\")\r\n .description(\"AI-powered terminal assistant that explains errors and suggests fixes\")\r\n .version(\"1.0.0\");\r\n\r\nprogram\r\n .option(\"-c, --command <command>\", \"The command that failed\")\r\n .option(\"-o, --output <output>\", \"The output/error from the failed command\")\r\n .option(\"-e, --explain\", \"Only explain the error, don't suggest commands\")\r\n .option(\"-m, --model <model>\", \"Model to use (e.g., claude-sonnet-4-20250514, gpt-4o)\")\r\n .option(\"-y, --yes\", \"Auto-run the suggested command without confirmation\")\r\n .option(\"-v, --verbose\", \"Show verbose output\")\r\n .option(\"--setup\", \"Show shell setup instructions\")\r\n .action(async (options: {\r\n command?: string;\r\n output?: string;\r\n explain?: boolean;\r\n model?: string;\r\n yes?: boolean;\r\n verbose?: boolean;\r\n setup?: boolean;\r\n }) => {\r\n // Show setup instructions\r\n if (options.setup) {\r\n console.log(powershell.getSetupInstructions());\r\n return;\r\n }\r\n\r\n // Load and validate config\r\n const configOverrides: Parameters<typeof loadConfig>[0] = {};\r\n if (options.model !== undefined) configOverrides.model = options.model;\r\n if (options.verbose !== undefined) configOverrides.verbose = options.verbose;\r\n if (options.yes !== undefined) configOverrides.confirmBeforeRun = !options.yes;\r\n \r\n const config = loadConfig(configOverrides);\r\n\r\n printVerbose(`Config: ${JSON.stringify(config, null, 2)}`, config.verbose);\r\n\r\n const validation = validateConfig(config);\r\n if (!validation.valid) {\r\n printError(validation.error!);\r\n console.log(`\\nCreate a config file at ${CONFIG_FILE} or set the appropriate environment variable.`);\r\n console.log('\\nExample config.json:');\r\n console.log(JSON.stringify({\r\n model: \"claude-sonnet-4-20250514\",\r\n apiKey: \"your-api-key-here\",\r\n }, null, 2));\r\n process.exit(1);\r\n }\r\n\r\n // Get command to analyze\r\n let command = options.command;\r\n let output = options.output ?? 
\"\";\r\n\r\n if (!command) {\r\n printVerbose(\"No command provided, checking shell history...\", config.verbose);\r\n const lastCmd = await powershell.getLastCommand();\r\n \r\n if (lastCmd) {\r\n command = lastCmd.command;\r\n output = lastCmd.output;\r\n printCommand(\"Last command\", command);\r\n } else {\r\n printWarning(\"No command provided and couldn't get last command from history.\");\r\n console.log(\"\\nUsage:\");\r\n console.log(\" tf-ai --command 'git pussh' --output 'error message'\");\r\n console.log(\"\\nOr set up the shell integration:\");\r\n console.log(\" tf-ai --setup\");\r\n process.exit(1);\r\n }\r\n } \r\n // Removed \"Analyzing: command\" print\r\n\r\n if (output && config.verbose) {\r\n console.log(\"\\nOutput:\");\r\n console.log(output);\r\n }\r\n\r\n console.log(); // Spacing\r\n const animation = createHackingAnimation(\"Analyzing what happened...\");\r\n animation.start();\r\n \r\n // Detect environment context\r\n const envContext = detectEnvironment();\r\n if (config.verbose) {\r\n printVerbose(`Environment: ${JSON.stringify(envContext, null, 2)}`, true);\r\n }\r\n \r\n try {\r\n let isFirstChunk = true;\r\n const result = await analyzeCommandStream(command, output, config, envContext, {\r\n onExplanationUpdate: (text) => {\r\n if (isFirstChunk) {\r\n animation.stop();\r\n isFirstChunk = false;\r\n }\r\n // Stream using the polished UI function\r\n printStreamingExplanation(text);\r\n },\r\n });\r\n \r\n if (isFirstChunk) {\r\n animation.stop();\r\n }\r\n \r\n finalizeStreaming();\r\n console.log();\r\n\r\n // Handle suggestion\r\n if (result.suggestion && !options.explain) {\r\n printSuggestion(result.suggestion);\r\n\r\n // Ask for confirmation\r\n if (config.confirmBeforeRun) {\r\n const confirmation = await confirmCommand(result.suggestion.command);\r\n\r\n if (confirmation.action === \"run\") {\r\n await runCommand(result.suggestion.command);\r\n } else if (confirmation.action === \"edit\") {\r\n const edited = await editCommand(result.suggestion.command);\r\n if (edited) {\r\n await runCommand(edited);\r\n }\r\n }\r\n } else {\r\n // Auto-run (-y flag)\r\n await runCommand(result.suggestion.command);\r\n }\r\n }\r\n } catch (error) {\r\n animation.stop(); // Stop animation on error\r\n finalizeStreaming(); // Reset streaming state\r\n \r\n const err = error as Error;\r\n printError(`Failed to analyze command: ${err.message}`);\r\n \r\n if (config.verbose && err.stack) {\r\n console.log(err.stack);\r\n }\r\n \r\n process.exit(1);\r\n }\r\n });\r\n\r\nprogram.parse();\r\n","import { existsSync, readFileSync } from \"node:fs\";\r\nimport { homedir } from \"node:os\";\r\nimport { join } from \"node:path\";\r\n\r\nexport interface Config {\r\n /** Model identifier, e.g. 
\"anthropic/claude-sonnet-4-20250514\" or \"openai/gpt-4o\" */\r\n model: string;\r\n /** Provider to use: anthropic, openai, or google */\r\n provider: \"anthropic\" | \"openai\" | \"google\";\r\n /** API key for the selected provider */\r\n apiKey: string;\r\n /** Whether to ask for confirmation before running suggested commands */\r\n confirmBeforeRun: boolean;\r\n /** Show verbose/debug output */\r\n verbose: boolean;\r\n}\r\n\r\ninterface ConfigFile {\r\n model?: string;\r\n provider?: \"anthropic\" | \"openai\" | \"google\";\r\n apiKey?: string;\r\n confirmBeforeRun?: boolean;\r\n verbose?: boolean;\r\n}\r\n\r\nconst CONFIG_DIR = join(homedir(), \".tf-ai\");\r\nconst CONFIG_FILE = join(CONFIG_DIR, \"config.json\");\r\n\r\nfunction loadConfigFile(): ConfigFile {\r\n if (!existsSync(CONFIG_FILE)) {\r\n return {};\r\n }\r\n \r\n try {\r\n const content = readFileSync(CONFIG_FILE, \"utf-8\");\r\n return JSON.parse(content) as ConfigFile;\r\n } catch {\r\n return {};\r\n }\r\n}\r\n\r\nfunction detectProvider(model: string): \"anthropic\" | \"openai\" | \"google\" {\r\n if (model.startsWith(\"anthropic/\") || model.includes(\"claude\")) {\r\n return \"anthropic\";\r\n }\r\n if (model.startsWith(\"openai/\") || model.includes(\"gpt\")) {\r\n return \"openai\";\r\n }\r\n if (model.startsWith(\"google/\") || model.includes(\"gemini\")) {\r\n return \"google\";\r\n }\r\n // Default to anthropic\r\n return \"anthropic\";\r\n}\r\n\r\nfunction getApiKey(provider: \"anthropic\" | \"openai\" | \"google\", configKey?: string): string {\r\n // Config file key takes precedence\r\n if (configKey) {\r\n return configKey;\r\n }\r\n \r\n // Then check environment variables\r\n const envVars = {\r\n anthropic: \"ANTHROPIC_API_KEY\",\r\n openai: \"OPENAI_API_KEY\",\r\n google: \"GOOGLE_API_KEY\",\r\n } as const;\r\n \r\n const envVarName = envVars[provider];\r\n const envKey = process.env[envVarName];\r\n if (envKey) {\r\n return envKey;\r\n }\r\n \r\n // Also check generic key\r\n return process.env[\"TF_AI_API_KEY\"] ?? \"\";\r\n}\r\n\r\nexport function loadConfig(overrides?: Partial<Config>): Config {\r\n const file = loadConfigFile();\r\n \r\n // Determine model (priority: override > env > file > default)\r\n const model =\r\n overrides?.model ??\r\n process.env[\"TF_AI_MODEL\"] ??\r\n file.model ??\r\n \"claude-sonnet-4-5-20250929\";\r\n \r\n // Detect provider from model string\r\n const provider = \r\n overrides?.provider ?? \r\n file.provider ?? \r\n detectProvider(model);\r\n \r\n // Get API key\r\n const apiKey = getApiKey(provider, file.apiKey);\r\n \r\n return {\r\n model,\r\n provider,\r\n apiKey,\r\n confirmBeforeRun: overrides?.confirmBeforeRun ?? file.confirmBeforeRun ?? true,\r\n verbose: overrides?.verbose ?? file.verbose ?? false,\r\n };\r\n}\r\n\r\nexport function validateConfig(config: Config): { valid: boolean; error?: string } {\r\n if (!config.apiKey) {\r\n const envVar = {\r\n anthropic: \"ANTHROPIC_API_KEY\",\r\n openai: \"OPENAI_API_KEY\",\r\n google: \"GOOGLE_API_KEY\",\r\n }[config.provider];\r\n \r\n return {\r\n valid: false,\r\n error: `No API key found. 
Set ${envVar} environment variable or add \"apiKey\" to ${CONFIG_FILE}`,\r\n };\r\n }\r\n \r\n return { valid: true };\r\n}\r\n\r\nexport { CONFIG_DIR, CONFIG_FILE };\r\n","import { streamText, Output } from \"ai\";\r\nimport { anthropic } from \"@ai-sdk/anthropic\";\r\nimport { openai } from \"@ai-sdk/openai\";\r\nimport { google } from \"@ai-sdk/google\";\r\nimport type { Config } from \"../config.js\";\r\nimport { SYSTEM_PROMPT, formatUserMessage } from \"./prompts.js\";\r\nimport { analysisResponseSchema, type AnalysisResponse } from \"./schema.js\";\r\n\r\nexport interface CommandSuggestion {\r\n command: string;\r\n explanation: string;\r\n}\r\n\r\nexport interface AnalysisResult {\r\n explanation: string;\r\n suggestion: CommandSuggestion | undefined;\r\n}\r\n\r\nfunction getModel(config: Config) {\r\n const modelName = config.model.includes(\"/\") \r\n ? config.model.split(\"/\")[1]! \r\n : config.model;\r\n \r\n switch (config.provider) {\r\n case \"anthropic\":\r\n return anthropic(modelName);\r\n case \"openai\":\r\n return openai(modelName);\r\n case \"google\":\r\n return google(modelName);\r\n default:\r\n return anthropic(modelName);\r\n }\r\n}\r\n\r\nimport type { EnvironmentContext } from \"../shell/context.js\";\r\n\r\nexport interface StreamCallbacks {\r\n onExplanationUpdate?: (text: string) => void;\r\n onComplete?: (result: AnalysisResult) => void;\r\n onError?: (error: Error) => void;\r\n}\r\n\r\n/**\r\n * Analyzes a command using streaming structured output.\r\n * Calls onExplanationUpdate as the explanation streams in.\r\n */\r\nexport async function analyzeCommandStream(\r\n command: string,\r\n output: string,\r\n config: Config,\r\n environment: EnvironmentContext,\r\n callbacks: StreamCallbacks = {}\r\n): Promise<AnalysisResult> {\r\n const model = getModel(config);\r\n \r\n const { partialOutputStream } = streamText({\r\n model,\r\n system: SYSTEM_PROMPT,\r\n prompt: formatUserMessage(command, output, environment),\r\n output: Output.object({ schema: analysisResponseSchema }),\r\n });\r\n\r\n let lastExplanation = \"\";\r\n let finalResponse: Partial<AnalysisResponse> = {};\r\n\r\n try {\r\n for await (const partial of partialOutputStream) {\r\n finalResponse = partial;\r\n \r\n // Stream explanation updates\r\n if (partial.explanation && partial.explanation !== lastExplanation) {\r\n lastExplanation = partial.explanation;\r\n callbacks.onExplanationUpdate?.(partial.explanation);\r\n }\r\n }\r\n } catch (error) {\r\n callbacks.onError?.(error as Error);\r\n throw error;\r\n }\r\n\r\n // Build final result\r\n const result: AnalysisResult = {\r\n explanation: finalResponse.explanation ?? \"\",\r\n suggestion: finalResponse.suggestedCommand \r\n ? {\r\n command: finalResponse.suggestedCommand,\r\n explanation: \"\", // Explanation is in the main field\r\n }\r\n : undefined,\r\n };\r\n\r\n callbacks.onComplete?.(result);\r\n return result;\r\n}\r\n\r\n/**\r\n * Non-streaming version for simpler use cases.\r\n */\r\nexport async function analyzeCommand(\r\n command: string,\r\n output: string,\r\n config: Config,\r\n environment: EnvironmentContext\r\n): Promise<AnalysisResult> {\r\n return analyzeCommandStream(command, output, config, environment);\r\n}\r\n","import type { EnvironmentContext } from \"../shell/context.js\";\r\n\r\nexport const SYSTEM_PROMPT = `You are an expert terminal assistant that helps developers understand command output and troubleshoot issues.\r\n\r\nGiven a command and its output, analyze the situation and provide helpful insight. 
The output could be:\r\n- An error message (syntax error, permission denied, command not found, etc.)\r\n- A warning or unexpected behavior\r\n- A build failure, test failure, or deployment issue\r\n- API/network errors\r\n- Confusing or unclear output that needs explanation\r\n- Or even successful output that the user wants to understand better\r\n\r\nYour response should:\r\n1. **Explain** what happened - interpret the output in plain, helpful terms\r\n2. **Diagnose** the root cause if there's an issue.\r\n3. **Suggest a command** (optional) - only if there's a clear action the user can take\r\n\r\nGuidelines:\r\n- Be concise but thorough (if the problem is complex/deep, you have more leeway)\r\n- If you suggest a command, explain why / what it will do\r\n- If you're unsure or lacking enough context, say so\r\n- Tailor suggestions to the user's shell and OS`;\r\n\r\nexport function formatUserMessage(\r\n command: string, \r\n output: string,\r\n context: EnvironmentContext\r\n): string {\r\n const projectInfo = context.projectType ? `\\n- Project Type: ${context.projectType}` : '';\r\n \r\n return `Command: ${command}\r\n\r\nOutput:\r\n${output || \"(no output)\"}\r\n\r\nEnvironment:\r\n- Shell: ${context.shell}\r\n- OS: ${context.os}\r\n- Working Directory: ${context.cwd}${projectInfo}\r\n\r\nAnalyze this command and its output. Explain what happened and suggest a follow-up action if appropriate.`;\r\n}\r\n","import { z } from \"zod\";\r\n\r\n// Response schema for the AI analysis\r\nexport const analysisResponseSchema = z.object({\r\n explanation: z\r\n .string()\r\n .describe(\r\n \"Clear, helpful explanation of what happened. Interpret the output, diagnose issues, and provide context.\",\r\n ),\r\n suggestedCommand: z\r\n .string()\r\n .nullable()\r\n .describe(\r\n \"A follow-up command the user could run, if applicable. Could be a fix, a next step, or a diagnostic command. Pass null if no command is suggested\",\r\n ),\r\n});\r\n\r\nexport type AnalysisResponse = z.infer<typeof analysisResponseSchema>;\r\n","import { exec } from \"node:child_process\";\r\nimport { promisify } from \"node:util\";\r\nimport type { ShellAdapter, CommandContext } from \"./types.js\";\r\n\r\nconst execAsync = promisify(exec);\r\n\r\nexport class PowerShellAdapter implements ShellAdapter {\r\n async getLastCommand(): Promise<CommandContext | null> {\r\n // This is called when the user runs tf-ai without explicit --command\r\n // We try to get the last command from PowerShell history\r\n try {\r\n const { stdout } = await execAsync(\r\n 'powershell -Command \"(Get-History -Count 1).CommandLine\"',\r\n { encoding: \"utf-8\" }\r\n );\r\n \r\n const command = stdout.trim();\r\n if (!command) {\r\n return null;\r\n }\r\n \r\n // We can't reliably get the output of the previous command after the fact\r\n // The user should use the shell function which captures output in real-time\r\n return {\r\n command,\r\n output: \"(Output not captured - use the 'fuck' shell function for full functionality)\",\r\n };\r\n } catch {\r\n return null;\r\n }\r\n }\r\n\r\n async executeCommand(command: string): Promise<{ output: string; exitCode: number }> {\r\n try {\r\n const { stdout, stderr } = await execAsync(\r\n `powershell -Command \"${command.replace(/\"/g, '\\\\\"')}\"`,\r\n { encoding: \"utf-8\" }\r\n );\r\n return {\r\n output: stdout + stderr,\r\n exitCode: 0,\r\n };\r\n } catch (error) {\r\n const execError = error as { stdout?: string; stderr?: string; code?: number };\r\n return {\r\n output: (execError.stdout ?? 
\"\") + (execError.stderr ?? \"\"),\r\n exitCode: execError.code ?? 1,\r\n };\r\n }\r\n }\r\n\r\n getSetupInstructions(): string {\r\n return `\r\nAdd this to your PowerShell profile ($PROFILE):\r\n\r\nfunction fuck {\r\n $lastCmd = (Get-History -Count 1).CommandLine\r\n if (-not $lastCmd) {\r\n Write-Host \"No command in history\" -ForegroundColor Red\r\n return\r\n }\r\n \r\n # Re-execute to capture output\r\n $output = try { \r\n Invoke-Expression $lastCmd 2>&1 | Out-String \r\n } catch { \r\n $_.Exception.Message \r\n }\r\n \r\n # Call tf-ai with captured command and output\r\n tf-ai --command $lastCmd --output $output\r\n}\r\n\r\nThen reload your profile:\r\n. $PROFILE\r\n`.trim();\r\n }\r\n}\r\n\r\nexport const powershell = new PowerShellAdapter();\r\n","import { existsSync } from \"node:fs\";\r\nimport { join } from \"node:path\";\r\nimport { platform } from \"node:os\";\r\n\r\nexport interface EnvironmentContext {\r\n shell: \"powershell\" | \"bash\" | \"zsh\" | \"cmd\" | \"fish\" | \"unknown\";\r\n os: \"windows\" | \"macos\" | \"linux\";\r\n cwd: string;\r\n projectType?: \"git\" | \"nodejs\" | \"python\" | \"docker\";\r\n}\r\n\r\nfunction detectShell(): EnvironmentContext[\"shell\"] {\r\n // Check for PowerShell-specific env var\r\n if (process.env[\"PSModulePath\"]) {\r\n return \"powershell\";\r\n }\r\n \r\n // Check SHELL env var (Unix-like systems)\r\n const shell = process.env[\"SHELL\"];\r\n if (shell) {\r\n if (shell.includes(\"bash\")) return \"bash\";\r\n if (shell.includes(\"zsh\")) return \"zsh\";\r\n if (shell.includes(\"fish\")) return \"fish\";\r\n }\r\n \r\n // Check for CMD on Windows\r\n if (process.env[\"ComSpec\"]?.includes(\"cmd.exe\")) {\r\n return \"cmd\";\r\n }\r\n \r\n return \"unknown\";\r\n}\r\n\r\nfunction detectOS(): EnvironmentContext[\"os\"] {\r\n const os = platform();\r\n \r\n switch (os) {\r\n case \"win32\":\r\n return \"windows\";\r\n case \"darwin\":\r\n return \"macos\";\r\n case \"linux\":\r\n return \"linux\";\r\n default:\r\n // Fallback to linux for other Unix-like systems\r\n return \"linux\";\r\n }\r\n}\r\n\r\nfunction detectProjectType(cwd: string): EnvironmentContext[\"projectType\"] {\r\n // Check in order of specificity\r\n if (existsSync(join(cwd, \".git\"))) {\r\n return \"git\";\r\n }\r\n \r\n if (existsSync(join(cwd, \"package.json\"))) {\r\n return \"nodejs\";\r\n }\r\n \r\n if (existsSync(join(cwd, \"requirements.txt\")) || existsSync(join(cwd, \"pyproject.toml\"))) {\r\n return \"python\";\r\n }\r\n \r\n if (existsSync(join(cwd, \"Dockerfile\"))) {\r\n return \"docker\";\r\n }\r\n \r\n return undefined;\r\n}\r\n\r\nexport function detectEnvironment(): EnvironmentContext {\r\n const cwd = process.cwd();\r\n const projectType = detectProjectType(cwd);\r\n \r\n return {\r\n shell: detectShell(),\r\n os: detectOS(),\r\n cwd,\r\n ...(projectType ? 
{ projectType } : {}),\r\n };\r\n}\r\n","import chalk from \"chalk\";\r\nimport type { CommandSuggestion, AnalysisResult } from \"../ai/index.js\";\r\nimport { theme } from \"./theme.js\";\r\n\r\nconst ICONS = {\r\n robot: \"🤖\",\r\n lightbulb: \"💡\",\r\n command: \"➜\",\r\n warning: \"⚠️\",\r\n error: \"❌\",\r\n success: \"✓\",\r\n thinking: \"🧠\",\r\n};\r\n\r\nexport function printHeader(): void {\r\n // Clean header using theme\r\n console.log(theme.header(`\\n${ICONS.robot} tf-ai\\n`));\r\n}\r\n\r\nexport function printExplanation(result: AnalysisResult): void {\r\n if (result.explanation) {\r\n console.log(theme.explanation(result.explanation));\r\n console.log();\r\n }\r\n}\r\n\r\nlet lastPrintedLength = 0;\r\n\r\n/**\r\n * For streaming: prints only the new part of the explanation\r\n */\r\nexport function printStreamingExplanation(fullText: string): void {\r\n if (!fullText) return;\r\n \r\n // Calculate the new chunk to print\r\n const newText = fullText.slice(lastPrintedLength);\r\n \r\n if (newText) {\r\n // Use theme explanation color\r\n process.stdout.write(theme.explanation(newText));\r\n lastPrintedLength = fullText.length;\r\n }\r\n}\r\n\r\n/**\r\n * Finalize streaming output with newline and reset state\r\n */\r\nexport function finalizeStreaming(): void {\r\n console.log(); // New line after streaming completes\r\n lastPrintedLength = 0; // Reset for next run\r\n}\r\n\r\nexport function printSuggestion(suggestion: CommandSuggestion): void {\r\n console.log(theme.suggestionLabel(`${ICONS.lightbulb} Suggested fix:`));\r\n console.log();\r\n // Using theme.command color\r\n console.log(theme.command(` ${ICONS.command} ${suggestion.command} `));\r\n console.log();\r\n}\r\n\r\nexport function printError(message: string): void {\r\n console.error(theme.error(`${ICONS.error} ${message}`));\r\n}\r\n\r\nexport function printWarning(message: string): void {\r\n console.log(theme.warning(`${ICONS.warning} ${message}`));\r\n}\r\n\r\nexport function printSuccess(message: string): void {\r\n console.log(theme.success(`${ICONS.success} ${message}`));\r\n}\r\n\r\nexport function printCommand(label: string, command: string): void {\r\n console.log(theme.muted(`${label}: `) + theme.text(command));\r\n}\r\n\r\nexport function printVerbose(message: string, verbose: boolean): void {\r\n if (verbose) {\r\n console.log(theme.muted(`[debug] ${message}`));\r\n }\r\n}\r\n","import chalk from \"chalk\";\r\n\r\n/**\r\n * Anthropic-Inspired Palette\r\n * Sophisticated, warm, and professional.\r\n */\r\nexport const COLORS = {\r\n stone: \"#F5F2E8\", // Warmest White / Base\r\n warmGray: \"#9E9A95\", // Muted text / Labels\r\n text: \"#E3DCD2\", // Main reading text (Warm Off-White)\r\n coral: \"#D97757\", // Primary Accent (Anthropic-like orange/coral) - Highlights\r\n maple: \"#C8A682\", // Secondary Accent - Interactions\r\n olive: \"#8E9C6D\", // Success\r\n sienna: \"#C1554D\", // Error\r\n charcoal: \"#2D2B29\", // Dark background text if needed\r\n};\r\n\r\n/**\r\n * Semantic Theme Definitions\r\n */\r\nexport const theme = {\r\n // --- Global Roles ---\r\n primary: chalk.hex(COLORS.coral),\r\n secondary: chalk.hex(COLORS.maple),\r\n success: chalk.hex(COLORS.olive),\r\n error: chalk.hex(COLORS.sienna),\r\n warning: chalk.hex(COLORS.maple), // Maple works well for warning too\r\n muted: chalk.hex(COLORS.warmGray),\r\n text: chalk.hex(COLORS.text),\r\n \r\n // --- Component Specifics ---\r\n \r\n // Header / Branding\r\n header: chalk.hex(COLORS.coral).bold,\r\n \r\n // The suggested command to 
run\r\n command: chalk.hex(COLORS.stone).bold, // Bright, distinct\r\n \r\n // The AI's explanation text\r\n explanation: chalk.hex(COLORS.text),\r\n \r\n // \"Suggested fix:\" label - subtle yet clear\r\n suggestionLabel: chalk.hex(COLORS.maple).bold,\r\n \r\n // Confidence levels\r\n confidenceHigh: chalk.hex(COLORS.olive),\r\n confidenceMedium: chalk.hex(COLORS.maple),\r\n confidenceLow: chalk.hex(COLORS.sienna),\r\n \r\n // --- Animation Colors ---\r\n // Glitch characters\r\n glitch1: chalk.hex(COLORS.coral),\r\n glitch2: chalk.hex(COLORS.maple),\r\n // Base text for \"Analyzing...\"\r\n glitchBase: chalk.hex(COLORS.warmGray),\r\n};\r\n","import * as readline from \"node:readline\";\r\nimport chalk from \"chalk\";\r\nimport { exec } from \"node:child_process\";\r\nimport { promisify } from \"node:util\";\r\nimport { theme } from \"./theme.js\";\r\n\r\nconst execAsync = promisify(exec);\r\n\r\nexport interface ConfirmResult {\r\n action: \"run\" | \"cancel\" | \"edit\";\r\n editedCommand?: string;\r\n}\r\n\r\nexport async function confirmCommand(command: string): Promise<ConfirmResult> {\r\n return new Promise((resolve) => {\r\n // \"Press:\" is now muted (Warm Gray), not Red/Pink\r\n console.log(theme.muted(\"Press:\"));\r\n \r\n // Keys highlighted in Secondary (Maple) / Accent (Coral)\r\n // Using formatted strings to mix colors\r\n console.log(\r\n theme.secondary(\" [Enter]\") + theme.muted(\" Run command\")\r\n );\r\n console.log(\r\n theme.secondary(\" [e] \") + theme.muted(\" Edit command\")\r\n );\r\n console.log(\r\n theme.secondary(\" [Esc/q]\") + theme.muted(\" Cancel\")\r\n );\r\n console.log();\r\n\r\n // Set up raw mode to capture single keypresses\r\n if (process.stdin.isTTY) {\r\n process.stdin.setRawMode(true);\r\n }\r\n process.stdin.resume();\r\n process.stdin.setEncoding(\"utf8\");\r\n\r\n const cleanup = () => {\r\n if (process.stdin.isTTY) {\r\n process.stdin.setRawMode(false);\r\n }\r\n process.stdin.pause();\r\n process.stdin.removeAllListeners(\"data\");\r\n };\r\n\r\n process.stdin.once(\"data\", (key: string) => {\r\n cleanup();\r\n\r\n // Enter key\r\n if (key === \"\\r\" || key === \"\\n\") {\r\n resolve({ action: \"run\" });\r\n return;\r\n }\r\n\r\n // Escape or q\r\n if (key === \"\\u001b\" || key === \"q\" || key === \"\\u0003\") {\r\n console.log(theme.warning(\"Cancelled\"));\r\n resolve({ action: \"cancel\" });\r\n return;\r\n }\r\n\r\n // Edit\r\n if (key === \"e\" || key === \"E\") {\r\n resolve({ action: \"edit\" });\r\n return;\r\n }\r\n\r\n // Unknown key - treat as cancel\r\n console.log(theme.warning(\"Cancelled\"));\r\n resolve({ action: \"cancel\" });\r\n });\r\n });\r\n}\r\n\r\nexport async function editCommand(originalCommand: string): Promise<string | null> {\r\n return new Promise((resolve) => {\r\n const rl = readline.createInterface({\r\n input: process.stdin,\r\n output: process.stdout,\r\n });\r\n\r\n console.log(theme.secondary(\"\\nEdit command (press Enter to confirm, Ctrl+C to cancel):\"));\r\n \r\n rl.question(theme.muted(\"> \"), (answer) => {\r\n rl.close();\r\n const command = answer.trim();\r\n resolve(command || originalCommand);\r\n });\r\n\r\n rl.on(\"close\", () => {\r\n if (!rl.terminal) {\r\n resolve(null);\r\n }\r\n });\r\n });\r\n}\r\n\r\nexport async function runCommand(command: string): Promise<void> {\r\n console.log(theme.secondary(`\\nRunning: ${command}\\n`));\r\n console.log(theme.muted(\"─\".repeat(40)));\r\n \r\n try {\r\n // Run the command and stream output\r\n const { stdout, stderr } = await 
execAsync(\r\n `powershell -Command \"${command.replace(/\"/g, '\\\\\"')}\"`,\r\n { encoding: \"utf-8\" }\r\n );\r\n \r\n if (stdout) console.log(stdout);\r\n if (stderr) console.error(theme.error(stderr));\r\n \r\n console.log(theme.muted(\"─\".repeat(40)));\r\n console.log(theme.success(\"✓ Command completed\"));\r\n } catch (error) {\r\n const execError = error as { stdout?: string; stderr?: string; code?: number };\r\n if (execError.stdout) console.log(execError.stdout);\r\n if (execError.stderr) console.error(theme.error(execError.stderr));\r\n \r\n console.log(theme.muted(\"─\".repeat(40)));\r\n console.log(theme.error(`✗ Command failed with exit code ${execError.code ?? 1}`));\r\n }\r\n}\r\n","import { theme } from \"./theme.js\";\n\n/**\n * Creates a \"hacking\" style animated text where random characters glitch\n * Refined for subtlety and elegance\n */\nexport function createHackingAnimation(text: string): {\n start: () => void;\n stop: () => void;\n} {\n const glitchChars = ['#', '$', '%', '*', '?', '~', '^', '+', '=', ':'];\n let interval: NodeJS.Timeout | null = null;\n let isRunning = false;\n \n const originalText = text;\n const textArray = text.split('');\n \n function glitchText(): string {\n return textArray.map((char, i) => {\n // Skip spaces\n if (char === ' ') return ' ';\n \n // Random chance to glitch (much lower probability for subtlety: 5%)\n if (Math.random() < 0.05) {\n // Use theme colors for glitches\n return Math.random() > 0.5 \n ? theme.glitch1(glitchChars[Math.floor(Math.random() * glitchChars.length)]!)\n : theme.glitch2(glitchChars[Math.floor(Math.random() * glitchChars.length)]!);\n }\n \n // Base text color\n return theme.glitchBase(char);\n }).join('');\n }\n \n function render() {\n if (!isRunning) return;\n \n // Clear line and print glitched text\n process.stdout.write('\\r' + glitchText());\n }\n \n return {\n start() {\n if (isRunning) return;\n isRunning = true;\n \n // Print initial state\n process.stdout.write(theme.glitchBase(originalText));\n \n // Start glitching at slower intervals (120ms) for elegance\n interval = setInterval(render, 120);\n },\n \n stop() {\n if (!isRunning) return;\n isRunning = false;\n \n if (interval) {\n clearInterval(interval);\n interval = null;\n }\n \n // Clear the line\n process.stdout.write('\\r' + ' '.repeat(originalText.length) + '\\r');\n },\n 
};\n}\n"],"mappings":";;;;AACA,SAAS,eAAe;;;ACDxB,SAAS,YAAY,oBAAoB;AACzC,SAAS,eAAe;AACxB,SAAS,YAAY;AAuBrB,IAAM,aAAa,KAAK,QAAQ,GAAG,QAAQ;AAC3C,IAAM,cAAc,KAAK,YAAY,aAAa;AAElD,SAAS,iBAA6B;AACpC,MAAI,CAAC,WAAW,WAAW,GAAG;AAC5B,WAAO,CAAC;AAAA,EACV;AAEA,MAAI;AACF,UAAM,UAAU,aAAa,aAAa,OAAO;AACjD,WAAO,KAAK,MAAM,OAAO;AAAA,EAC3B,QAAQ;AACN,WAAO,CAAC;AAAA,EACV;AACF;AAEA,SAAS,eAAe,OAAkD;AACxE,MAAI,MAAM,WAAW,YAAY,KAAK,MAAM,SAAS,QAAQ,GAAG;AAC9D,WAAO;AAAA,EACT;AACA,MAAI,MAAM,WAAW,SAAS,KAAK,MAAM,SAAS,KAAK,GAAG;AACxD,WAAO;AAAA,EACT;AACA,MAAI,MAAM,WAAW,SAAS,KAAK,MAAM,SAAS,QAAQ,GAAG;AAC3D,WAAO;AAAA,EACT;AAEA,SAAO;AACT;AAEA,SAAS,UAAU,UAA6C,WAA4B;AAE1F,MAAI,WAAW;AACb,WAAO;AAAA,EACT;AAGA,QAAM,UAAU;AAAA,IACd,WAAW;AAAA,IACX,QAAQ;AAAA,IACR,QAAQ;AAAA,EACV;AAEA,QAAM,aAAa,QAAQ,QAAQ;AACnC,QAAM,SAAS,QAAQ,IAAI,UAAU;AACrC,MAAI,QAAQ;AACV,WAAO;AAAA,EACT;AAGA,SAAO,QAAQ,IAAI,eAAe,KAAK;AACzC;AAEO,SAAS,WAAW,WAAqC;AAC9D,QAAM,OAAO,eAAe;AAG5B,QAAM,QACJ,WAAW,SACX,QAAQ,IAAI,aAAa,KACzB,KAAK,SACL;AAGF,QAAM,WACJ,WAAW,YACX,KAAK,YACL,eAAe,KAAK;AAGtB,QAAM,SAAS,UAAU,UAAU,KAAK,MAAM;AAE9C,SAAO;AAAA,IACL;AAAA,IACA;AAAA,IACA;AAAA,IACA,kBAAkB,WAAW,oBAAoB,KAAK,oBAAoB;AAAA,IAC1E,SAAS,WAAW,WAAW,KAAK,WAAW;AAAA,EACjD;AACF;AAEO,SAAS,eAAe,QAAoD;AACjF,MAAI,CAAC,OAAO,QAAQ;AAClB,UAAM,SAAS;AAAA,MACb,WAAW;AAAA,MACX,QAAQ;AAAA,MACR,QAAQ;AAAA,IACV,EAAE,OAAO,QAAQ;AAEjB,WAAO;AAAA,MACL,OAAO;AAAA,MACP,OAAO,yBAAyB,MAAM,4CAA4C,WAAW;AAAA,IAC/F;AAAA,EACF;AAEA,SAAO,EAAE,OAAO,KAAK;AACvB;;;ACzHA,SAAS,YAAY,cAAc;AACnC,SAAS,iBAAiB;AAC1B,SAAS,cAAc;AACvB,SAAS,cAAc;;;ACDhB,IAAM,gBAAgB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAqBtB,SAAS,kBACd,SACA,QACA,SACQ;AACR,QAAM,cAAc,QAAQ,cAAc;AAAA,kBAAqB,QAAQ,WAAW,KAAK;AAEvF,SAAO,YAAY,OAAO;AAAA;AAAA;AAAA,EAG1B,UAAU,aAAa;AAAA;AAAA;AAAA,WAGd,QAAQ,KAAK;AAAA,QAChB,QAAQ,EAAE;AAAA,uBACK,QAAQ,GAAG,GAAG,WAAW;AAAA;AAAA;AAGhD;;;ACzCA,SAAS,SAAS;AAGX,IAAM,yBAAyB,EAAE,OAAO;AAAA,EAC7C,aAAa,EACV,OAAO,EACP;AAAA,IACC;AAAA,EACF;AAAA,EACF,kBAAkB,EACf,OAAO,EACP,SAAS,EACT;AAAA,IACC;AAAA,EACF;AACJ,CAAC;;;AFGD,SAAS,SAAS,QAAgB;AAChC,QAAM,YAAY,OAAO,MAAM,SAAS,GAAG,IACvC,OAAO,MAAM,MAAM,GAAG,EAAE,CAAC,IACzB,OAAO;AAEX,UAAQ,OAAO,UAAU;AAAA,IACvB,KAAK;AACH,aAAO,UAAU,SAAS;AAAA,IAC5B,KAAK;AACH,aAAO,OAAO,SAAS;AAAA,IACzB,KAAK;AACH,aAAO,OAAO,SAAS;AAAA,IACzB;AACE,aAAO,UAAU,SAAS;AAAA,EAC9B;AACF;AAcA,eAAsB,qBACpB,SACA,QACA,QACA,aACA,YAA6B,CAAC,GACL;AACzB,QAAM,QAAQ,SAAS,MAAM;AAE7B,QAAM,EAAE,oBAAoB,IAAI,WAAW;AAAA,IACzC;AAAA,IACA,QAAQ;AAAA,IACR,QAAQ,kBAAkB,SAAS,QAAQ,WAAW;AAAA,IACtD,QAAQ,OAAO,OAAO,EAAE,QAAQ,uBAAuB,CAAC;AAAA,EAC1D,CAAC;AAED,MAAI,kBAAkB;AACtB,MAAI,gBAA2C,CAAC;AAEhD,MAAI;AACF,qBAAiB,WAAW,qBAAqB;AAC/C,sBAAgB;AAGhB,UAAI,QAAQ,eAAe,QAAQ,gBAAgB,iBAAiB;AAClE,0BAAkB,QAAQ;AAC1B,kBAAU,sBAAsB,QAAQ,WAAW;AAAA,MACrD;AAAA,IACF;AAAA,EACF,SAAS,OAAO;AACd,cAAU,UAAU,KAAc;AAClC,UAAM;AAAA,EACR;AAGA,QAAM,SAAyB;AAAA,IAC7B,aAAa,cAAc,eAAe;AAAA,IAC1C,YAAY,cAAc,mBACtB;AAAA,MACE,SAAS,cAAc;AAAA,MACvB,aAAa;AAAA;AAAA,IACf,IACA;AAAA,EACN;AAEA,YAAU,aAAa,MAAM;AAC7B,SAAO;AACT;;;AG9FA,SAAS,YAAY;AACrB,SAAS,iBAAiB;AAG1B,IAAM,YAAY,UAAU,IAAI;AAEzB,IAAM,oBAAN,MAAgD;AAAA,EACrD,MAAM,iBAAiD;AAGrD,QAAI;AACF,YAAM,EAAE,OAAO,IAAI,MAAM;AAAA,QACvB;AAAA,QACA,EAAE,UAAU,QAAQ;AAAA,MACtB;AAEA,YAAM,UAAU,OAAO,KAAK;AAC5B,UAAI,CAAC,SAAS;AACZ,eAAO;AAAA,MACT;AAIA,aAAO;AAAA,QACL;AAAA,QACA,QAAQ;AAAA,MACV;AAAA,IACF,QAAQ;AACN,aAAO;AAAA,IACT;AAAA,EACF;AAAA,EAEA,MAAM,eAAe,SAAgE;AACnF,QAAI;AACF,YAAM,EAAE,QAAQ,OAAO,IAAI,MAAM;AAAA,QAC/B,wBAAwB,QAAQ,QAAQ,MAAM,KAAK,CAAC;AAAA,QACpD,EAAE,UAAU,QAAQ;AAAA,MACtB;AACA,aAAO;AAAA,QACL,QAAQ,SAAS;AAAA,QACjB,UAAU;AAAA,MACZ;AAAA,IACF,SAAS,OAAO;AACd,Y
AAM,YAAY;AAClB,aAAO;AAAA,QACL,SAAS,UAAU,UAAU,OAAO,UAAU,UAAU;AAAA,QACxD,UAAU,UAAU,QAAQ;AAAA,MAC9B;AAAA,IACF;AAAA,EACF;AAAA,EAEA,uBAA+B;AAC7B,WAAO;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAuBT,KAAK;AAAA,EACL;AACF;AAEO,IAAM,aAAa,IAAI,kBAAkB;;;AC/EhD,SAAS,cAAAA,mBAAkB;AAC3B,SAAS,QAAAC,aAAY;AACrB,SAAS,gBAAgB;AASzB,SAAS,cAA2C;AAElD,MAAI,QAAQ,IAAI,cAAc,GAAG;AAC/B,WAAO;AAAA,EACT;AAGA,QAAM,QAAQ,QAAQ,IAAI,OAAO;AACjC,MAAI,OAAO;AACT,QAAI,MAAM,SAAS,MAAM,EAAG,QAAO;AACnC,QAAI,MAAM,SAAS,KAAK,EAAG,QAAO;AAClC,QAAI,MAAM,SAAS,MAAM,EAAG,QAAO;AAAA,EACrC;AAGA,MAAI,QAAQ,IAAI,SAAS,GAAG,SAAS,SAAS,GAAG;AAC/C,WAAO;AAAA,EACT;AAEA,SAAO;AACT;AAEA,SAAS,WAAqC;AAC5C,QAAM,KAAK,SAAS;AAEpB,UAAQ,IAAI;AAAA,IACV,KAAK;AACH,aAAO;AAAA,IACT,KAAK;AACH,aAAO;AAAA,IACT,KAAK;AACH,aAAO;AAAA,IACT;AAEE,aAAO;AAAA,EACX;AACF;AAEA,SAAS,kBAAkB,KAAgD;AAEzE,MAAID,YAAWC,MAAK,KAAK,MAAM,CAAC,GAAG;AACjC,WAAO;AAAA,EACT;AAEA,MAAID,YAAWC,MAAK,KAAK,cAAc,CAAC,GAAG;AACzC,WAAO;AAAA,EACT;AAEA,MAAID,YAAWC,MAAK,KAAK,kBAAkB,CAAC,KAAKD,YAAWC,MAAK,KAAK,gBAAgB,CAAC,GAAG;AACxF,WAAO;AAAA,EACT;AAEA,MAAID,YAAWC,MAAK,KAAK,YAAY,CAAC,GAAG;AACvC,WAAO;AAAA,EACT;AAEA,SAAO;AACT;AAEO,SAAS,oBAAwC;AACtD,QAAM,MAAM,QAAQ,IAAI;AACxB,QAAM,cAAc,kBAAkB,GAAG;AAEzC,SAAO;AAAA,IACL,OAAO,YAAY;AAAA,IACnB,IAAI,SAAS;AAAA,IACb;AAAA,IACA,GAAI,cAAc,EAAE,YAAY,IAAI,CAAC;AAAA,EACvC;AACF;;;AChFA,OAAkB;;;ACAlB,OAAO,WAAW;AAMX,IAAM,SAAS;AAAA,EACpB,OAAO;AAAA;AAAA,EACP,UAAU;AAAA;AAAA,EACV,MAAM;AAAA;AAAA,EACN,OAAO;AAAA;AAAA,EACP,OAAO;AAAA;AAAA,EACP,OAAO;AAAA;AAAA,EACP,QAAQ;AAAA;AAAA,EACR,UAAU;AAAA;AACZ;AAKO,IAAM,QAAQ;AAAA;AAAA,EAEnB,SAAS,MAAM,IAAI,OAAO,KAAK;AAAA,EAC/B,WAAW,MAAM,IAAI,OAAO,KAAK;AAAA,EACjC,SAAS,MAAM,IAAI,OAAO,KAAK;AAAA,EAC/B,OAAO,MAAM,IAAI,OAAO,MAAM;AAAA,EAC9B,SAAS,MAAM,IAAI,OAAO,KAAK;AAAA;AAAA,EAC/B,OAAO,MAAM,IAAI,OAAO,QAAQ;AAAA,EAChC,MAAM,MAAM,IAAI,OAAO,IAAI;AAAA;AAAA;AAAA,EAK3B,QAAQ,MAAM,IAAI,OAAO,KAAK,EAAE;AAAA;AAAA,EAGhC,SAAS,MAAM,IAAI,OAAO,KAAK,EAAE;AAAA;AAAA;AAAA,EAGjC,aAAa,MAAM,IAAI,OAAO,IAAI;AAAA;AAAA,EAGlC,iBAAiB,MAAM,IAAI,OAAO,KAAK,EAAE;AAAA;AAAA,EAGzC,gBAAgB,MAAM,IAAI,OAAO,KAAK;AAAA,EACtC,kBAAkB,MAAM,IAAI,OAAO,KAAK;AAAA,EACxC,eAAe,MAAM,IAAI,OAAO,MAAM;AAAA;AAAA;AAAA,EAItC,SAAS,MAAM,IAAI,OAAO,KAAK;AAAA,EAC/B,SAAS,MAAM,IAAI,OAAO,KAAK;AAAA;AAAA,EAE/B,YAAY,MAAM,IAAI,OAAO,QAAQ;AACvC;;;ADnDA,IAAM,QAAQ;AAAA,EACZ,OAAO;AAAA,EACP,WAAW;AAAA,EACX,SAAS;AAAA,EACT,SAAS;AAAA,EACT,OAAO;AAAA,EACP,SAAS;AAAA,EACT,UAAU;AACZ;AAcA,IAAI,oBAAoB;AAKjB,SAAS,0BAA0B,UAAwB;AAChE,MAAI,CAAC,SAAU;AAGf,QAAM,UAAU,SAAS,MAAM,iBAAiB;AAEhD,MAAI,SAAS;AAEX,YAAQ,OAAO,MAAM,MAAM,YAAY,OAAO,CAAC;AAC/C,wBAAoB,SAAS;AAAA,EAC/B;AACF;AAKO,SAAS,oBAA0B;AACxC,UAAQ,IAAI;AACZ,sBAAoB;AACtB;AAEO,SAAS,gBAAgB,YAAqC;AACnE,UAAQ,IAAI,MAAM,gBAAgB,GAAG,MAAM,SAAS,iBAAiB,CAAC;AACtE,UAAQ,IAAI;AAEZ,UAAQ,IAAI,MAAM,QAAQ,KAAK,MAAM,OAAO,IAAI,WAAW,OAAO,IAAI,CAAC;AACvE,UAAQ,IAAI;AACd;AAEO,SAAS,WAAW,SAAuB;AAChD,UAAQ,MAAM,MAAM,MAAM,GAAG,MAAM,KAAK,IAAI,OAAO,EAAE,CAAC;AACxD;AAEO,SAAS,aAAa,SAAuB;AAClD,UAAQ,IAAI,MAAM,QAAQ,GAAG,MAAM,OAAO,IAAI,OAAO,EAAE,CAAC;AAC1D;AAMO,SAAS,aAAa,OAAe,SAAuB;AACjE,UAAQ,IAAI,MAAM,MAAM,GAAG,KAAK,IAAI,IAAI,MAAM,KAAK,OAAO,CAAC;AAC7D;AAEO,SAAS,aAAa,SAAiB,SAAwB;AACpE,MAAI,SAAS;AACX,YAAQ,IAAI,MAAM,MAAM,WAAW,OAAO,EAAE,CAAC;AAAA,EAC/C;AACF;;;AEhFA,YAAY,cAAc;AAC1B,OAAkB;AAClB,SAAS,QAAAC,aAAY;AACrB,SAAS,aAAAC,kBAAiB;AAG1B,IAAMC,aAAYC,WAAUC,KAAI;AAOhC,eAAsB,eAAe,SAAyC;AAC5E,SAAO,IAAI,QAAQ,CAAC,YAAY;AAE9B,YAAQ,IAAI,MAAM,MAAM,QAAQ,CAAC;AAIjC,YAAQ;AAAA,MACN,MAAM,UAAU,WAAW,IAAI,MAAM,MAAM,cAAc;AAAA,IAC3D;AACA,YAAQ;AAAA,MACN,MAAM,UAAU,WAAW,IAAI,MAAM,MAAM,eAAe;AAAA,IAC5D;AACA,Y
AAQ;AAAA,MACN,MAAM,UAAU,WAAW,IAAI,MAAM,MAAM,SAAS;AAAA,IACtD;AACA,YAAQ,IAAI;AAGZ,QAAI,QAAQ,MAAM,OAAO;AACvB,cAAQ,MAAM,WAAW,IAAI;AAAA,IAC/B;AACA,YAAQ,MAAM,OAAO;AACrB,YAAQ,MAAM,YAAY,MAAM;AAEhC,UAAM,UAAU,MAAM;AACpB,UAAI,QAAQ,MAAM,OAAO;AACvB,gBAAQ,MAAM,WAAW,KAAK;AAAA,MAChC;AACA,cAAQ,MAAM,MAAM;AACpB,cAAQ,MAAM,mBAAmB,MAAM;AAAA,IACzC;AAEA,YAAQ,MAAM,KAAK,QAAQ,CAAC,QAAgB;AAC1C,cAAQ;AAGR,UAAI,QAAQ,QAAQ,QAAQ,MAAM;AAChC,gBAAQ,EAAE,QAAQ,MAAM,CAAC;AACzB;AAAA,MACF;AAGA,UAAI,QAAQ,UAAY,QAAQ,OAAO,QAAQ,KAAU;AACvD,gBAAQ,IAAI,MAAM,QAAQ,WAAW,CAAC;AACtC,gBAAQ,EAAE,QAAQ,SAAS,CAAC;AAC5B;AAAA,MACF;AAGA,UAAI,QAAQ,OAAO,QAAQ,KAAK;AAC9B,gBAAQ,EAAE,QAAQ,OAAO,CAAC;AAC1B;AAAA,MACF;AAGA,cAAQ,IAAI,MAAM,QAAQ,WAAW,CAAC;AACtC,cAAQ,EAAE,QAAQ,SAAS,CAAC;AAAA,IAC9B,CAAC;AAAA,EACH,CAAC;AACH;AAEA,eAAsB,YAAY,iBAAiD;AACjF,SAAO,IAAI,QAAQ,CAAC,YAAY;AAC9B,UAAM,KAAc,yBAAgB;AAAA,MAClC,OAAO,QAAQ;AAAA,MACf,QAAQ,QAAQ;AAAA,IAClB,CAAC;AAED,YAAQ,IAAI,MAAM,UAAU,4DAA4D,CAAC;AAEzF,OAAG,SAAS,MAAM,MAAM,IAAI,GAAG,CAAC,WAAW;AACzC,SAAG,MAAM;AACT,YAAM,UAAU,OAAO,KAAK;AAC5B,cAAQ,WAAW,eAAe;AAAA,IACpC,CAAC;AAED,OAAG,GAAG,SAAS,MAAM;AACnB,UAAI,CAAC,GAAG,UAAU;AAChB,gBAAQ,IAAI;AAAA,MACd;AAAA,IACF,CAAC;AAAA,EACH,CAAC;AACH;AAEA,eAAsB,WAAW,SAAgC;AAC/D,UAAQ,IAAI,MAAM,UAAU;AAAA,WAAc,OAAO;AAAA,CAAI,CAAC;AACtD,UAAQ,IAAI,MAAM,MAAM,SAAI,OAAO,EAAE,CAAC,CAAC;AAEvC,MAAI;AAEF,UAAM,EAAE,QAAQ,OAAO,IAAI,MAAMF;AAAA,MAC/B,wBAAwB,QAAQ,QAAQ,MAAM,KAAK,CAAC;AAAA,MACpD,EAAE,UAAU,QAAQ;AAAA,IACtB;AAEA,QAAI,OAAQ,SAAQ,IAAI,MAAM;AAC9B,QAAI,OAAQ,SAAQ,MAAM,MAAM,MAAM,MAAM,CAAC;AAE7C,YAAQ,IAAI,MAAM,MAAM,SAAI,OAAO,EAAE,CAAC,CAAC;AACvC,YAAQ,IAAI,MAAM,QAAQ,0BAAqB,CAAC;AAAA,EAClD,SAAS,OAAO;AACd,UAAM,YAAY;AAClB,QAAI,UAAU,OAAQ,SAAQ,IAAI,UAAU,MAAM;AAClD,QAAI,UAAU,OAAQ,SAAQ,MAAM,MAAM,MAAM,UAAU,MAAM,CAAC;AAEjE,YAAQ,IAAI,MAAM,MAAM,SAAI,OAAO,EAAE,CAAC,CAAC;AACvC,YAAQ,IAAI,MAAM,MAAM,wCAAmC,UAAU,QAAQ,CAAC,EAAE,CAAC;AAAA,EACnF;AACF;;;ACpHO,SAAS,uBAAuB,MAGrC;AACA,QAAM,cAAc,CAAC,KAAK,KAAK,KAAK,KAAK,KAAK,KAAK,KAAK,KAAK,KAAK,GAAG;AACrE,MAAI,WAAkC;AACtC,MAAI,YAAY;AAEhB,QAAM,eAAe;AACrB,QAAM,YAAY,KAAK,MAAM,EAAE;AAE/B,WAAS,aAAqB;AAC5B,WAAO,UAAU,IAAI,CAAC,MAAM,MAAM;AAEhC,UAAI,SAAS,IAAK,QAAO;AAGzB,UAAI,KAAK,OAAO,IAAI,MAAM;AAExB,eAAO,KAAK,OAAO,IAAI,MACnB,MAAM,QAAQ,YAAY,KAAK,MAAM,KAAK,OAAO,IAAI,YAAY,MAAM,CAAC,CAAE,IAC1E,MAAM,QAAQ,YAAY,KAAK,MAAM,KAAK,OAAO,IAAI,YAAY,MAAM,CAAC,CAAE;AAAA,MAChF;AAGA,aAAO,MAAM,WAAW,IAAI;AAAA,IAC9B,CAAC,EAAE,KAAK,EAAE;AAAA,EACZ;AAEA,WAAS,SAAS;AAChB,QAAI,CAAC,UAAW;AAGhB,YAAQ,OAAO,MAAM,OAAO,WAAW,CAAC;AAAA,EAC1C;AAEA,SAAO;AAAA,IACL,QAAQ;AACN,UAAI,UAAW;AACf,kBAAY;AAGZ,cAAQ,OAAO,MAAM,MAAM,WAAW,YAAY,CAAC;AAGnD,iBAAW,YAAY,QAAQ,GAAG;AAAA,IACpC;AAAA,IAEA,OAAO;AACL,UAAI,CAAC,UAAW;AAChB,kBAAY;AAEZ,UAAI,UAAU;AACZ,sBAAc,QAAQ;AACtB,mBAAW;AAAA,MACb;AAGA,cAAQ,OAAO,MAAM,OAAO,IAAI,OAAO,aAAa,MAAM,IAAI,IAAI;AAAA,IACpE;AAAA,EACF;AACF;;;AVhDA,OAAkB;AAElB,IAAM,UAAU,IAAI,QAAQ;AAE5B,QACG,KAAK,OAAO,EACZ,YAAY,uEAAuE,EACnF,QAAQ,OAAO;AAElB,QACG,OAAO,2BAA2B,yBAAyB,EAC3D,OAAO,yBAAyB,0CAA0C,EAC1E,OAAO,iBAAiB,gDAAgD,EACxE,OAAO,uBAAuB,uDAAuD,EACrF,OAAO,aAAa,qDAAqD,EACzE,OAAO,iBAAiB,qBAAqB,EAC7C,OAAO,WAAW,+BAA+B,EACjD,OAAO,OAAO,YAQT;AAEJ,MAAI,QAAQ,OAAO;AACjB,YAAQ,IAAI,WAAW,qBAAqB,CAAC;AAC7C;AAAA,EACF;AAGA,QAAM,kBAAoD,CAAC;AAC3D,MAAI,QAAQ,UAAU,OAAW,iBAAgB,QAAQ,QAAQ;AACjE,MAAI,QAAQ,YAAY,OAAW,iBAAgB,UAAU,QAAQ;AACrE,MAAI,QAAQ,QAAQ,OAAW,iBAAgB,mBAAmB,CAAC,QAAQ;AAE3E,QAAM,SAAS,WAAW,eAAe;AAEzC,eAAa,WAAW,KAAK,UAAU,QAAQ,MAAM,CAAC,CAAC,IAAI,OAAO,OAAO;AAEzE,QAAM,aAAa,eAAe,MAAM;AACxC,MAAI,CAAC,WAAW,OAAO;AACrB,eAAW,WAAW,KAAM;AAC5B,YAAQ,IAAI;AAAA,0BAA6B,WAAW,+CAA+C;AACnG,YAAQ,IAAI,wBAAwB;AACpC,YAAQ,IAAI,KAAK,UAAU;AAAA,MACzB,OAAO;AAAA,MACP,QAAQ
;AAAA,IACV,GAAG,MAAM,CAAC,CAAC;AACX,YAAQ,KAAK,CAAC;AAAA,EAChB;AAGA,MAAI,UAAU,QAAQ;AACtB,MAAI,SAAS,QAAQ,UAAU;AAE/B,MAAI,CAAC,SAAS;AACZ,iBAAa,kDAAkD,OAAO,OAAO;AAC7E,UAAM,UAAU,MAAM,WAAW,eAAe;AAEhD,QAAI,SAAS;AACX,gBAAU,QAAQ;AAClB,eAAS,QAAQ;AACjB,mBAAa,gBAAgB,OAAO;AAAA,IACtC,OAAO;AACL,mBAAa,iEAAiE;AAC9E,cAAQ,IAAI,UAAU;AACtB,cAAQ,IAAI,wDAAwD;AACpE,cAAQ,IAAI,oCAAoC;AAChD,cAAQ,IAAI,iBAAiB;AAC7B,cAAQ,KAAK,CAAC;AAAA,IAChB;AAAA,EACF;AAGA,MAAI,UAAU,OAAO,SAAS;AAC5B,YAAQ,IAAI,WAAW;AACvB,YAAQ,IAAI,MAAM;AAAA,EACpB;AAEA,UAAQ,IAAI;AACZ,QAAM,YAAY,uBAAuB,4BAA4B;AACrE,YAAU,MAAM;AAGhB,QAAM,aAAa,kBAAkB;AACrC,MAAI,OAAO,SAAS;AAClB,iBAAa,gBAAgB,KAAK,UAAU,YAAY,MAAM,CAAC,CAAC,IAAI,IAAI;AAAA,EAC1E;AAEA,MAAI;AACF,QAAI,eAAe;AACnB,UAAM,SAAS,MAAM,qBAAqB,SAAS,QAAQ,QAAQ,YAAY;AAAA,MAC7E,qBAAqB,CAAC,SAAS;AAC7B,YAAI,cAAc;AAChB,oBAAU,KAAK;AACf,yBAAe;AAAA,QACjB;AAEA,kCAA0B,IAAI;AAAA,MAChC;AAAA,IACF,CAAC;AAED,QAAI,cAAc;AAChB,gBAAU,KAAK;AAAA,IACjB;AAEA,sBAAkB;AAClB,YAAQ,IAAI;AAGZ,QAAI,OAAO,cAAc,CAAC,QAAQ,SAAS;AACzC,sBAAgB,OAAO,UAAU;AAGjC,UAAI,OAAO,kBAAkB;AAC3B,cAAM,eAAe,MAAM,eAAe,OAAO,WAAW,OAAO;AAEnE,YAAI,aAAa,WAAW,OAAO;AACjC,gBAAM,WAAW,OAAO,WAAW,OAAO;AAAA,QAC5C,WAAW,aAAa,WAAW,QAAQ;AACzC,gBAAM,SAAS,MAAM,YAAY,OAAO,WAAW,OAAO;AAC1D,cAAI,QAAQ;AACV,kBAAM,WAAW,MAAM;AAAA,UACzB;AAAA,QACF;AAAA,MACF,OAAO;AAEL,cAAM,WAAW,OAAO,WAAW,OAAO;AAAA,MAC5C;AAAA,IACF;AAAA,EACF,SAAS,OAAO;AACd,cAAU,KAAK;AACf,sBAAkB;AAElB,UAAM,MAAM;AACZ,eAAW,8BAA8B,IAAI,OAAO,EAAE;AAEtD,QAAI,OAAO,WAAW,IAAI,OAAO;AAC/B,cAAQ,IAAI,IAAI,KAAK;AAAA,IACvB;AAEA,YAAQ,KAAK,CAAC;AAAA,EAChB;AACF,CAAC;AAEH,QAAQ,MAAM;","names":["existsSync","join","exec","promisify","execAsync","promisify","exec"]}
package/package.json
ADDED
@@ -0,0 +1,56 @@
+{
+  "name": "tf-ai",
+  "version": "0.1.1",
+  "description": "AI-powered terminal assistant that explains errors and suggests fixes",
+  "type": "module",
+  "main": "dist/index.js",
+  "bin": {
+    "tf-ai": "dist/index.js"
+  },
+  "keywords": [
+    "cli",
+    "ai",
+    "terminal",
+    "assistant"
+  ],
+  "author": "AspireOne",
+  "license": "MIT",
+  "repository": {
+    "type": "git",
+    "url": "git+https://github.com/AspireOne/thefuckai.git"
+  },
+  "bugs": {
+    "url": "https://github.com/AspireOne/thefuckai/issues"
+  },
+  "homepage": "https://github.com/AspireOne/thefuckai#readme",
+  "files": [
+    "dist",
+    "README.md",
+    "LICENSE"
+  ],
+  "engines": {
+    "node": ">=18"
+  },
+  "devDependencies": {
+    "@types/node": "^22.0.0",
+    "tsup": "^8.0.0",
+    "tsx": "^4.0.0",
+    "typescript": "^5.0.0"
+  },
+  "dependencies": {
+    "@ai-sdk/anthropic": "^3.0.15",
+    "@ai-sdk/google": "^3.0.10",
+    "@ai-sdk/openai": "^3.0.12",
+    "ai": "^6.0.39",
+    "chalk": "^5.0.0",
+    "commander": "^12.0.0",
+    "openai": "^4.0.0",
+    "ora": "^9.0.0",
+    "zod": "^4.3.5"
+  },
+  "dependencies" continue as published; "scripts": {
+    "dev": "tsx src/index.ts",
+    "build": "tsup",
+    "start": "node dist/index.js"
+  }
+}
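The manifest wires the CLI up through `"bin": { "tf-ai": "dist/index.js" }` and builds with `tsup`, but the tsup configuration itself is not part of this diff. A minimal config consistent with the published fields (ESM output, Node 18+, a source map alongside `dist/index.js`, and an executable entry point) might look like the sketch below; this is a hypothetical reconstruction, and the author's actual settings are unknown.

```ts
// tsup.config.ts — hypothetical sketch; the real config is not included in this package diff
import { defineConfig } from "tsup";

export default defineConfig({
  entry: ["src/index.ts"],                // matches the "dev": "tsx src/index.ts" script
  format: ["esm"],                        // "type": "module" in package.json
  target: "node18",                       // "engines": { "node": ">=18" }
  sourcemap: true,                        // the package ships dist/index.js.map
  clean: true,
  banner: { js: "#!/usr/bin/env node" },  // required for the "bin" entry to run as a CLI
});
```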