altimate-code 0.4.9 → 0.5.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +36 -0
- package/README.md +22 -60
- package/package.json +54 -14
- package/postinstall.mjs +35 -0
- package/skills/cost-report/SKILL.md +134 -0
- package/skills/data-viz/SKILL.md +135 -0
- package/skills/data-viz/references/component-guide.md +394 -0
- package/skills/dbt-analyze/SKILL.md +130 -0
- package/skills/dbt-analyze/references/altimate-dbt-commands.md +66 -0
- package/skills/dbt-analyze/references/lineage-interpretation.md +58 -0
- package/skills/dbt-develop/SKILL.md +151 -0
- package/skills/dbt-develop/references/altimate-dbt-commands.md +66 -0
- package/skills/dbt-develop/references/common-mistakes.md +49 -0
- package/skills/dbt-develop/references/incremental-strategies.md +118 -0
- package/skills/dbt-develop/references/layer-patterns.md +158 -0
- package/skills/dbt-develop/references/medallion-architecture.md +125 -0
- package/skills/dbt-develop/references/yaml-generation.md +90 -0
- package/skills/dbt-docs/SKILL.md +99 -0
- package/skills/dbt-docs/references/altimate-dbt-commands.md +66 -0
- package/skills/dbt-docs/references/documentation-standards.md +94 -0
- package/skills/dbt-test/SKILL.md +121 -0
- package/skills/dbt-test/references/altimate-dbt-commands.md +66 -0
- package/skills/dbt-test/references/custom-tests.md +59 -0
- package/skills/dbt-test/references/schema-test-patterns.md +103 -0
- package/skills/dbt-test/references/unit-test-guide.md +121 -0
- package/skills/dbt-troubleshoot/SKILL.md +187 -0
- package/skills/dbt-troubleshoot/references/altimate-dbt-commands.md +66 -0
- package/skills/dbt-troubleshoot/references/compilation-errors.md +57 -0
- package/skills/dbt-troubleshoot/references/runtime-errors.md +71 -0
- package/skills/dbt-troubleshoot/references/test-failures.md +95 -0
- package/skills/lineage-diff/SKILL.md +64 -0
- package/skills/pii-audit/SKILL.md +117 -0
- package/skills/query-optimize/SKILL.md +86 -0
- package/skills/schema-migration/SKILL.md +119 -0
- package/skills/sql-review/SKILL.md +118 -0
- package/skills/sql-translate/SKILL.md +68 -0
- package/skills/teach/SKILL.md +54 -0
- package/skills/train/SKILL.md +51 -0
- package/skills/training-status/SKILL.md +45 -0
package/CHANGELOG.md
CHANGED

@@ -5,6 +5,42 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.5.1] - 2026-03-19
+
+### Added
+
+- Simplified agent modes: 3 primary modes (`builder`, `analyst`, `plan`) replacing 7 — cleaner UX with focused roles (#282)
+- SQL write access control — `builder` prompts for approval on write queries, `analyst` blocks them entirely, destructive SQL (`DROP DATABASE`, `TRUNCATE`) hard-blocked (#282)
+- `core_failure` telemetry with PII-safe input signatures — captures tool failures with masked SQL literals and redacted secrets (#245)
+- `peerDependencies` for database drivers in published npm packages (#273)
+- Comprehensive docs restructuring with new Changelog, Getting Started, and Tools reference pages (#284)
+
+### Fixed
+
+- Replace `escapeSqlString` with parameterized query binds in `finops/schema` modules (#277)
+- Driver error messages now suggest `npm install` instead of `bun add` (#273)
+- System prompt traced only once per session to avoid duplication (#287)
+
+### Changed
+
+- Bump `@altimateai/altimate-core` to 0.2.5 — adds Rust-side failure telemetry with PII masking
+- Removed 5 agent prompts: `executive`, `migrator`, `researcher`, `trainer`, `validator` (#282)
+- README cleanup and updated branding (#288)
+
+## [0.5.0] - 2026-03-18
+
+### Added
+
+- Smooth streaming mode for TUI response rendering (#281)
+- Ship builtin skills to customers via `postinstall` (#279)
+- `/configure-claude` and `/configure-codex` built-in commands (#235)
+
+### Fixed
+
+- Brew formula stuck at v0.3.1 — version normalization in publish pipeline (#286)
+- Harden auth field handling for all warehouse drivers (#271)
+- Suppress console logging that corrupts TUI display (#269)
+
 ## [0.4.9] - 2026-03-18
 
 ### Added
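The `escapeSqlString` replacement noted under Fixed (#277) follows the standard bind-parameter pattern; here is a minimal sketch of the before/after, with hypothetical function and query names rather than the actual `finops/schema` code:

```javascript
// Hypothetical before/after for the escapeSqlString -> bind-parameter change (#277).
// Before: the value is escaped and spliced into the SQL text.
function tablesInSchemaUnsafe(schema) {
  const escaped = schema.replace(/'/g, "''"); // escapeSqlString-style quoting
  return `SELECT table_name FROM information_schema.tables WHERE table_schema = '${escaped}'`;
}

// After: the SQL text stays constant and the value travels as a bind,
// so the driver (pg-style $1 placeholders here) handles quoting entirely.
function tablesInSchemaSafe(schema) {
  return {
    text: "SELECT table_name FROM information_schema.tables WHERE table_schema = $1",
    values: [schema],
  };
}
```

The bind form also lets the warehouse cache one query plan for every value of `schema`.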
package/README.md
CHANGED

@@ -7,19 +7,16 @@
 
 **The open-source data engineering harness.**
 
-The intelligence layer for data engineering AI —
+The intelligence layer for data engineering AI — 100+ deterministic tools for SQL analysis,
 column-level lineage, dbt, FinOps, and warehouse connectivity across every major cloud platform.
 
 Run standalone in your terminal, embed underneath Claude Code or Codex, or integrate
 into CI pipelines and orchestration DAGs. Precision data tooling for any LLM.
 
-[![npm core](https://img.shields.io/npm/v/%40altimateai%2Faltimate-core)](https://www.npmjs.com/package/@altimateai/altimate-core)
-[![npm CLI](https://img.shields.io/npm/v/%40altimateai%2Faltimate-code)](https://www.npmjs.com/package/@altimateai/altimate-code)
+[![npm version](https://img.shields.io/npm/v/altimate-code)](https://www.npmjs.com/package/altimate-code)
 [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](./LICENSE)
-[![Docs](https://img.shields.io/badge/docs-GitHub%20Pages-blue)](https://altimateai.github.io/altimate-code)
+[![Slack](https://img.shields.io/badge/Slack-join-blue?logo=slack)](https://altimate.studio/join-agentic-data-engineering-slack)
+[![Docs](https://img.shields.io/badge/docs-altimate.sh-blue)](https://docs.altimate.sh)
 
 </div>
 
@@ -29,7 +26,7 @@ into CI pipelines and orchestration DAGs. Precision data tooling for any LLM.
 
 ```bash
 # npm (recommended)
-npm install -g
+npm install -g altimate-code
 
 # Homebrew
 brew install AltimateAI/tap/altimate-code
@@ -43,8 +40,6 @@ altimate # Launch the TUI
 /connect # Interactive setup — choose your provider and enter your API key
 ```
 
-> **No API key?** Select **Codex** in the `/connect` menu — it's built-in and requires no setup.
-
 Or set an environment variable directly:
 ```bash
 export ANTHROPIC_API_KEY=your_key # Anthropic Claude
@@ -58,7 +53,9 @@ altimate /discover
 
 `/discover` auto-detects dbt projects, warehouse connections (from `~/.dbt/profiles.yml`, Docker, environment variables), and installed tools (dbt, sqlfluff, airflow, dagster, and more). Skip this and start building — you can always run it later.
 
-> **
+> **Headless / scripted usage:** `altimate --yolo` auto-approves all permission prompts. Not recommended with live warehouse connections.
+
+> **Zero additional setup.** One command install.
 
 ## Why a specialized harness?
 
@@ -90,7 +87,7 @@ no hallucinated SQL advice, no guessing at schema, no missed PII.
 - **FinOps** — credit consumption, expensive query detection, warehouse right-sizing, idle resource cleanup
 - **PII Detection** — 15 categories, 30+ regex patterns, enforced pre-execution
 
-**Works seamlessly with Claude Code and Codex.** altimate is the data engineering tool layer — use it standalone in your terminal, or mount it as the harness underneath whatever AI agent you already run. The two are complementary.
+**Works seamlessly with Claude Code and Codex.** Use `/configure-claude` or `/configure-codex` to set up integration in one step. altimate is the data engineering tool layer — use it standalone in your terminal, or mount it as the harness underneath whatever AI agent you already run. The two are complementary.
 
 altimate is a fork of [OpenCode](https://github.com/anomalyco/opencode) rebuilt for data teams. Model-agnostic — bring your own LLM or run locally with Ollama.
 
@@ -146,23 +143,19 @@ Teach your AI teammate project-specific patterns, naming conventions, and best p
 
 ## Agent Modes
 
-Each
+Each mode has scoped permissions, tool access, and SQL write-access control.
 
-|
+| Mode | Role | Access |
 |---|---|---|
-| **Builder** | Create dbt models, SQL pipelines, and data transformations | Full read/write |
-| **Analyst** | Explore data, run SELECT queries, and generate insights | Read-only enforced |
-| **
-| **Migrator** | Cross-warehouse SQL translation, schema migration, dialect conversion | Read/write for migrations |
-| **Researcher** | Deep-dive analysis, documentation research, and knowledge extraction | Read-only |
-| **Trainer** | Teach project-specific patterns, naming conventions, and best practices | Read + write training data |
-| **Executive** | Business-audience summaries — translates findings into revenue, cost, and compliance impact | Read-only |
+| **Builder** | Create dbt models, SQL pipelines, and data transformations | Full read/write (write SQL prompts for approval; `DROP DATABASE`/`DROP SCHEMA`/`TRUNCATE` hard-blocked) |
+| **Analyst** | Explore data, run SELECT queries, FinOps analysis, and generate insights | Read-only enforced (SELECT only, no file writes) |
+| **Plan** | Outline an approach before acting | Minimal (read files only, no SQL or bash) |
 
-> **New to altimate?** Start with **Analyst mode** — it's read-only and safe to run against production connections.
+> **New to altimate?** Start with **Analyst mode** — it's read-only and safe to run against production connections. Need specialized workflows (validation, migration, research)? Create [custom agent modes](https://docs.altimate.sh).
 
 ## Supported Warehouses
 
-Snowflake · BigQuery · Databricks · PostgreSQL · Redshift · DuckDB · MySQL · SQL Server
+Snowflake · BigQuery · Databricks · PostgreSQL · Redshift · DuckDB · MySQL · SQL Server · Oracle · SQLite
 
 First-class support with schema indexing, query execution, and metadata introspection. SSH tunneling available for secure connections.
 
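The three-mode access policy in the table above can be pictured as a small gate over each statement; this is a sketch under assumed names and simplified keyword matching, not the shipped implementation:

```javascript
// Sketch of per-mode SQL gating (hypothetical names; the real CLI's checks are richer).
const DESTRUCTIVE = /^\s*(DROP\s+DATABASE|DROP\s+SCHEMA|TRUNCATE)\b/i;
const WRITE = /^\s*(INSERT|UPDATE|DELETE|MERGE|CREATE|ALTER|DROP)\b/i;

// Returns "block" | "ask" | "allow" for a statement in a given mode.
function gateSql(mode, sql) {
  if (DESTRUCTIVE.test(sql)) return "block"; // hard-blocked in every mode
  if (!WRITE.test(sql)) return "allow";      // plain reads always run
  if (mode === "builder") return "ask";      // write SQL prompts for approval
  return "block";                            // analyst and plan never write
}
```

The "ask" result is where the TUI's approval prompt would hang off in practice.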
@@ -172,46 +165,13 @@ Model-agnostic — bring your own provider or run locally.
 
 Anthropic · OpenAI · Google Gemini · Google Vertex AI · Amazon Bedrock · Azure OpenAI · Mistral · Groq · DeepInfra · Cerebras · Cohere · Together AI · Perplexity · xAI · OpenRouter · Ollama · GitHub Copilot
 
-> **No API key?** **Codex** is a built-in provider with no key required. Select it via `/connect` to start immediately.
-
 ## Skills
 
 altimate ships with built-in skills for every common data engineering task — type `/` in the TUI to browse available skills and get autocomplete. No memorization required.
 
-## Architecture
-
-```
-altimate (TypeScript CLI)
-        |
-@altimateai/altimate-core (napi-rs → Rust)
-  SQL analysis, lineage, PII, safety — 45 functions, ~2ms per call
-        |
-Native Node.js drivers
-  10 warehouses: Snowflake, BigQuery, PostgreSQL, Databricks,
-  Redshift, MySQL, SQL Server, Oracle, DuckDB, SQLite
-```
-
-The CLI handles AI interactions, TUI, and tool orchestration. SQL analysis is powered by the Rust-based `@altimateai/altimate-core` engine via napi-rs bindings (no Python required). Database connectivity uses native Node.js drivers with lazy loading.
-
-**No Python dependency**: All 73 tool methods run natively in TypeScript. No pip, venv, or Python installation needed.
-
-**dbt-first**: When working in a dbt project, the CLI automatically uses dbt's connection from `profiles.yml` — no separate warehouse configuration needed.
-
-### Monorepo structure
-
-```
-packages/
-  altimate-code/  TypeScript CLI (main entry point)
-  drivers/        Shared database drivers (10 warehouses)
-  dbt-tools/      dbt integration (TypeScript)
-  plugin/         Plugin system
-  sdk/            SDKs (includes VS Code extension)
-  util/           Shared utilities
-```
-
 ## Community & Contributing
 
-- **Slack**: [
+- **Slack**: [Join Slack](https://altimate.studio/join-agentic-data-engineering-slack) — Real-time chat for questions, showcases, and feature discussion
 - **Issues**: [GitHub Issues](https://github.com/AltimateAI/altimate-code/issues) — Bug reports and feature requests
 - **Discussions**: [GitHub Discussions](https://github.com/AltimateAI/altimate-code/discussions) — Long-form questions and proposals
 - **Security**: See [SECURITY.md](./SECURITY.md) for responsible disclosure
@@ -220,10 +180,12 @@ Contributions welcome — docs, SQL rules, warehouse connectors, and TUI improve
 
 **[Read CONTRIBUTING.md →](./CONTRIBUTING.md)**
 
-##
+## Changelog
 
-- **v0.
-- **v0.4.
+- **v0.5.0** (March 2026) — smooth streaming mode, builtin skills via postinstall, `/configure-claude` and `/configure-codex` commands, warehouse auth hardening
+- **v0.4.9** (March 2026) — Snowflake auth overhaul (all auth methods), dbt tool regression fixes, parallel CI builds
+- **v0.4.2** (March 2026) — yolo mode, Python engine elimination (all-native TypeScript), tool consolidation, path sandboxing hardening, altimate-dbt CLI, unscoped npm package
+- **v0.4.0** (March 2026) — data visualization skill, 100+ tools, training system
 - **v0.3.x** — [See full changelog →](CHANGELOG.md)
 
 ## License
package/package.json
CHANGED

@@ -14,23 +14,63 @@
   "scripts": {
     "postinstall": "bun ./postinstall.mjs || node ./postinstall.mjs"
   },
-  "version": "
+  "version": "0.5.1",
   "license": "MIT",
   "dependencies": {
-    "@altimateai/altimate-core": "^0.2.
+    "@altimateai/altimate-core": "^0.2.5"
   },
   "optionalDependencies": {
-    "@altimateai/altimate-code-linux-x64": "
-    "@altimateai/altimate-code-windows-arm64": "
-    "@altimateai/altimate-code-linux-arm64-musl": "
-    "@altimateai/altimate-code-darwin-x64": "
-    "@altimateai/altimate-code-windows-x64": "
-    "@altimateai/altimate-code-linux-x64-musl": "
-    "@altimateai/altimate-code-darwin-x64-baseline": "
-    "@altimateai/altimate-code-linux-x64-baseline-musl": "
-    "@altimateai/altimate-code-linux-x64-baseline": "
-    "@altimateai/altimate-code-linux-arm64": "
-    "@altimateai/altimate-code-darwin-arm64": "
-    "@altimateai/altimate-code-windows-x64-baseline": "
+    "@altimateai/altimate-code-linux-x64": "0.5.1",
+    "@altimateai/altimate-code-windows-arm64": "0.5.1",
+    "@altimateai/altimate-code-linux-arm64-musl": "0.5.1",
+    "@altimateai/altimate-code-darwin-x64": "0.5.1",
+    "@altimateai/altimate-code-windows-x64": "0.5.1",
+    "@altimateai/altimate-code-linux-x64-musl": "0.5.1",
+    "@altimateai/altimate-code-darwin-x64-baseline": "0.5.1",
+    "@altimateai/altimate-code-linux-x64-baseline-musl": "0.5.1",
+    "@altimateai/altimate-code-linux-x64-baseline": "0.5.1",
+    "@altimateai/altimate-code-linux-arm64": "0.5.1",
+    "@altimateai/altimate-code-darwin-arm64": "0.5.1",
+    "@altimateai/altimate-code-windows-x64-baseline": "0.5.1"
+  },
+  "peerDependencies": {
+    "pg": ">=8",
+    "snowflake-sdk": ">=1",
+    "@google-cloud/bigquery": ">=8",
+    "@databricks/sql": ">=1",
+    "mysql2": ">=3",
+    "mssql": ">=11",
+    "oracledb": ">=6",
+    "duckdb": ">=1",
+    "better-sqlite3": ">=11"
+  },
+  "peerDependenciesMeta": {
+    "pg": {
+      "optional": true
+    },
+    "snowflake-sdk": {
+      "optional": true
+    },
+    "@google-cloud/bigquery": {
+      "optional": true
+    },
+    "@databricks/sql": {
+      "optional": true
+    },
+    "mysql2": {
+      "optional": true
+    },
+    "mssql": {
+      "optional": true
+    },
+    "oracledb": {
+      "optional": true
+    },
+    "duckdb": {
+      "optional": true
+    },
+    "better-sqlite3": {
+      "optional": true
+    }
   }
 }
package/postinstall.mjs
CHANGED

@@ -115,6 +115,39 @@ function printWelcome(version) {
   out(bot)
 }
 
+function copyDirRecursive(src, dst) {
+  fs.mkdirSync(dst, { recursive: true })
+  for (const entry of fs.readdirSync(src, { withFileTypes: true })) {
+    const srcPath = path.join(src, entry.name)
+    const dstPath = path.join(dst, entry.name)
+    if (entry.isDirectory()) {
+      copyDirRecursive(srcPath, dstPath)
+    } else {
+      fs.copyFileSync(srcPath, dstPath)
+    }
+  }
+}
+
+/**
+ * Copy bundled skills to ~/.altimate/builtin/ on every install/upgrade.
+ * The entire directory is wiped and replaced so each release is the single
+ * source of truth. Intentionally separate from ~/.altimate/skills/ which users own.
+ */
+function copySkillsToAltimate() {
+  try {
+    const skillsSrc = path.join(__dirname, "skills")
+    if (!fs.existsSync(skillsSrc)) return // skills not in package (shouldn't happen)
+
+    const builtinDst = path.join(os.homedir(), ".altimate", "builtin")
+
+    // Full wipe-and-replace — each release owns this directory entirely
+    if (fs.existsSync(builtinDst)) fs.rmSync(builtinDst, { recursive: true, force: true })
+    copyDirRecursive(skillsSrc, builtinDst)
+  } catch {
+    // Non-fatal — skills can be installed manually
+  }
+}
+
 /**
  * Write a marker file so the CLI can show a welcome/upgrade banner on first run.
  * npm v7+ silences postinstall stdout, so the CLI reads this marker at startup instead.
@@ -144,6 +177,7 @@ async function main() {
     // On Windows, the .exe is already included in the package and bin field points to it
     // No postinstall setup needed
     if (version) writeUpgradeMarker(version)
+    copySkillsToAltimate()
     return
   }
 
@@ -161,6 +195,7 @@ async function main() {
     // Write marker only — npm v7+ suppresses all postinstall output.
     // The CLI picks up the marker and shows the welcome box on first run.
     if (version) writeUpgradeMarker(version)
+    copySkillsToAltimate()
   } catch (error) {
     console.error("Failed to setup altimate-code binary:", error.message)
     process.exit(1)
package/skills/cost-report/SKILL.md
ADDED

@@ -0,0 +1,134 @@
+---
+name: cost-report
+description: Analyze Snowflake query costs and identify optimization opportunities
+---
+
+# Cost Report
+
+## Requirements
+**Agent:** any (read-only analysis)
+**Tools used:** sql_execute, sql_analyze, finops_analyze_credits, finops_expensive_queries, finops_warehouse_advice, finops_unused_resources, finops_query_history
+
+Analyze Snowflake warehouse query costs, identify the most expensive queries, detect anti-patterns, and recommend optimizations.
+
+## Workflow
+
+1. **Query SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY** for the top 20 most expensive queries by credits used:
+
+   ```sql
+   SELECT
+     query_id,
+     query_text,
+     user_name,
+     warehouse_name,
+     query_type,
+     credits_used_cloud_services,
+     bytes_scanned,
+     rows_produced,
+     total_elapsed_time,
+     execution_status,
+     start_time
+   FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
+   WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
+     AND execution_status = 'SUCCESS'
+     AND credits_used_cloud_services > 0
+   ORDER BY credits_used_cloud_services DESC
+   LIMIT 20;
+   ```
+
+   Use `sql_execute` to run this query against the connected Snowflake warehouse.
+
+2. **Group and summarize** the results by:
+   - **User**: Which users are driving the most cost?
+   - **Warehouse**: Which warehouses consume the most credits?
+   - **Query type**: SELECT vs INSERT vs CREATE TABLE AS SELECT vs MERGE, etc.
+
+   Present each grouping as a markdown table.
+
+3. **Analyze the top offenders** - For each of the top 10 most expensive queries:
+   - Run `sql_analyze` on the query text to detect anti-patterns (SELECT *, missing LIMIT, cartesian products, correlated subqueries, etc.)
+   - Summarize anti-patterns found and their severity
+
+4. **Classify each query into a cost tier**:
+
+   | Tier | Credits | Label | Action |
+   |------|---------|-------|--------|
+   | 1 | < $0.01 | Cheap | No action needed |
+   | 2 | $0.01 - $1.00 | Moderate | Review if frequent |
+   | 3 | $1.00 - $100.00 | Expensive | Optimize or review warehouse sizing |
+   | 4 | > $100.00 | Dangerous | Immediate review required |
+
+5. **Warehouse analysis** - Run `finops_warehouse_advice` to check if warehouses used by the top offenders are right-sized.
+
+6. **Unused resource detection** - Run `finops_unused_resources` to find:
+   - **Stale tables**: Tables not accessed in the last 30+ days (candidates for archival/drop)
+   - **Idle warehouses**: Warehouses with no query activity (candidates for suspension/removal)
+
+   Include findings in the report under a "Waste Detection" section.
+
+7. **Query history enrichment** - Run `finops_query_history` to fetch recent execution patterns:
+   - Identify frequently-run expensive queries (high frequency × high cost = top optimization target)
+   - Find queries that could benefit from result caching or materialization
+
+8. **Output the final report** as a structured markdown document:
+
+   ```
+   # Snowflake Cost Report (Last 30 Days)
+
+   ## Summary
+   - Total credits consumed: X
+   - Number of unique queries: Y
+   - Most expensive query: Z credits
+
+   ## Cost by User
+   | User | Total Credits | Query Count | Avg Credits/Query |
+   |------|--------------|-------------|-------------------|
+
+   ## Cost by Warehouse
+   | Warehouse | Total Credits | Query Count | Avg Credits/Query |
+   |-----------|--------------|-------------|-------------------|
+
+   ## Cost by Query Type
+   | Query Type | Total Credits | Query Count | Avg Credits/Query |
+   |------------|--------------|-------------|-------------------|
+
+   ## Top 10 Expensive Queries (Detailed Analysis)
+
+   ### Query 1 (X credits) - DANGEROUS
+   **User:** user_name | **Warehouse:** wh_name | **Type:** SELECT
+   **Anti-patterns found:**
+   - SELECT_STAR (warning): Query uses SELECT * ...
+   - MISSING_LIMIT (info): ...
+
+   **Optimization suggestions:**
+   1. Select only needed columns
+   2. Add LIMIT clause
+   3. Consider partitioning strategy
+
+   **Cost tier:** Tier 1 (based on credits used)
+
+   ...
+
+   ## Waste Detection
+   ### Unused Tables
+   | Table | Last Accessed | Size | Recommendation |
+   |-------|--------------|------|----------------|
+
+   ### Idle Warehouses
+   | Warehouse | Last Query | Size | Recommendation |
+   |-----------|-----------|------|----------------|
+
+   ## Recommendations
+   1. Top priority optimizations
+   2. Warehouse sizing suggestions
+   3. Unused resource cleanup
+   4. Scheduling recommendations
+   ```
+
+## Usage
+
+The user invokes this skill with:
+- `/cost-report` -- Analyze the last 30 days
+- `/cost-report 7` -- Analyze the last 7 days (adjust the DATEADD interval)
+
+Use the tools: `sql_execute`, `sql_analyze`, `finops_analyze_credits`, `finops_expensive_queries`, `finops_warehouse_advice`, `finops_unused_resources`, `finops_query_history`.
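The skill's step 4 tier table maps directly onto a small classification helper; a sketch assuming the cost is already expressed in dollars (hypothetical function name, not part of the skill):

```javascript
// Sketch of the cost-tier table from the cost-report skill (input: cost in dollars).
function costTier(dollars) {
  if (dollars < 0.01) return { tier: 1, label: "Cheap", action: "No action needed" };
  if (dollars <= 1.0) return { tier: 2, label: "Moderate", action: "Review if frequent" };
  if (dollars <= 100.0) return { tier: 3, label: "Expensive", action: "Optimize or review warehouse sizing" };
  return { tier: 4, label: "Dangerous", action: "Immediate review required" };
}
```

A deterministic cutoff like this keeps tier labels consistent across report runs, whoever (or whatever) generates them.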
package/skills/data-viz/SKILL.md
ADDED

@@ -0,0 +1,135 @@
+---
+name: data-viz
+description: >
+  Build modern, interactive data visualizations and dashboards using code-based
+  component libraries (shadcn/ui, Recharts, Tremor, Nivo, D3, Victory, visx).
+  Use this skill whenever the user asks to visualize data, build dashboards,
+  create analytics views, chart metrics, tell a data story, build a reporting
+  interface, create KPI cards, plot graphs, or explore a dataset — even if they
+  mention PowerBI, Tableau, Streamlit, Metabase, Looker, Grafana, or similar
+  tools. Also trigger when the user says "make a dashboard", "show me the data",
+  "chart this", "visualize trends", "build an analytics page", "data story", or
+  anything involving turning raw data into interactive visual interfaces. If the
+  task involves presenting data visually — this is the skill. Always prefer
+  building a real, interactive, code-based UI over exporting to or recommending
+  a BI platform.
+---
+
+# AI-First Data Visualization
+
+## Philosophy
+
+Build production-quality interactive data interfaces with modern component libraries — no vendor lock-in, embeddable anywhere. When no tool is specified, build code-first. When the user explicitly names a BI tool, use it — only suggest code-first if they ask for options or hit a technical blocker.
+
+## Technology Stack
+
+Full API patterns & code: `references/component-guide.md`
+
+### Framework Priority
+
+1. **React + Tailwind** — Default when JSX/TSX supported
+2. **HTML + CSS + Vanilla JS** — Fallback (use D3 or Chart.js)
+3. **Python (Plotly/Dash)** — Python-only environments only
+
+### Library Selection
+
+| Library | Best For |
+|---------|----------|
+| **shadcn/ui charts** | Default first choice — general dashboards, most chart types |
+| **Recharts** | Line, bar, area, composed, radar — fine-grained control |
+| **Tremor** | KPI cards, metric displays, full dashboard layouts |
+| **Nivo** | Heatmaps, treemaps, choropleth, calendar, Sankey, funnel |
+| **visx** | Bespoke custom viz — D3-level control with React |
+| **D3.js** | Force-directed graphs, DAGs, maps — maximum flexibility |
+| **Victory** | When animation quality matters most |
+
+**Supporting**: Tailwind CSS · Radix UI · Framer Motion · Lucide React · date-fns · Papaparse · lodash
+
+## Building a Visualization
+
+### Step 1: Understand the Data Story
+
+Before code, identify: **What question does the data answer?** Who is the audience (exec → KPIs only, analyst → drill-down, public → narrative)? **What's the ONE key insight?** Design around it.
+
+### Step 2: Choose Chart Type
+
+| Data Relationship | Chart Type | Library |
+|---|---|---|
+| Trend over time | Line, Area | shadcn/Recharts |
+| Category comparison | Bar (horizontal if many) | shadcn/Recharts |
+| Part of whole | Donut, Treemap | shadcn/Nivo |
+| Distribution | Histogram, Box, Violin | Nivo/visx |
+| Correlation | Scatter, Bubble | Recharts/visx |
+| Geographic | Choropleth, Dot map | Nivo/D3 |
+| Hierarchical | Treemap, Sunburst | Nivo |
+| Flow / Process | Sankey, Funnel | Nivo/D3 |
+| Single KPI | Metric card, Gauge, Sparkline | Tremor/shadcn |
+| Multi-metric overview | Dashboard grid of cards | Tremor + shadcn |
+| Ranking | Horizontal bar, Bar list | Tremor |
+| Column/model lineage | Force-directed DAG | D3 |
+| Pipeline dependencies | Hierarchical tree, DAG | D3/Nivo |
+| Multi-dimensional quality | Radar/Spider | Recharts |
+| Activity density over time | Calendar heatmap | Nivo |
+| Incremental change breakdown | Waterfall | Recharts (custom) |
+
+### Step 3: Build the Interface
+
+Start from this layout — remove what the data doesn't need:
+
+```
+┌─────────────────────────────────────────┐
+│ Header: Title + Description + Date Range│
+├─────────────────────────────────────────┤
+│ KPI Row: 3-5 metric cards + sparklines  │
+├─────────────────────────────────────────┤
+│ Primary Visualization (largest chart)   │
+├──────────────────┬──────────────────────┤
+│ Secondary Chart  │ Supporting Chart/Tbl │
+├──────────────────┴──────────────────────┤
+│ Detail Table (sortable, filterable)     │
+└─────────────────────────────────────────┘
+```
+
+A single insight might just be one chart with a headline and annotation. Scale complexity to audience.
+
+### Step 4: Design Principles
+
+- **Data-ink ratio**: Remove chartjunk — unnecessary gridlines, redundant labels, decorative borders
+- **Color with purpose**: Encode meaning (red=bad, green=good, blue=neutral). Max 5-7 colors. Single-hue gradient for sequential data
+- **Typography hierarchy**: Title → subtitle (muted) → axis labels (small) → data labels
+- **Responsive**: `min-h-[VALUE]` on all charts. Grid stacks on mobile
+- **Animation**: Entry transitions only, `duration-300` to `duration-500`. Never continuous
+- **Accessibility**: `aria-label` on charts, WCAG AA contrast, don't rely on color alone
+
+### Step 5: Interactivity & Annotations
+
+**Priority**: Tooltips (every chart) → Filtering → Sorting → Drill-down → Cross-filtering → Export → Annotations
+
+**Annotations** turn charts into stories. Mark: inflection points, threshold crossings (amber), external events (indigo/red), anomalies (red), achievements (green). **Limit 3 per chart.** Implementation: `references/component-guide.md` → Annotation Patterns.
+
+### Step 6: Tell the Story
+
+- **Headline states insight**: "Revenue grew 23% QoQ, driven by enterprise" — not "Q3 Revenue Chart"
+- **Annotate key moments** directly on chart
+- **Contextual comparisons**: vs. prior period, vs. target, vs. benchmark
+- **Progressive disclosure**: Overview first — detail on demand
+
+## Environment-Specific Guidance
+
+| Environment | Approach |
+|---|---|
+| **Claude Artifacts** | React (JSX), single file, default export. Available: `recharts`, `lodash`, `d3`, `lucide-react`, shadcn via `@/components/ui/*`, Tailwind |
+| **Claude Code / Terminal** | Vite + React + Tailwind. Add shadcn/ui + Recharts. Structure: `src/components/charts/`, `src/components/cards/`, `src/data/` |
+| **Python / Jupyter** | Plotly for charts, Plotly Dash for dashboards |
+| **Cursor / Bolt / other IDEs** | Match existing framework. Prefer shadcn/ui if present |
+
+## Anti-Patterns
+
+- Screenshot/static charts — build interactive components
+- Defaulting to BI tools unprompted — build code-first when no tool specified
+- Default matplotlib — always customize in Python
+- Rainbow palettes — use deliberate, meaningful colors
+- 3D charts — almost never appropriate
+- Pie charts > 5 slices — use horizontal bar
+- Unlabeled dual y-axes — use two separate charts
+- Truncated bar axes — always start at zero