@bonnard/cli 0.2.5 → 0.2.7

package/README.md ADDED
@@ -0,0 +1,85 @@
+ # @bonnard/cli
+
+ The Bonnard CLI (`bon`) takes you from zero to a deployed semantic layer in minutes. Define metrics in YAML, validate locally, deploy, and query — from your terminal or AI coding agent.
+
+ ## Install
+
+ ```bash
+ npm install -g @bonnard/cli
+ ```
+
+ Requires Node.js 20+.
+
+ ## Quick start
+
+ ```bash
+ bon init                         # Create project structure + agent templates
+ bon datasource add --demo        # Add demo dataset (no warehouse needed)
+ bon validate                     # Check syntax
+ bon login                        # Authenticate with Bonnard
+ bon deploy -m "Initial deploy"   # Deploy to Bonnard
+ ```
+
+ ## Commands
+
+ | Command | Description |
+ |---------|-------------|
+ | `bon init` | Create project structure and AI agent templates |
+ | `bon login` | Authenticate with Bonnard |
+ | `bon logout` | Remove stored credentials |
+ | `bon whoami` | Show current login status |
+ | `bon datasource add` | Add a data source (interactive) |
+ | `bon datasource add --demo` | Add read-only demo dataset |
+ | `bon datasource add --from-dbt` | Import from dbt profiles |
+ | `bon datasource list` | List configured data sources |
+ | `bon datasource remove <name>` | Remove a data source |
+ | `bon validate` | Validate cube and view YAML |
+ | `bon deploy -m "message"` | Deploy to Bonnard |
+ | `bon deployments` | List deployment history |
+ | `bon diff <id>` | View changes in a deployment |
+ | `bon annotate <id>` | Add context to deployment changes |
+ | `bon query '{"measures":["orders.count"]}'` | Query the semantic layer (JSON) |
+ | `bon query "SELECT ..." --sql` | Query the semantic layer (SQL) |
+ | `bon mcp` | MCP setup instructions for AI agents |
+ | `bon mcp test` | Test MCP server connectivity |
+ | `bon docs [topic]` | Browse modeling documentation |
+ | `bon docs --search "joins"` | Search documentation |
+
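The two `bon query` rows in the table above accept either a JSON query or raw SQL. A minimal sketch of each form, assuming the demo `orders` cube from the quick start; the SQL body and any field names beyond `orders.count` are illustrative, not prescribed by the CLI:

```bash
# JSON query: measure name taken from the command table above
bon query '{"measures":["orders.count"]}'

# SQL query against the semantic layer (table and column names are assumptions)
bon query "SELECT COUNT(*) FROM orders" --sql
```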
+ ## Agent-ready from the start
+
+ `bon init` generates context files for your AI coding tools:
+
+ - **Claude Code** — `.claude/rules/` + get-started skill
+ - **Cursor** — `.cursor/rules/` with auto-apply frontmatter
+ - **Codex** — `AGENTS.md` + skills folder
+
+ Your agent understands Bonnard's modeling language from the first prompt.
+
+ ## Project structure
+
+ After `bon init`:
+
+ ```
+ my-project/
+ ├── bon.yaml               # Project configuration
+ ├── bonnard/
+ │   ├── cubes/             # Cube definitions (measures, dimensions, joins)
+ │   └── views/             # View definitions (curated query interfaces)
+ └── .bon/                  # Local config (gitignored)
+     └── datasources.yaml   # Data source credentials
+ ```
+
+ ## CI/CD
+
+ ```bash
+ bon deploy --ci -m "CI deploy"
+ ```
+
+ Non-interactive mode for pipelines. Datasources are synced automatically.
+
+ ## Documentation
+
+ - [Getting Started](https://docs.bonnard.dev/docs/getting-started)
+ - [CLI Reference](https://docs.bonnard.dev/docs/cli)
+ - [Modeling Guide](https://docs.bonnard.dev/docs/modeling/cubes)
+ - [Querying](https://docs.bonnard.dev/docs/querying)
package/dist/bin/bon.mjs CHANGED
@@ -2698,11 +2698,11 @@ async function showOverview(client) {
  console.log(pc.dim(" bon metabase explore collections"));
  console.log(pc.dim(" bon metabase explore cards"));
  console.log(pc.dim(" bon metabase explore dashboards"));
- console.log(pc.dim(" bon metabase explore card <id>"));
- console.log(pc.dim(" bon metabase explore dashboard <id>"));
- console.log(pc.dim(" bon metabase explore database <id>"));
- console.log(pc.dim(" bon metabase explore table <id>"));
- console.log(pc.dim(" bon metabase explore collection <id>"));
+ console.log(pc.dim(" bon metabase explore card <id-or-name>"));
+ console.log(pc.dim(" bon metabase explore dashboard <id-or-name>"));
+ console.log(pc.dim(" bon metabase explore database <id-or-name>"));
+ console.log(pc.dim(" bon metabase explore table <id-or-name>"));
+ console.log(pc.dim(" bon metabase explore collection <id-or-name>"));
  }
  async function showDatabases(client) {
  const databases = await client.getDatabases();
@@ -2778,7 +2778,7 @@ async function showCards(client) {
  console.log();
  }
  if (models.length === 0 && metrics.length === 0 && questions.length === 0) console.log(pc.dim(" No cards found."));
- console.log(pc.dim("View details: bon metabase explore card <id>"));
+ console.log(pc.dim("View details: bon metabase explore card <id-or-name>"));
  }
  async function showCardDetail(client, id) {
  const card = await client.getCard(id);
@@ -2835,7 +2835,7 @@ async function showDashboards(client) {
  console.log(` ${pc.dim(padColumn("ID", 6))}${pc.dim("NAME")}`);
  for (const d of active) console.log(` ${padColumn(String(d.id), 6)}${d.name}`);
  console.log();
- console.log(pc.dim("View details: bon metabase explore dashboard <id>"));
+ console.log(pc.dim("View details: bon metabase explore dashboard <id-or-name>"));
  }
  async function showDashboardDetail(client, id) {
  const [dashboard, allCards] = await Promise.all([client.getDashboard(id), client.getCards()]);
@@ -2895,7 +2895,7 @@ async function showDatabaseDetail(client, id) {
  }
  console.log();
  }
- console.log(pc.dim("View table fields: bon metabase explore table <id>"));
+ console.log(pc.dim("View table fields: bon metabase explore table <id-or-name>"));
  }
  function classifyFieldType(field) {
  const bt = field.base_type || "";
@@ -3008,8 +3008,105 @@ async function showCollectionDetail(client, id) {
  console.log();
  }
  if (cardItems.length === 0 && dashboardItems.length === 0) console.log(pc.dim(" No items in this collection."));
- console.log(pc.dim("View card SQL: bon metabase explore card <id>"));
- console.log(pc.dim("View dashboard: bon metabase explore dashboard <id>"));
+ console.log(pc.dim("View card SQL: bon metabase explore card <id-or-name>"));
+ console.log(pc.dim("View dashboard: bon metabase explore dashboard <id-or-name>"));
+ }
+ function isNumericId(value) {
+ return /^\d+$/.test(value);
+ }
+ function showDisambiguation(resource, matches) {
+ console.error(pc.yellow(`Multiple ${resource}s match that name:\n`));
+ for (const m of matches) console.log(` ${padColumn(String(m.id), 8)}${m.label}`);
+ console.log();
+ console.log(pc.dim(`Use the numeric ID to be specific: bon metabase explore ${resource} <id>`));
+ process.exit(1);
+ }
+ async function resolveCardId(client, input) {
+ if (isNumericId(input)) return parseInt(input, 10);
+ const cards = await client.getCards();
+ const needle = input.toLowerCase();
+ const matches = cards.filter((c) => c.name?.toLowerCase().includes(needle));
+ if (matches.length === 0) {
+ console.error(pc.red(`No card found matching "${input}"`));
+ process.exit(1);
+ }
+ if (matches.length === 1) return matches[0].id;
+ showDisambiguation("card", matches.map((c) => ({
+ id: c.id,
+ label: c.name
+ })));
+ }
+ async function resolveDashboardId(client, input) {
+ if (isNumericId(input)) return parseInt(input, 10);
+ const dashboards = await client.getDashboards();
+ const needle = input.toLowerCase();
+ const matches = dashboards.filter((d) => d.name?.toLowerCase().includes(needle));
+ if (matches.length === 0) {
+ console.error(pc.red(`No dashboard found matching "${input}"`));
+ process.exit(1);
+ }
+ if (matches.length === 1) return matches[0].id;
+ showDisambiguation("dashboard", matches.map((d) => ({
+ id: d.id,
+ label: d.name
+ })));
+ }
+ async function resolveDatabaseId(client, input) {
+ if (isNumericId(input)) return parseInt(input, 10);
+ const databases = await client.getDatabases();
+ const needle = input.toLowerCase();
+ const matches = databases.filter((d) => d.name?.toLowerCase().includes(needle));
+ if (matches.length === 0) {
+ console.error(pc.red(`No database found matching "${input}"`));
+ process.exit(1);
+ }
+ if (matches.length === 1) return matches[0].id;
+ showDisambiguation("database", matches.map((d) => ({
+ id: d.id,
+ label: `${d.name} (${d.engine})`
+ })));
+ }
+ async function resolveTableId(client, input) {
+ if (isNumericId(input)) return parseInt(input, 10);
+ const databases = await client.getDatabases();
+ const needle = input.toLowerCase();
+ const matches = [];
+ for (const db of databases) {
+ const meta = await client.getDatabaseMetadata(db.id);
+ for (const t of meta.tables) {
+ if (t.visibility_type === "hidden" || t.visibility_type === "retired") continue;
+ if (t.name.toLowerCase().includes(needle)) matches.push({
+ id: t.id,
+ name: t.name,
+ schema: t.schema,
+ dbName: db.name
+ });
+ }
+ }
+ if (matches.length === 0) {
+ console.error(pc.red(`No table found matching "${input}"`));
+ process.exit(1);
+ }
+ if (matches.length === 1) return matches[0].id;
+ showDisambiguation("table", matches.map((m) => ({
+ id: m.id,
+ label: `${m.dbName} / ${m.schema}.${m.name}`
+ })));
+ }
+ async function resolveCollectionId(client, input) {
+ if (isNumericId(input)) return parseInt(input, 10);
+ const collections = await client.getCollections();
+ const needle = input.toLowerCase();
+ const matches = collections.filter((c) => typeof c.id === "number" && c.name?.toLowerCase().includes(needle));
+ if (matches.length === 0) {
+ console.error(pc.red(`No collection found matching "${input}"`));
+ process.exit(1);
+ }
+ if (matches.length === 1) return matches[0].id;
+ showDisambiguation("collection", matches.map((c) => ({
+ id: c.id,
+ label: c.name
+ })));
  }
  const RESOURCES = [
  "databases",
@@ -3047,71 +3144,41 @@ async function metabaseExploreCommand(resource, id) {
  case "dashboards":
  await showDashboards(client);
  break;
- case "card": {
+ case "card":
  if (!id) {
- console.error(pc.red("Card ID required: bon metabase explore card <id>"));
+ console.error(pc.red("Usage: bon metabase explore card <id-or-name>"));
  process.exit(1);
  }
- const cardId = parseInt(id, 10);
- if (isNaN(cardId)) {
- console.error(pc.red(`Invalid card ID: ${id}`));
- process.exit(1);
- }
- await showCardDetail(client, cardId);
+ await showCardDetail(client, await resolveCardId(client, id));
  break;
- }
- case "dashboard": {
+ case "dashboard":
  if (!id) {
- console.error(pc.red("Dashboard ID required: bon metabase explore dashboard <id>"));
+ console.error(pc.red("Usage: bon metabase explore dashboard <id-or-name>"));
  process.exit(1);
  }
- const dashId = parseInt(id, 10);
- if (isNaN(dashId)) {
- console.error(pc.red(`Invalid dashboard ID: ${id}`));
- process.exit(1);
- }
- await showDashboardDetail(client, dashId);
+ await showDashboardDetail(client, await resolveDashboardId(client, id));
  break;
- }
- case "database": {
+ case "database":
  if (!id) {
- console.error(pc.red("Database ID required: bon metabase explore database <id>"));
- process.exit(1);
- }
- const dbId = parseInt(id, 10);
- if (isNaN(dbId)) {
- console.error(pc.red(`Invalid database ID: ${id}`));
+ console.error(pc.red("Usage: bon metabase explore database <id-or-name>"));
  process.exit(1);
  }
- await showDatabaseDetail(client, dbId);
+ await showDatabaseDetail(client, await resolveDatabaseId(client, id));
  break;
- }
- case "table": {
+ case "table":
  if (!id) {
- console.error(pc.red("Table ID required: bon metabase explore table <id>"));
+ console.error(pc.red("Usage: bon metabase explore table <id-or-name>"));
  process.exit(1);
  }
- const tableId = parseInt(id, 10);
- if (isNaN(tableId)) {
- console.error(pc.red(`Invalid table ID: ${id}`));
- process.exit(1);
- }
- await showTableDetail(client, tableId);
+ await showTableDetail(client, await resolveTableId(client, id));
  break;
- }
- case "collection": {
+ case "collection":
  if (!id) {
- console.error(pc.red("Collection ID required: bon metabase explore collection <id>"));
+ console.error(pc.red("Usage: bon metabase explore collection <id-or-name>"));
  process.exit(1);
  }
- const colId = parseInt(id, 10);
- if (isNaN(colId)) {
- console.error(pc.red(`Invalid collection ID: ${id}`));
- process.exit(1);
- }
- await showCollectionDetail(client, colId);
+ await showCollectionDetail(client, await resolveCollectionId(client, id));
  break;
- }
  }
  } catch (err) {
  if (err instanceof MetabaseApiError) {
@@ -3454,10 +3521,10 @@ function buildReport(data) {
  report += `5. **Collection Structure** → Map collections to views (one view per business domain)\n`;
  report += `6. **Table Inventory** → Use field classification (dims/measures/time) to build each cube\n\n`;
  report += `Drill deeper with:\n`;
- report += `- \`bon metabase explore table <id>\` — field types and classification\n`;
- report += `- \`bon metabase explore card <id>\` — SQL and columns\n`;
- report += `- \`bon metabase explore collection <id>\` — cards in a collection\n`;
- report += `- \`bon metabase explore database <id>\` — schemas and tables\n\n`;
+ report += `- \`bon metabase explore table <id-or-name>\` — field types and classification\n`;
+ report += `- \`bon metabase explore card <id-or-name>\` — SQL and columns\n`;
+ report += `- \`bon metabase explore collection <id-or-name>\` — cards in a collection\n`;
+ report += `- \`bon metabase explore database <id-or-name>\` — schemas and tables\n\n`;
  report += `## Summary\n\n`;
  report += `| Metric | Count |\n|--------|-------|\n`;
  report += `| Databases | ${databases.length} |\n`;
@@ -3497,7 +3564,7 @@ function buildReport(data) {
  report += "```\n\n";
  report += `## Top ${topCards.length} Cards by Activity\n\n`;
  report += `Ranked by view count, weighted by recency. Cards not used in the last ${INACTIVE_MONTHS} months are penalized 90%.\n`;
- report += `Use \`bon metabase explore card <id>\` to view SQL and column details for any card.\n\n`;
+ report += `Use \`bon metabase explore card <id-or-name>\` to view SQL and column details for any card.\n\n`;
  report += `| Rank | ID | Views | Last Used | Active | Pattern | Type | Display | Collection | Name |\n`;
  report += `|------|----|-------|-----------|--------|---------|------|---------|------------|------|\n`;
  for (let i = 0; i < topCards.length; i++) {
@@ -3555,8 +3622,18 @@ function buildReport(data) {
  const sortedRefs = Array.from(globalTableRefs.entries()).sort((a, b) => b[1] - a[1]).slice(0, 20);
  report += `## Most Referenced Tables (from SQL)\n\n`;
  report += `Tables most frequently referenced in FROM/JOIN clauses across all cards.\n\n`;
- report += `| Table | References |\n|-------|------------|\n`;
- for (const [table, count] of sortedRefs) report += `| ${table} | ${count} |\n`;
+ const tableIdByRef = /* @__PURE__ */ new Map();
+ for (const db of databases) for (const t of db.tables) {
+ const qualified = `${t.schema}.${t.name}`.toLowerCase();
+ const unqualified = t.name.toLowerCase();
+ if (!tableIdByRef.has(qualified)) tableIdByRef.set(qualified, t.id);
+ if (!tableIdByRef.has(unqualified)) tableIdByRef.set(unqualified, t.id);
+ }
+ report += `| ID | Table | References |\n|------|-------|------------|\n`;
+ for (const [table, count] of sortedRefs) {
+ const tid = tableIdByRef.get(table);
+ report += `| ${tid ?? "—"} | ${table} | ${count} |\n`;
+ }
  report += `\n`;
  }
  const fieldIdLookup = /* @__PURE__ */ new Map();
@@ -3656,9 +3733,9 @@ function buildReport(data) {
  report += `### ${db.name} / ${schema} (${referenced.length} referenced`;
  if (unreferenced > 0) report += `, ${unreferenced} unreferenced`;
  report += `)\n\n`;
- report += `| Table | Fields | Dims | Measures | Time | Refs |\n`;
- report += `|-------|--------|------|----------|------|------|\n`;
- for (const s of referenced) report += `| ${s.name} | ${s.fieldCount} | ${s.dimensions} | ${s.measures} | ${s.timeDimensions} | ${s.refCount} |\n`;
+ report += `| ID | Table | Fields | Dims | Measures | Time | Refs |\n`;
+ report += `|------|-------|--------|------|----------|------|------|\n`;
+ for (const s of referenced) report += `| ${s.id} | ${s.name} | ${s.fieldCount} | ${s.dimensions} | ${s.measures} | ${s.timeDimensions} | ${s.refCount} |\n`;
  if (unreferenced > 0) skippedTables += unreferenced;
  report += `\n`;
  }
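The resolver functions added above let every `bon metabase explore <resource>` subcommand accept either a numeric ID or a name fragment. A few hypothetical invocations to illustrate the behavior implemented in this diff (the names are made up; matching is a case-insensitive substring search, and ambiguous matches print a disambiguation table and exit):

```bash
bon metabase explore card 42                  # numeric IDs are passed through unchanged
bon metabase explore card "monthly revenue"   # resolved against the card list by name
bon metabase explore table orders             # searched across all non-hidden tables in every database
bon metabase explore dashboard sales          # multiple matches list their IDs and abort
```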
@@ -52,13 +52,24 @@
  - [syntax.references](syntax.references) - Reference columns, members, and cubes
  - [syntax.context-variables](syntax.context-variables) - CUBE, FILTER_PARAMS, COMPILE_CONTEXT

- ## Workflow
+ ## Querying

- - [workflow](workflow) - End-to-end development workflow
- - [workflow.validate](workflow.validate) - Validate cubes and views locally
- - [workflow.deploy](workflow.deploy) - Deploy to Bonnard
- - [workflow.query](workflow.query) - Query the deployed semantic layer
- - [workflow.mcp](workflow.mcp) - Connect AI agents via MCP
+ - [querying](querying) - Query your deployed semantic layer
+ - [querying.mcp](querying.mcp) - Connect AI agents via MCP
+ - [querying.rest-api](querying.rest-api) - REST API and SQL query reference
+ - [querying.sdk](querying.sdk) - TypeScript SDK for custom apps
+
+ ## CLI
+
+ - [cli](cli) - CLI commands and development workflow
+ - [cli.deploy](cli.deploy) - Deploy to Bonnard
+ - [cli.validate](cli.validate) - Validate cubes and views locally
+
+ ## Other
+
+ - [governance](governance) - User and group-level permissions
+ - [catalog](catalog) - Browse your data model in the browser
+ - [slack-teams](slack-teams) - AI agents in team chat (coming soon)

  ## Quick Reference

@@ -0,0 +1,36 @@
+ # Catalog
+
+ > Browse and understand your data model — no code required.
+
+ The Bonnard catalog gives everyone on your team a live view of your semantic layer. Browse cubes, views, measures, and dimensions from the browser. Understand what data is available before writing a single query.
+
+ ## What you can explore
+
+ - **Cubes and Views** — See every deployed source with field counts at a glance
+ - **Measures** — Aggregation type, SQL expression, format (currency, percentage), and description
+ - **Dimensions** — Data type, time granularity options, and custom metadata
+ - **Segments** — Pre-defined filters available for queries
+
+ ## Field-level detail
+
+ Click any field to see exactly how it's calculated:
+
+ - **SQL expression** — The underlying query logic
+ - **Type and format** — How the field is aggregated and displayed
+ - **Origin cube** — Which cube a view field traces back to
+ - **Referenced fields** — Dependencies this field relies on
+ - **Custom metadata** — Tags, labels, and annotations set by your data team
+
+ ## Built for business users
+
+ The catalog is designed for anyone who needs to understand the data, not just engineers. No YAML, no terminal, no warehouse credentials. Browse the schema, read descriptions, and know exactly what to ask your AI agent for.
+
+ ## Coming soon
+
+ - **Relationship visualization** — An interactive visual map showing how cubes connect through joins and shared dimensions
+ - **Impact analysis** — Understand which views and measures are affected when you change a cube, before you deploy
+
+ ## See Also
+
+ - [views](views) — How to create curated views for your team
+ - [cubes.public](cubes.public) — Control which cubes are visible
@@ -0,0 +1,193 @@
+ # Deploy
+
+ > Deploy your cubes and views to the Bonnard platform using the CLI. Once deployed, your semantic layer is queryable via the REST API, MCP for AI agents, and connected BI tools.
+
+ ## Overview
+
+ The `bon deploy` command uploads your cubes and views to Bonnard, making them available for querying via the API. It validates and tests connections before deploying, and creates a versioned deployment with change detection.
+
+ ## Usage
+
+ ```bash
+ bon deploy -m "description of changes"
+ ```
+
+ A `-m` message is **required** — it describes what changed in this deployment.
+
+ ### Flags
+
+ | Flag | Description |
+ |------|-------------|
+ | `-m "message"` | **Required.** Deployment description |
+ | `--ci` | Non-interactive mode |
+
+ Datasources are always synced automatically during deploy.
+
+ ### CI/CD
+
+ For automated pipelines, use `--ci` for non-interactive mode:
+
+ ```bash
+ bon deploy --ci -m "CI deploy"
+ ```
+
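For reference, a pipeline typically installs the CLI, validates, then deploys non-interactively. A minimal sketch of the job script under those assumptions; how credentials are provisioned on the runner is not covered here (see the CLI Reference), and `$CI_COMMIT_SHA` stands in for whatever commit variable your CI exposes:

```bash
# Illustrative CI job steps, not an official template
npm install -g @bonnard/cli
bon validate                                    # fail fast on schema errors
bon deploy --ci -m "CI deploy $CI_COMMIT_SHA"   # $CI_COMMIT_SHA: assumed pipeline variable
```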
+ ## Prerequisites
+
+ 1. **Logged in** — run `bon login` first
+ 2. **Valid cubes and views** — must pass `bon validate`
+ 3. **Working connections** — data sources must be accessible
+
+ ## What Happens
+
+ 1. **Validates** — checks cubes and views for errors
+ 2. **Tests connections** — verifies data source access
+ 3. **Uploads** — sends cubes and views to Bonnard
+ 4. **Detects changes** — compares against the previous deployment
+ 5. **Activates** — makes cubes and views available for queries
+
+ ## Example Output
+
+ ```
+ bon deploy -m "Add revenue metrics"
+
+ ✓ Validating...
+ ✓ bonnard/cubes/orders.yaml
+ ✓ bonnard/cubes/users.yaml
+ ✓ bonnard/views/orders_overview.yaml
+
+ ✓ Testing connections...
+ ✓ datasource "default" connected
+
+ ✓ Deploying to Bonnard...
+ Uploading 2 cubes, 1 view...
+
+ ✓ Deploy complete!
+
+ Changes:
+ + orders.total_revenue (measure)
+ + orders.avg_order_value (measure)
+ ~ orders.count (measure) — type changed
+
+ ⚠ 1 breaking change detected
+ ```
+
+ ## Change Detection
+
+ Every deployment is versioned. Bonnard automatically detects:
+
+ - **Added** — new cubes, views, measures, dimensions
+ - **Modified** — changes to type, SQL, format, description
+ - **Removed** — deleted fields (flagged as breaking)
+ - **Breaking changes** — removed measures/dimensions, type changes
+
+ ## Reviewing Deployments
+
+ After deploying, use these commands to review history and changes:
+
+ ### List deployments
+
+ ```bash
+ bon deployments        # Recent deployments
+ bon deployments --all  # Full history
+ ```
+
+ ### View changes in a deployment
+
+ ```bash
+ bon diff <deployment-id>             # All changes
+ bon diff <deployment-id> --breaking  # Breaking changes only
+ ```
+
+ ### Annotate changes
+
+ Add reasoning or context to deployment changes:
+
+ ```bash
+ bon annotate <deployment-id> --data '{"object": "note about why this changed"}'
+ ```
+
+ Annotations are visible in the schema catalog and help teammates understand why changes were made.
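For example, assuming deployment `42` changed `orders.count` (both the ID and the reason below are hypothetical), an annotation keyed by the changed object might look like:

```bash
bon annotate 42 --data '{"orders.count": "Switched aggregation to drop duplicate rows from the backfill"}'
```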
+
+ ## Deploy Flow
+
+ ```
+ bon deploy -m "message"
+
+ ├── 1. bon validate (must pass)
+
+ ├── 2. Test all datasource connections (must succeed)
+
+ ├── 3. Upload to Bonnard API
+ │      - cubes from bonnard/cubes/
+ │      - views from bonnard/views/
+ │      - datasource configs
+
+ ├── 4. Detect changes vs. previous deployment
+
+ └── 5. Activate deployment
+ ```
+
+ ## Error Handling
+
+ ### Validation Errors
+
+ ```
+ ✗ Validating...
+
+ bonnard/cubes/orders.yaml:15:5
+ error: Unknown measure type "counts"
+
+ Deploy aborted. Fix validation errors first.
+ ```
+
+ ### Connection Errors
+
+ ```
+ ✗ Testing connections...
+ ✗ datasource "analytics": Connection refused
+
+ Deploy aborted. Fix connection issues:
+ - Check credentials in .bon/datasources.yaml
+ - Verify network access to database
+ - Run: bon datasource add (to reconfigure)
+ ```
+
+ ### Auth Errors
+
+ ```
+ ✗ Not logged in.
+
+ Run: bon login
+ ```
+
+ ## What Gets Deployed
+
+ | Source | Deployed |
+ |--------|----------|
+ | `bonnard/cubes/*.yaml` | All cube definitions |
+ | `bonnard/views/*.yaml` | All view definitions |
+ | `.bon/datasources.yaml` | Connection configs (credentials encrypted) |
+ | `bon.yaml` | Project settings |
+
+ ## Deployment Behavior
+
+ - **Replaces** previous deployment (not additive)
+ - **All or nothing** — partial deploys don't happen
+ - **Instant** — changes take effect immediately
+ - **Versioned** — every deployment is tracked with changes
+
+ ## Best Practices
+
+ 1. **Always include a meaningful message** — helps teammates understand what changed
+ 2. **Validate first** — run `bon validate` before deploy
+ 3. **Test locally** — verify queries work before deploying
+ 4. **Use version control** — commit cubes and views before deploying
+ 5. **Review after deploy** — use `bon diff` to check for unintended breaking changes
+ 6. **Annotate breaking changes** — add context so consumers know what to update
+
+ ## See Also
+
+ - cli
+ - cli.validate
+ - cubes
+ - views