@bonnard/cli 0.2.8 → 0.2.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025-present Bonnard (meal-inc)
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/dist/bin/bon.mjs CHANGED
@@ -12,9 +12,9 @@ import crypto from "node:crypto";
  import { execFileSync } from "node:child_process";
  import { encode } from "@toon-format/toon";
 
- //#region rolldown:runtime
+ //#region \0rolldown/runtime.js
  var __defProp = Object.defineProperty;
- var __exportAll = (all, symbols) => {
+ var __exportAll = (all, no_symbols) => {
  let target = {};
  for (var name in all) {
  __defProp(target, name, {
@@ -22,7 +22,7 @@ var __exportAll = (all, symbols) => {
  enumerable: true
  });
  }
- if (symbols) {
+ if (!no_symbols) {
  __defProp(target, Symbol.toStringTag, { value: "Module" });
  }
  return target;
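The `__exportAll` change above inverts the polarity of the second parameter: the old code attached `Symbol.toStringTag` only when `symbols` was truthy, while the new code attaches it unless `no_symbols` is set, making "tagged as Module" the default. A minimal sketch of the new behavior, reconstructed from the diff context (the hidden line between the two hunks is assumed to be a `get: all[name]` accessor, which is how bundler runtimes typically wire live bindings):

```javascript
// Reconstruction of the patched __exportAll (sketch, not the full bundled runtime).
const __defProp = Object.defineProperty;

const __exportAll = (all, no_symbols) => {
  let target = {};
  for (var name in all) {
    __defProp(target, name, {
      get: all[name], // assumption: live-binding getter, hidden between the two hunks
      enumerable: true,
    });
  }
  // Inverted flag: tag the namespace as "Module" by default; opt out with no_symbols.
  if (!no_symbols) {
    __defProp(target, Symbol.toStringTag, { value: "Module" });
  }
  return target;
};

const ns = __exportAll({ answer: () => 42 });
console.log(Object.prototype.toString.call(ns)); // "[object Module]"
console.log(ns.answer); // 42
```

With the old signature, callers had to pass a truthy flag to get the `Module` tag; after this patch, callers that pass nothing get it automatically.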
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@bonnard/cli",
- "version": "0.2.8",
+ "version": "0.2.9",
  "type": "module",
  "bin": {
  "bon": "./dist/bin/bon.mjs"
@@ -9,7 +9,7 @@
  "dist"
  ],
  "scripts": {
- "build": "tsdown src/bin/bon.ts --format esm --out-dir dist/bin && cp -r src/templates dist/ && mkdir -p dist/docs/topics dist/docs/schemas && cp ../content/index.md dist/docs/_index.md && cp ../content/overview.md ../content/getting-started.md dist/docs/topics/ && cp ../content/modeling/*.md dist/docs/topics/ && cp ../content/dashboards/*.md dist/docs/topics/",
+ "build": "tsdown src/bin/bon.ts --format esm --out-dir dist/bin && cp -r src/templates dist/ && mkdir -p dist/docs/topics dist/docs/schemas && cp ./content/index.md dist/docs/_index.md && cp ./content/overview.md ./content/getting-started.md dist/docs/topics/ && cp ./content/modeling/*.md dist/docs/topics/ && cp ./content/dashboards/*.md dist/docs/topics/",
  "dev": "tsdown src/bin/bon.ts --format esm --out-dir dist/bin --watch",
  "test": "vitest run"
  },
@@ -1,31 +0,0 @@
- # Catalog
-
- > Browse and understand your data model — no code required.
-
- The Bonnard catalog gives everyone on your team a live view of your semantic layer. Browse cubes, views, measures, and dimensions from the browser. Understand what data is available before writing a single query.
-
- ## What you can explore
-
- - **Cubes and Views** — See every deployed source with field counts at a glance
- - **Measures** — Aggregation type, SQL expression, format (currency, percentage), and description
- - **Dimensions** — Data type, time granularity options, and custom metadata
- - **Segments** — Pre-defined filters available for queries
-
- ## Field-level detail
-
- Click any field to see exactly how it's calculated:
-
- - **SQL expression** — The underlying query logic
- - **Type and format** — How the field is aggregated and displayed
- - **Origin cube** — Which cube a view field traces back to
- - **Referenced fields** — Dependencies this field relies on
- - **Custom metadata** — Tags, labels, and annotations set by your data team
-
- ## Built for business users
-
- The catalog is designed for anyone who needs to understand the data, not just engineers. No YAML, no terminal, no warehouse credentials. Browse the schema, read descriptions, and know exactly what to ask your AI agent for.
-
- ## See Also
-
- - [views](views) — How to create curated views for your team
- - [cubes.public](cubes.public) — Control which cubes are visible
@@ -1,59 +0,0 @@
- # CLI
-
- > Built for agent-first development by data engineers.
-
- The Bonnard CLI (`bon`) takes you from zero to a deployed semantic layer in minutes. Initialize a project, connect your warehouse, define metrics in YAML, validate locally, and deploy — all from your terminal or your AI coding agent.
-
- ## Agent-ready from the start
-
- `bon init` generates context files for your AI coding tools automatically:
-
- - **Claude Code** — `.claude/rules/` + get-started skill
- - **Cursor** — `.cursor/rules/` with auto-apply frontmatter
- - **Codex** — `AGENTS.md` + skills folder
-
- Your agent understands Bonnard's modeling language from the first prompt.
-
- ## Key commands
-
- ```bash
- bon init                                   # Scaffold project + agent context
- bon datasource add --demo                  # Add a demo warehouse instantly
- bon datasource add --from-dbt              # Import connections from dbt
- bon validate                               # Check YAML syntax
- bon deploy -m "description"                # Ship to production (message required)
- bon query '{"measures":["orders.count"]}'  # Test your semantic layer
- bon deployments                            # List deployment history
- bon diff <id>                              # View changes in a deployment
- bon annotate <id> --data '{...}'           # Add context to deployment changes
- bon mcp                                    # Get MCP setup instructions
- bon docs cubes.measures                    # Read modeling docs in terminal
- ```
-
- ## CI/CD ready
-
- Deploy from GitHub Actions, GitLab CI, or any pipeline:
-
- ```bash
- bon deploy --ci -m "CI deploy"
- ```
-
- Non-interactive mode. Datasources are synced automatically. Fails fast if anything is misconfigured.
-
- ## Deployment versioning
-
- Every deploy creates a versioned deployment with automatic change detection — added, modified, removed, and breaking changes are flagged. Review history with `bon deployments`, inspect changes with `bon diff`, and add context with `bon annotate`.
-
- ## Built-in documentation
-
- ```bash
- bon docs cubes.measures    # Read modeling docs in your terminal
- bon docs --search "joins"  # Search across all topics
- ```
-
- No context-switching. Learn and build in the same workflow.
-
- ## See Also
-
- - [workflow.deploy](workflow.deploy) — Deployment details
- - [workflow.validate](workflow.validate) — Validation reference
@@ -1,18 +0,0 @@
- # Context Graph (Coming Soon)
-
- > Visualize how your metrics connect. Coming soon.
-
- The Business Context Graph will provide an interactive visual map of your entire semantic layer — cubes, views, joins, and field dependencies — so you can understand impact before making changes.
-
- ## Planned capabilities
-
- - **Relationship visualization** — See how cubes connect through joins and shared dimensions
- - **Impact analysis** — Understand which views and measures are affected when you change a cube
- - **Field lineage** — Trace any metric back through its dependencies to the source table
- - **Search and filter** — Navigate large models by searching for specific fields or cubes
-
- ## Why it matters
-
- As your semantic layer grows, understanding the relationships between dozens of cubes and hundreds of fields becomes critical. The Context Graph gives your team a shared mental model of how metrics are defined and connected — reducing errors and speeding up development.
-
- Interested in early access? Reach out at [hello@bonnard.dev](mailto:hello@bonnard.dev).
@@ -1,83 +0,0 @@
- # Governance
-
- > Control who can see which views, columns, and rows in your semantic layer.
-
- Bonnard provides admin-managed data governance — control which views, columns, and rows each group of users can access. Policies are configured in the web UI and enforced automatically across MCP queries and the API. Changes take effect within one minute.
-
- ## How It Works
-
- ```
- Admin configures in web UI:
-   Groups → Views → Field/Row restrictions
-
- Enforced automatically:
-   MCP queries + API → only see what policies allow
- ```
-
- Governance uses **groups** as the unit of access. Each group has a set of **policies** that define which views its members can see, and optionally restrict specific columns or rows within those views.
-
- ## Groups
-
- Groups represent teams or roles in your organization — "Sales Team", "Finance", "Executive". Create and manage groups from the **Governance** page in the Bonnard dashboard.
-
- Each group has:
- - **Name** and optional description
- - **Color** for visual identification
- - **View access** — which views the group can query
- - **Members** — which users belong to the group
-
- Users can belong to multiple groups. Their effective access is the **union** of all group policies.
-
- ## View-Level Access (Level 1)
-
- The simplest control: toggle which views a group can see. Unchecked views are completely invisible to group members — they won't appear in `explore_schema` or be queryable.
-
- From the group detail page, check the views you want to grant access to and click **Save changes**. New policies default to "All fields" with no row filters.
-
- ## Field-Level Access (Level 2)
-
- Fine-tune which measures and dimensions a group can see within a view. Click the gear icon on any granted view to open the fine-tune dialog.
-
- Three modes:
- - **All fields** — full access to every measure and dimension (default)
- - **Only these** — whitelist specific fields; everything else is hidden
- - **All except** — blacklist specific fields; everything else is visible
-
- Hidden fields are removed from the schema — they don't appear in `explore_schema` and can't be used in queries.
-
- ## Row-Level Filters (Level 2)
-
- Restrict which rows a group can see. Add row filters in the fine-tune dialog to limit data by dimension values.
-
- For example, filter `traffic_source` to `equals B2B, Organic` so the group only sees rows where traffic_source is B2B or Organic. Multiple values in a single filter are OR'd (any match). Multiple separate filters are AND'd (all must match).
-
- Row filters are applied server-side on every query — users cannot bypass them.
-
- ## Members
-
- Assign users to groups from the **Members** tab. Each user shows which groups they belong to and a preview of their effective access (which views they can query, any field or row restrictions).
-
- Users without any group assignment see nothing — they must be added to at least one group to query governed views.
-
- ## How Policies Are Enforced
-
- Policies configured in the web UI are stored in Supabase and injected into the query engine at runtime. When a user queries via MCP or the API:
-
- 1. Their JWT is enriched with group memberships
- 2. The query engine loads policies for those groups
- 3. View visibility, field restrictions, and row filters are applied automatically
- 4. The user only sees data their policies allow
-
- No YAML changes are needed — governance is fully managed through the dashboard.
-
- ## Best Practices
-
- 1. **Start with broad access, then restrict** — give groups all views first, then fine-tune as needed
- 2. **Use groups for teams, not individuals** — easier to manage and audit
- 3. **Test with MCP** — after changing policies, query via MCP to verify the restrictions work as expected
- 4. **Review after schema deploys** — new views need to be added to group policies to become visible
-
- ## See Also
-
- - [features.mcp](features.mcp) — How AI agents query your semantic layer
- - [views](views) — Creating curated data views
@@ -1,48 +0,0 @@
- # MCP
-
- > Connect your preferred AI agent to your semantic layer.
-
- Bonnard exposes your semantic layer as a remote MCP server. Add one URL to your agent platform and it can explore your data model, run queries, and render charts — all through the Model Context Protocol.
-
- ![MCP chat with visualisations](/images/mcp-chat-demo.gif)
-
- ## Connect your agent
-
- Bonnard works with any MCP-compatible client:
-
- - **Claude Desktop** — Add as a custom connector
- - **ChatGPT** — Add via Settings > Apps (Pro/Plus)
- - **Cursor** — Add via Settings > MCP or `.cursor/mcp.json`
- - **Microsoft Copilot Studio** — Add as an MCP tool with OAuth 2.0 authentication
- - **VS Code / GitHub Copilot** — Add via Command Palette or `.vscode/mcp.json`
- - **Claude Code** — Add via `claude mcp add` or `.mcp.json`
- - **Windsurf** — Add via MCP config
- - **Gemini CLI** — Add via `.gemini/settings.json`
-
- One URL for all of them:
-
- ```
- https://mcp.bonnard.dev/mcp
- ```
-
- On first use, your browser opens to sign in — the agent receives a 30-day token automatically. No API keys, no config files, no secrets to rotate.
-
- ## Tools your agent gets
-
- | Tool | What it does |
- |------|-------------|
- | `explore_schema` | Browse cubes, views, and fields — or search by keyword |
- | `query` | Run structured queries with measures, dimensions, filters, and time grouping |
- | `sql_query` | Execute raw SQL for complex analysis (CTEs, UNIONs, custom calculations) |
- | `describe_field` | Inspect any field's SQL definition, type, format, and dependencies |
- | `visualize` | Render line, bar, pie, and KPI charts directly inside the conversation |
-
- ## Charts in chat
-
- The `visualize` tool renders interactive charts inline — auto-detected from your query shape. Charts support dark mode, currency and percentage formatting, and multi-series data.
-
- Ask "show me revenue by region this quarter" and get a formatted chart in your conversation, not a data dump.
-
- ## See Also
-
- - [workflow.mcp](workflow.mcp) — Step-by-step setup for each platform
@@ -1,15 +0,0 @@
- # Features
-
- > Everything you need to define, deploy, and query your semantic layer.
-
- Bonnard is a semantic layer platform built for AI-first analytics. Define your metrics once in YAML, deploy in seconds, and query from anywhere — your IDE, your AI agent, or your own apps.
-
- - **[MCP](features.mcp)** — Connect your preferred AI agent to your semantic layer
- - **[CLI](features.cli)** — Agent-first development workflow for data engineers
- - **[Semantic Layer](features.semantic-layer)** — Hosted, queryable metrics layer across all your warehouses
- - **[Catalog](features.catalog)** — Browse and understand your data model from the browser
- - **[SDK](features.sdk)** — Build custom data apps on top of Bonnard
- - **[Governance](features.governance)** — User and group-level permissions for your data
- - **[Context Graph](features.context-graph)** — Visual map of how your metrics connect (coming soon)
- - **[Slack & Teams](features.slack-teams)** — AI agents in your team chat (coming soon)
- - **[Deployment Versioning](workflow.deploy)** — Change detection, diff, and annotations for every deploy
@@ -1,53 +0,0 @@
- # SDK
-
- > Build custom data apps on top of your semantic layer.
-
- The Bonnard SDK (`@bonnard/sdk`) is a lightweight TypeScript client for querying your deployed semantic layer programmatically. Build dashboards, embedded analytics, internal tools, or data pipelines — all backed by your governed metrics.
-
- ## Quick start
-
- ```bash
- npm install @bonnard/sdk
- ```
-
- ```typescript
- import { createClient } from '@bonnard/sdk';
-
- const bonnard = createClient({
-   apiKey: 'your-api-key',
- });
-
- const result = await bonnard.query({
-   cube: 'orders',
-   measures: ['revenue', 'count'],
-   dimensions: ['status'],
-   timeDimension: {
-     dimension: 'created_at',
-     granularity: 'month',
-     dateRange: ['2025-01-01', '2025-12-31'],
-   },
- });
- ```
-
- ## Type-safe queries
-
- Full TypeScript support with inference. Measures, dimensions, filters, time dimensions, and sort orders are all typed. Query results include field annotations with titles and types.
-
- ```typescript
- const result = await bonnard.sql<OrderRow>(
-   `SELECT status, MEASURE(revenue) FROM orders GROUP BY 1`
- );
- // result.data is OrderRow[]
- ```
-
- ## What you can build
-
- - **Custom dashboards** — Query your semantic layer from Next.js, React, or any frontend
- - **Embedded analytics** — Add governed metrics to your product
- - **Data pipelines** — Consume semantic layer data in ETL workflows
- - **Internal tools** — Build admin panels backed by consistent metrics
-
- ## See Also
-
- - [features.semantic-layer](features.semantic-layer) — Platform overview
- - [workflow.query](workflow.query) — Query format reference
@@ -1,56 +0,0 @@
- # Semantic Layer
-
- > Define metrics once. Query from anywhere.
-
- Bonnard hosts your semantic layer so you don't have to. Define cubes and views in YAML, deploy with `bon deploy`, and query via JSON API, SQL, or MCP — no infrastructure to manage.
-
- ## Multi-warehouse
-
- Connect any combination of warehouses through a single semantic layer:
-
- - **PostgreSQL** — Direct TCP connection
- - **Redshift** — Cluster or serverless endpoint
- - **Snowflake** — Account-based authentication
- - **BigQuery** — GCP service account
- - **Databricks** — Token-based workspace connection
-
- Metrics from different warehouses are queried through the same API. Your consumers never need to know where the data lives.
-
- ## Two query formats
-
- **JSON API** — Structured queries with type-safe parameters:
-
- ```bash
- bon query '{
-   "measures": ["orders.revenue"],
-   "dimensions": ["orders.status"],
-   "timeDimensions": [{
-     "dimension": "orders.created_at",
-     "granularity": "month",
-     "dateRange": ["2025-01-01", "2025-12-31"]
-   }]
- }'
- ```
-
- **SQL** — Full Cube SQL syntax for complex analysis:
-
- ```bash
- bon query --sql "SELECT status, MEASURE(revenue) FROM orders GROUP BY 1"
- ```
-
- ## Fully managed
-
- Your models are stored securely and served from Bonnard's infrastructure. Each organization gets isolated query execution scoped by JWT — no shared data, no noisy neighbors.
-
- Deploy in seconds. Query in milliseconds.
-
- ## Built for AI agents
-
- Views and descriptions are the discovery API for AI agents. When an agent calls `explore_schema`, it sees view names and descriptions — that's all it has to decide where to query. Well-written descriptions with scope, disambiguation, and dimension values make agents accurate. See the design guide principles in the CLI (`/bonnard-design-guide`) for details.
-
- ## See Also
-
- - [workflow.query](workflow.query) — Query format reference
- - [cubes](cubes) — Cube modeling guide
- - [views](views) — View modeling guide
- - [features.governance](features.governance) — Access control for views and data
@@ -1,18 +0,0 @@
- # Slack & Teams (Coming Soon)
-
- > AI agents in your team chat. Coming soon.
-
- Bonnard will bring semantic layer queries directly into Slack and Microsoft Teams — so anyone on your team can ask questions about data without leaving the conversation.
-
- ## Planned capabilities
-
- - **Natural language queries** — Ask "what was revenue last month?" in a channel and get an answer with a chart
- - **Governed responses** — Every answer goes through your semantic layer, so metrics are always consistent
- - **Shared context** — Results posted in channels are visible to the whole team, not siloed in individual AI chats
- - **Proactive alerts** — Get notified when key metrics change beyond thresholds you define
-
- ## Why it matters
-
- Your team already lives in Slack and Teams. Instead of asking analysts or switching to a BI tool, anyone can get instant, governed answers right where they work. The same semantic layer that powers your AI agents and dashboards powers your team chat.
-
- Interested in early access? Reach out at [hello@bonnard.dev](mailto:hello@bonnard.dev).
@@ -1,193 +0,0 @@
- # Deploy
-
- > Deploy your cubes and views to the Bonnard platform using the CLI. Once deployed, your semantic layer is queryable via the REST API, MCP for AI agents, and connected BI tools.
-
- ## Overview
-
- The `bon deploy` command uploads your cubes and views to Bonnard, making them available for querying via the API. It validates and tests connections before deploying, and creates a versioned deployment with change detection.
-
- ## Usage
-
- ```bash
- bon deploy -m "description of changes"
- ```
-
- A `-m` message is **required** — it describes what changed in this deployment.
-
- ### Flags
-
- | Flag | Description |
- |------|-------------|
- | `-m "message"` | **Required.** Deployment description |
- | `--ci` | Non-interactive mode |
-
- Datasources are always synced automatically during deploy.
-
- ### CI/CD
-
- For automated pipelines, use `--ci` for non-interactive mode:
-
- ```bash
- bon deploy --ci -m "CI deploy"
- ```
-
- ## Prerequisites
-
- 1. **Logged in** — run `bon login` first
- 2. **Valid cubes and views** — must pass `bon validate`
- 3. **Working connections** — data sources must be accessible
-
- ## What Happens
-
- 1. **Validates** — checks cubes and views for errors
- 2. **Tests connections** — verifies data source access
- 3. **Uploads** — sends cubes and views to Bonnard
- 4. **Detects changes** — compares against the previous deployment
- 5. **Activates** — makes cubes and views available for queries
-
- ## Example Output
-
- ```
- bon deploy -m "Add revenue metrics"
-
- ✓ Validating...
-   ✓ bonnard/cubes/orders.yaml
-   ✓ bonnard/cubes/users.yaml
-   ✓ bonnard/views/orders_overview.yaml
-
- ✓ Testing connections...
-   ✓ datasource "default" connected
-
- ✓ Deploying to Bonnard...
-   Uploading 2 cubes, 1 view...
-
- ✓ Deploy complete!
-
- Changes:
-   + orders.total_revenue (measure)
-   + orders.avg_order_value (measure)
-   ~ orders.count (measure) — type changed
-
- ⚠ 1 breaking change detected
- ```
-
- ## Change Detection
-
- Every deployment is versioned. Bonnard automatically detects:
-
- - **Added** — new cubes, views, measures, dimensions
- - **Modified** — changes to type, SQL, format, description
- - **Removed** — deleted fields (flagged as breaking)
- - **Breaking changes** — removed measures/dimensions, type changes
-
- ## Reviewing Deployments
-
- After deploying, use these commands to review history and changes:
-
- ### List deployments
-
- ```bash
- bon deployments        # Recent deployments
- bon deployments --all  # Full history
- ```
-
- ### View changes in a deployment
-
- ```bash
- bon diff <deployment-id>             # All changes
- bon diff <deployment-id> --breaking  # Breaking changes only
- ```
-
- ### Annotate changes
-
- Add reasoning or context to deployment changes:
-
- ```bash
- bon annotate <deployment-id> --data '{"object": "note about why this changed"}'
- ```
-
- Annotations are visible in the schema catalog and help teammates understand why changes were made.
-
- ## Deploy Flow
-
- ```
- bon deploy -m "message"
-
- ├── 1. bon validate (must pass)
-
- ├── 2. Test all datasource connections (must succeed)
-
- ├── 3. Upload to Bonnard API
- │     - cubes from bonnard/cubes/
- │     - views from bonnard/views/
- │     - datasource configs
-
- ├── 4. Detect changes vs. previous deployment
-
- └── 5. Activate deployment
- ```
-
- ## Error Handling
-
- ### Validation Errors
-
- ```
- ✗ Validating...
-
- bonnard/cubes/orders.yaml:15:5
- error: Unknown measure type "counts"
-
- Deploy aborted. Fix validation errors first.
- ```
-
- ### Connection Errors
-
- ```
- ✗ Testing connections...
- ✗ datasource "analytics": Connection refused
-
- Deploy aborted. Fix connection issues:
- - Check credentials in .bon/datasources.yaml
- - Verify network access to database
- - Run: bon datasource add (to reconfigure)
- ```
-
- ### Auth Errors
-
- ```
- ✗ Not logged in.
-
- Run: bon login
- ```
-
- ## What Gets Deployed
-
- | Source | Deployed |
- |--------|----------|
- | `bonnard/cubes/*.yaml` | All cube definitions |
- | `bonnard/views/*.yaml` | All view definitions |
- | `.bon/datasources.yaml` | Connection configs (credentials encrypted) |
- | `bon.yaml` | Project settings |
-
- ## Deployment Behavior
-
- - **Replaces** previous deployment (not additive)
- - **All or nothing** — partial deploys don't happen
- - **Instant** — changes take effect immediately
- - **Versioned** — every deployment is tracked with changes
-
- ## Best Practices
-
- 1. **Always include a meaningful message** — helps teammates understand what changed
- 2. **Validate first** — run `bon validate` before deploy
- 3. **Test locally** — verify queries work before deploying
- 4. **Use version control** — commit cubes and views before deploying
- 5. **Review after deploy** — use `bon diff` to check for unintended breaking changes
- 6. **Annotate breaking changes** — add context so consumers know what to update
-
- ## See Also
-
- - workflow
- - workflow.validate
- - cubes
- - views
@@ -1,179 +0,0 @@
1
- # MCP
2
-
3
- > Connect AI agents like Claude, ChatGPT, and Cursor to your semantic layer using the Model Context Protocol. MCP gives agents governed access to your metrics and dimensions.
4
-
5
- ## Overview
6
-
7
- After deploying with `bon deploy`, AI agents can query your semantic layer through the Model Context Protocol (MCP). Bonnard's MCP server provides tools for exploring your data model and running queries.
8
-
9
- MCP is supported by Claude, ChatGPT, Cursor, Windsurf, VS Code, Gemini, and more.
10
-
11
- ## MCP URL
12
-
13
- ```
14
- https://mcp.bonnard.dev/mcp
15
- ```
16
-
17
- ## Setup
18
-
19
- ### Claude Desktop
20
-
21
- 1. Click the **+** button in the chat input, then select **Connectors > Manage connectors**
22
-
23
- ![Claude Desktop — Connectors menu in chat](/images/claude-chat-connectors.png)
24
-
25
- 2. Click **Add custom connector**
26
- 3. Enter a name (e.g. "Bonnard MCP") and the MCP URL: `https://mcp.bonnard.dev/mcp`
27
- 4. Click **Add**
28
-
29
- ![Claude Desktop — Add custom connector dialog](/images/claude-add-connector.png)
30
-
31
- Once added, enable the Bonnard connector in any chat via the **Connectors** menu.
32
-
33
- Remote MCP servers in Claude Desktop must be added through the Connectors UI, not the JSON config file.
34
-
35
- ### ChatGPT
36
-
37
- Custom MCP servers must be added in the browser at [chatgpt.com](https://chatgpt.com) — the desktop app does not support this.
38
-
39
- 1. Go to [chatgpt.com](https://chatgpt.com) in your browser
40
- 2. Open **Settings > Apps**
41
-
42
- ![ChatGPT — Settings > Apps](/images/chatgpt-apps.png)
43
-
44
- 3. Click **Advanced settings**, enable **Developer mode**, then click **Create app**
45
-
46
- ![ChatGPT — Advanced settings with Developer mode and Create app](/images/advanced-create-app-chatgpt.png)
47
-
48
- 4. Enter a name (e.g. "Bonnard MCP"), the MCP URL `https://mcp.bonnard.dev/mcp`, and select **OAuth** for authentication
49
- 5. Check the acknowledgement box and click **Create**
50
-
51
- ![ChatGPT — Create new app with MCP URL](/images/chatgpt-new-app.png)
52
-
53
- Once created, the Bonnard connector appears under **Enabled apps**:
54
-
55
- ![ChatGPT — Bonnard MCP available in chat](/images/chatgpt-chat-apps.png)
56
-
57
- Available on Pro and Plus plans.
58
-
59
- ### Cursor
60
-
61
- Open **Settings > MCP** and add the server URL, or add to `.cursor/mcp.json` in your project:
62
-
63
- ```json
64
- {
65
- "mcpServers": {
66
- "bonnard": {
67
- "url": "https://mcp.bonnard.dev/mcp"
68
- }
69
- }
70
- }
71
- ```
72
-
73
- On first use, your browser will open to sign in and authorize the connection.
74
-
75
- ### VS Code / GitHub Copilot
76
-
77
- Open the Command Palette and run **MCP: Add Server**, or add to `.vscode/mcp.json` in your project:
78
-
79
- ```json
80
- {
81
- "servers": {
82
- "bonnard": {
83
- "type": "http",
84
- "url": "https://mcp.bonnard.dev/mcp"
85
- }
86
- }
87
- }
88
- ```
89
-
90
- On first use, your browser will open to sign in and authorize the connection.
91
-
92
- ### Claude Code
93
-
94
- Run in your terminal:
95
-
96
- ```bash
97
- claude mcp add --transport http bonnard https://mcp.bonnard.dev/mcp
98
- ```
99
-
100
- Or add to `.mcp.json` in your project:
101
-
102
- ```json
103
- {
104
- "mcpServers": {
105
- "bonnard": {
106
- "type": "http",
107
- "url": "https://mcp.bonnard.dev/mcp"
108
- }
109
- }
110
- }
111
- ```
112
-
113
- ### Windsurf
114
-
115
- Open **Settings > Plugins > Manage plugins > View raw config**, or edit `~/.codeium/windsurf/mcp_config.json`:
116
-
117
- ```json
118
- {
119
- "mcpServers": {
120
- "bonnard": {
121
- "serverUrl": "https://mcp.bonnard.dev/mcp"
122
- }
123
- }
124
- }
125
- ```
126
-
127
- ### Gemini CLI
128
-
129
- Add to `.gemini/settings.json` in your project or `~/.gemini/settings.json` globally:
130
-
131
- ```json
132
- {
133
- "mcpServers": {
134
- "bonnard": {
135
- "url": "https://mcp.bonnard.dev/mcp"
136
- }
137
- }
138
- }
139
- ```
140
-
141
- ## Authentication
142
-
143
- MCP uses OAuth 2.0 with PKCE. When an agent first connects:
144
-
145
- 1. Agent discovers OAuth endpoints automatically
146
- 2. You are redirected to Bonnard to sign in and authorize
147
- 3. Agent receives an access token (valid for 30 days)
148
-
149
- No API keys or manual token management needed.
150
-
151
- ## Available Tools
152
-
153
- Once connected, AI agents can use these MCP tools:
154
-
155
- | Tool | Description |
156
- |------|-------------|
157
- | `explore_schema` | Discover views and cubes, list their measures, dimensions, and segments. Supports browsing a specific source by name or searching across all fields by keyword. |
158
- | `query` | Query the semantic layer with measures, dimensions, time dimensions, filters, segments, and pagination. |
159
- | `sql_query` | Execute raw SQL against the semantic layer using Cube SQL syntax with `MEASURE()` for aggregations. Use for CTEs, UNIONs, and custom calculations. |
160
- | `describe_field` | Get detailed metadata for a field — SQL expression, type, format, origin cube, and referenced fields. |
161
-
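- For example, an agent answering "orders by status" might invoke the `query` tool with arguments shaped like the REST query object (an illustrative tool-call payload; the exact envelope is defined by the MCP protocol, and the field names come from examples elsewhere in these docs):
-
- ```json
- {
-   "name": "query",
-   "arguments": {
-     "measures": ["orders.count"],
-     "dimensions": ["orders.status"]
-   }
- }
- ```
-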
162
- ## Testing
163
-
164
- ```bash
165
- # Verify the MCP server is reachable
166
- bon mcp test
167
-
168
- # View connection info
169
- bon mcp
170
- ```
171
-
172
- ## Managing Connections
173
-
174
- Active MCP connections can be viewed and revoked in the Bonnard dashboard under **MCP**.
175
-
176
- ## See Also
177
-
178
- - workflow.deploy
179
- - workflow.query
@@ -1,165 +0,0 @@
1
- # Workflow
2
-
3
- > The end-to-end workflow for building a semantic layer with Bonnard: validate your YAML definitions locally, deploy to the platform, and query your metrics via API or MCP.
4
-
5
- ## Overview
6
-
7
- Building a semantic layer with Bonnard follows a short, repeatable loop: initialize a project, connect data sources, define cubes and views, validate, and deploy.
8
-
9
- ## Quick Start
10
-
11
- ```bash
12
- # 1. Initialize project
13
- bon init
14
-
15
- # 2. Add a data source
16
- bon datasource add
17
-
18
- # 3. Create cubes in bonnard/cubes/ and views in bonnard/views/
19
-
20
- # 4. Validate
21
- bon validate
22
-
23
- # 5. Deploy to Bonnard
24
- bon deploy -m "Initial semantic layer"
25
- ```
26
-
27
- ## Project Structure
28
-
29
- After `bon init`, your project has:
30
-
31
- ```
32
- my-project/
33
- ├── bon.yaml # Project configuration
34
- ├── bonnard/ # Semantic layer definitions
35
- │ ├── cubes/ # Cube definitions
36
- │ │ └── orders.yaml
37
- │ └── views/ # View definitions
38
- │ └── orders_overview.yaml
39
- └── .bon/ # Local config (gitignored)
40
- └── datasources.yaml # Data source credentials
41
- ```
42
-
43
- ## Development Cycle
44
-
45
- ### 1. Define Cubes
46
-
47
- Create cubes that map to your database tables:
48
-
49
- ```yaml
50
- # bonnard/cubes/orders.yaml
51
- cubes:
52
- - name: orders
53
- sql_table: public.orders
54
-
55
- measures:
56
- - name: count
57
- type: count
58
-
59
- dimensions:
60
- - name: status
61
- type: string
62
- sql: status
63
- ```
64
-
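- Measures that aggregate a column take a `sql:` expression. A sketch extending the cube above (the `amount` column is an assumption about your table):
-
- ```yaml
-     measures:
-       - name: total_revenue
-         type: sum
-         sql: amount
- ```
-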
65
- ### 2. Define Views
66
-
67
- Create views that expose cubes to consumers:
68
-
69
- ```yaml
70
- # bonnard/views/orders_overview.yaml
71
- views:
72
- - name: orders_overview
73
- cubes:
74
- - join_path: orders
75
- includes:
76
- - count
77
- - status
78
- ```
79
-
80
- ### 3. Validate
81
-
82
- Check for syntax errors and schema violations:
83
-
84
- ```bash
85
- bon validate
86
- ```
87
-
88
- ### 4. Deploy
89
-
90
- Push cubes and views to Bonnard:
91
-
92
- ```bash
93
- bon deploy -m "Add orders cube and overview view"
94
- ```
95
-
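- Once deployed, the view from step 2 can be queried by name to spot-check the numbers (a sketch; the full query format is covered in the Query docs):
-
- ```bash
- bon query '{"measures": ["orders_overview.count"], "dimensions": ["orders_overview.status"]}'
- ```
-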
96
- ### 5. Review
97
-
98
- Check what changed in your deployment:
99
-
100
- ```bash
101
- bon deployments # List recent deployments
102
- bon diff <deployment-id> # View all changes
103
- bon diff <deployment-id> --breaking # Breaking changes only
104
- ```
105
-
106
- ## File Organization
107
-
108
- ### One Cube Per File
109
-
110
- ```
111
- bonnard/cubes/
112
- ├── orders.yaml
113
- ├── users.yaml
114
- ├── products.yaml
115
- └── line_items.yaml
116
- ```
117
-
118
- ### Related Cubes Together
119
-
120
- ```
121
- bonnard/cubes/
122
- ├── sales/
123
- │ ├── orders.yaml
124
- │ └── line_items.yaml
125
- ├── users/
126
- │ ├── users.yaml
127
- │ └── profiles.yaml
128
- └── products/
129
- └── products.yaml
130
- ```
131
-
132
- ## Best Practices
133
-
134
- 1. **Start from questions** — collect the most common questions your team asks, then build views that answer them. Don't just mirror your warehouse tables.
135
- 2. **Add filtered measures** — if a dashboard card has a WHERE clause beyond a date range, that filter should be a filtered measure. This is the #1 way to match real dashboard numbers.
136
- 3. **Write descriptions for agents** — descriptions are how AI agents choose which view and measure to use. Lead with scope, cross-reference related views, include dimension values.
137
- 4. **Validate often** — run `bon validate` after each change.
138
- 5. **Test with real questions** — after deploying, ask an AI agent via MCP the same questions your team asks. Check it picks the right view and measure.
139
- 6. **Iterate** — expect 2-4 rounds of deploying, testing with questions, and improving descriptions before agents reliably answer the top 10 questions.
140
-
141
- ## Commands Reference
142
-
143
- | Command | Description |
144
- |---------|-------------|
145
- | `bon init` | Create project structure |
146
- | `bon datasource add` | Add a data source |
147
- | `bon datasource add --demo` | Add demo dataset (no warehouse needed) |
148
- | `bon datasource add --from-dbt` | Import from dbt profiles |
149
- | `bon datasource list` | List configured sources |
150
- | `bon validate` | Check cube and view syntax |
151
- | `bon deploy -m "message"` | Deploy to Bonnard (message required) |
152
- | `bon deploy --ci` | Non-interactive deploy |
153
- | `bon deployments` | List deployment history |
154
- | `bon diff <id>` | View changes in a deployment |
155
- | `bon annotate <id>` | Add context to deployment changes |
156
- | `bon query '{...}'` | Query the semantic layer |
157
- | `bon mcp` | MCP setup instructions for AI agents |
158
- | `bon docs` | Browse documentation |
159
-
160
- ## See Also
161
-
162
- - workflow.validate
163
- - workflow.deploy
164
- - cubes
165
- - views
@@ -1,198 +0,0 @@
1
- # Query
2
-
3
- > Query your deployed semantic layer using the Bonnard REST API. Send JSON query objects or SQL strings to retrieve measures and dimensions with filtering, grouping, and time ranges.
4
-
5
- ## Overview
6
-
7
- After deploying with `bon deploy`, you can query the semantic layer using `bon query`. This tests that your cubes and views work correctly and returns data from your warehouse through Bonnard.
8
-
9
- ## Query Formats
10
-
11
- Bonnard supports two query formats:
12
-
13
- ### JSON Format (Default)
14
-
15
- The JSON format uses the REST API structure:
16
-
17
- ```bash
18
- bon query '{"measures": ["orders.count"]}'
19
-
20
- bon query '{
21
- "measures": ["orders.total_revenue"],
22
- "dimensions": ["orders.status"],
23
- "filters": [{
24
- "member": "orders.created_at",
25
- "operator": "inDateRange",
26
- "values": ["2024-01-01", "2024-12-31"]
27
- }]
28
- }'
29
- ```
30
-
31
- **JSON Query Properties:**
32
-
33
- | Property | Description |
34
- |----------|-------------|
35
- | `measures` | Array of measures to calculate (e.g., `["orders.count"]`) |
36
- | `dimensions` | Array of dimensions to group by (e.g., `["orders.status"]`) |
37
- | `filters` | Array of filter objects |
38
- | `timeDimensions` | Time-based grouping with granularity |
39
- | `segments` | Named filters defined in cubes |
40
- | `limit` | Max rows to return |
41
- | `offset` | Skip rows (pagination) |
42
- | `order` | Sort specification |
43
-
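- The `order`, `limit`, and `offset` properties combine for sorted pagination. A sketch (the object form of `order` is assumed from Cube's REST API; field names reuse earlier examples):
-
- ```bash
- bon query '{
-   "measures": ["orders.count"],
-   "dimensions": ["orders.city"],
-   "order": {"orders.count": "desc"},
-   "limit": 25,
-   "offset": 25
- }'
- ```
-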
44
- **Filter Operators:**
45
-
46
- | Operator | Use Case |
47
- |----------|----------|
48
- | `equals`, `notEquals` | Exact match |
49
- | `contains`, `notContains` | String contains |
50
- | `startsWith`, `endsWith` | String prefix/suffix |
51
- | `gt`, `gte`, `lt`, `lte` | Numeric comparison |
52
- | `inDateRange`, `beforeDate`, `afterDate` | Time filtering |
53
- | `set`, `notSet` | NULL checks |
54
-
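- Multiple filter objects are combined with AND (per Cube's REST API semantics). A sketch pairing two operators from the table (field names are illustrative):
-
- ```bash
- bon query '{
-   "measures": ["orders.count"],
-   "filters": [
-     {"member": "orders.status", "operator": "notEquals", "values": ["cancelled"]},
-     {"member": "orders.created_at", "operator": "afterDate", "values": ["2024-06-01"]}
-   ]
- }'
- ```
-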
55
- ### SQL Format
56
-
57
- The SQL format uses the SQL API, where cubes are tables:
58
-
59
- ```bash
60
- bon query --sql "SELECT status, MEASURE(count) FROM orders GROUP BY 1"
61
-
62
- bon query --sql "SELECT
63
- city,
64
- MEASURE(total_revenue),
65
- MEASURE(avg_order_value)
66
- FROM orders
67
- WHERE status = 'completed'
68
- GROUP BY 1
69
- ORDER BY 2 DESC
70
- LIMIT 10"
71
- ```
72
-
73
- **SQL Syntax Rules:**
74
-
75
- 1. **Cubes/views are tables** — `FROM orders` references the `orders` cube
76
- 2. **Dimensions are columns** — Include in `SELECT` and `GROUP BY`
77
- 3. **Measures use `MEASURE()`** — Or matching aggregates (`SUM`, `COUNT`, etc.)
78
- 4. **Segments are boolean** — Filter with `WHERE is_completed IS TRUE`
79
-
80
- **Examples:**
81
-
82
- ```sql
83
- -- Count orders by status
84
- SELECT status, MEASURE(count) FROM orders GROUP BY 1
85
-
86
- -- Revenue by city with filter
87
- SELECT city, SUM(amount) FROM orders WHERE status = 'shipped' GROUP BY 1
88
-
89
- -- Using time dimension with granularity
90
- SELECT DATE_TRUNC('month', created_at), MEASURE(total_revenue)
91
- FROM orders
92
- GROUP BY 1
93
- ORDER BY 1
94
- ```
95
-
96
- ## CLI Usage
97
-
98
- ```bash
99
- # JSON format (default)
100
- bon query '{"measures": ["orders.count"]}'
101
-
102
- # SQL format
103
- bon query --sql "SELECT MEASURE(count) FROM orders"
104
-
105
- # Limit rows
106
- bon query '{"measures": ["orders.count"], "dimensions": ["orders.city"]}' --limit 10
107
-
108
- # JSON output (instead of table)
109
- bon query '{"measures": ["orders.count"]}' --format json
110
- ```
111
-
112
- ## Output Formats
113
-
114
- ### Table Format (Default)
115
-
116
- ```
117
- ┌─────────┬───────────────┐
118
- │ status │ orders.count │
119
- ├─────────┼───────────────┤
120
- │ pending │ 42 │
121
- │ shipped │ 156 │
122
- │ done │ 892 │
123
- └─────────┴───────────────┘
124
- ```
125
-
126
- ### JSON Format
127
-
128
- ```bash
129
- bon query '{"measures": ["orders.count"], "dimensions": ["orders.status"]}' --format json
130
- ```
131
-
132
- ```json
133
- [
134
- { "orders.status": "pending", "orders.count": 42 },
135
- { "orders.status": "shipped", "orders.count": 156 },
136
- { "orders.status": "done", "orders.count": 892 }
137
- ]
138
- ```
139
-
140
- ## Common Patterns
141
-
142
- ### Time Series Analysis
143
-
144
- ```bash
145
- bon query '{
146
- "measures": ["orders.total_revenue"],
147
- "timeDimensions": [{
148
- "dimension": "orders.created_at",
149
- "granularity": "month",
150
- "dateRange": ["2024-01-01", "2024-12-31"]
151
- }]
152
- }'
153
- ```
154
-
155
- ### Filtering by Dimension
156
-
157
- ```bash
158
- bon query '{
159
- "measures": ["orders.count"],
160
- "dimensions": ["orders.city"],
161
- "filters": [{
162
- "member": "orders.status",
163
- "operator": "equals",
164
- "values": ["completed"]
165
- }]
166
- }'
167
- ```
168
-
169
- ### Multiple Measures
170
-
171
- ```bash
172
- bon query '{
173
- "measures": ["orders.count", "orders.total_revenue", "orders.avg_order_value"],
174
- "dimensions": ["orders.category"]
175
- }'
176
- ```
177
-
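- ### Applying Segments
-
- Segments listed in the query properties are referenced by name (the `is_completed` segment is borrowed from the SQL section and illustrative):
-
- ```bash
- bon query '{
-   "measures": ["orders.count"],
-   "segments": ["orders.is_completed"]
- }'
- ```
-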
178
- ## Error Handling
179
-
180
- ### Common Errors
181
-
182
- **"Projection references non-aggregate values"**
183
- - All dimensions must be in `GROUP BY`
184
- - All measures must use `MEASURE()` or matching aggregate
185
-
186
- **"Cube not found"**
187
- - Check cube name matches deployed cube
188
- - Run `bon deploy` if cubes changed
189
-
190
- **"Not logged in"**
191
- - Run `bon login` first
192
-
193
- ## See Also
194
-
195
- - [workflow.deploy](workflow.deploy) - Deploy before querying
196
- - [cubes.measures](cubes.measures) - Define measures
197
- - [cubes.dimensions](cubes.dimensions) - Define dimensions
198
- - [views](views) - Create focused query interfaces
@@ -1,125 +0,0 @@
1
- # Validate
2
-
3
- > Run validation checks on your cubes and views before deploying to catch YAML syntax errors, missing references, circular joins, and other issues. Use `bon validate` from the CLI.
4
-
5
- ## Overview
6
-
7
- The `bon validate` command checks your YAML cubes and views for syntax errors and schema violations. Run this before deploying to catch issues early.
8
-
9
- ## Usage
10
-
11
- ```bash
12
- bon validate
13
- ```
14
-
15
- ## What Gets Validated
16
-
17
- ### YAML Syntax
18
-
19
- - Valid YAML format
20
- - Proper indentation
21
- - Correct quoting
22
-
23
- ### Schema Compliance
24
-
25
- - Required fields present (name, type, sql)
26
- - Valid field values (known measure types, relationship types)
27
- - Consistent naming conventions
28
-
29
- ### Reference Integrity
30
-
31
- - Referenced cubes exist
32
- - Referenced members exist
33
- - Join relationships are valid
34
-
35
- ## Example Output
36
-
37
- ### Success
38
-
39
- ```
40
- ✓ Validating YAML syntax...
41
- ✓ Checking bonnard/cubes/orders.yaml
42
- ✓ Checking bonnard/cubes/users.yaml
43
- ✓ Checking bonnard/views/orders_overview.yaml
44
-
45
- All cubes and views valid.
46
- ```
47
-
48
- ### Errors
49
-
50
- ```
51
- ✗ Validating YAML syntax...
52
-
53
- bonnard/cubes/orders.yaml:15:5
54
- error: Unknown measure type "counts"
55
-
56
- Did you mean "count"?
57
-
58
- 14: measures:
59
- 15: - name: order_count
60
- 16: type: counts <-- here
61
- 17: sql: id
62
-
63
- 1 error found.
64
- ```
65
-
66
- ## Common Errors
67
-
68
- ### Missing Required Field
69
-
70
- ```yaml
71
- # Error: "sql" is required
72
- measures:
73
- - name: count
74
- type: count
75
- # Missing: sql: id
76
- ```
77
-
78
- ### Invalid Type
79
-
80
- ```yaml
81
- # Error: Unknown dimension type "text"
82
- dimensions:
83
- - name: status
84
- type: text # Should be: string
85
- sql: status
86
- ```
87
-
88
- ### Reference Not Found
89
-
90
- ```yaml
91
- # Error: Cube "user" not found (did you mean "users"?)
92
- joins:
93
- - name: user
94
- relationship: many_to_one
95
- sql: "{CUBE}.user_id = {user.id}"
96
- ```
97
-
98
- ### YAML Syntax
99
-
100
- ```yaml
101
- # Error: Bad indentation
102
- measures:
103
- - name: count # Should be indented
104
- type: count
105
- ```
106
-
107
- ## Exit Codes
108
-
109
- | Code | Meaning |
110
- |------|---------|
111
- | 0 | All validations passed |
112
- | 1 | Validation errors found |
113
-
114
- ## Best Practices
115
-
116
- 1. **Run before every deploy** — `bon validate && bon deploy`
117
- 2. **Add to CI/CD** — validate on pull requests
118
- 3. **Fix errors first** — don't deploy with validation errors
119
- 4. **Leave connection checks to deploy** — data source connections are tested automatically during `bon deploy`
120
-
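- For practice 2, a minimal CI job might look like this (an illustrative GitHub Actions sketch; the install step assumes the published `@bonnard/cli` npm package and that any required credentials are available in CI):
-
- ```yaml
- name: validate
- on: pull_request
- jobs:
-   validate:
-     runs-on: ubuntu-latest
-     steps:
-       - uses: actions/checkout@v4
-       - uses: actions/setup-node@v4
-       - run: npm install -g @bonnard/cli
-       - run: bon validate
- ```
-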
121
- ## See Also
122
-
123
- - workflow
124
- - workflow.deploy
125
- - syntax