@bonnard/cli 0.2.3 → 0.2.5
- package/dist/bin/bon.mjs +64 -55
- package/dist/bin/{push-mZujN1Ik.mjs → push-Bv9AFGc2.mjs} +1 -1
- package/dist/bin/{validate-BdqZBH2n.mjs → validate-Bc8zGNw7.mjs} +75 -3
- package/dist/docs/topics/features.cli.md +2 -2
- package/dist/docs/topics/features.governance.md +58 -59
- package/dist/docs/topics/features.semantic-layer.md +6 -0
- package/dist/docs/topics/views.md +17 -9
- package/dist/docs/topics/workflow.deploy.md +5 -4
- package/dist/docs/topics/workflow.md +6 -5
- package/dist/templates/claude/skills/bonnard-design-guide/SKILL.md +233 -0
- package/dist/templates/claude/skills/bonnard-get-started/SKILL.md +50 -16
- package/dist/templates/claude/skills/bonnard-metabase-migrate/SKILL.md +29 -10
- package/dist/templates/cursor/rules/bonnard-design-guide.mdc +232 -0
- package/dist/templates/cursor/rules/bonnard-get-started.mdc +50 -16
- package/dist/templates/cursor/rules/bonnard-metabase-migrate.mdc +29 -10
- package/dist/templates/shared/bonnard.md +30 -13
- package/package.json +1 -1
@@ -0,0 +1,233 @@
+---
+name: bonnard-design-guide
+description: Design principles for building semantic layers that work well for AI agents and business users. Use when building views, writing descriptions, or improving agent accuracy.
+allowed-tools: Bash(bon *)
+---
+
+# Semantic Layer Design Guide
+
+This guide covers design decisions that determine whether your semantic layer
+works for end users and AI agents. It complements the setup guides
+(`/bonnard-get-started`, `/bonnard-metabase-migrate`) which cover mechanics.
+
+Read this before building views, or revisit it when agents return wrong
+answers or users can't find the right metrics.
+
+## Principle 1: Start from Questions, Not Tables
+
+The natural instinct is: look at tables, build cubes, expose everything.
+This produces a semantic layer that mirrors your warehouse schema —
+technically correct but useless to anyone who doesn't already know the schema.
+
+**Instead, start from what people ask:**
+
+1. Collect the 10-20 most common questions your team asks about data
+2. For each question, identify which tables/columns are needed to answer it
+3. Group questions by audience (who asks them)
+4. Build views that answer those question groups
+
+If you have a BI tool (Metabase, Looker, Tableau), your top dashboards
+by view count are the best source of real questions. If not, ask each team:
+"What 3 numbers do you check every week?"
+
+**Why this matters:** A semantic layer built from questions has 5-10 focused
+views. One built from tables has 30+ views that agents struggle to choose
+between. Fewer, well-scoped views with clear descriptions outperform
+comprehensive but undifferentiated coverage.
+
+## Principle 2: Views Are for Audiences, Not Tables
+
+A common mistake is creating one view per cube (table). This produces views
+like `orders_view`, `users_view`, `invoices_view` — which is just the
+warehouse schema with extra steps.
+
+**Views should represent how a team thinks about data:**
+
+```
+BAD (model-centric):                GOOD (audience-centric):
+views/                              views/
+  orders_view.yaml (1 cube)           management.yaml (revenue + users)
+  users_view.yaml (1 cube)            sales.yaml (opportunities + invoices)
+  invoices_view.yaml (1 cube)         product.yaml (users + funnel + contracts)
+  opportunities_view.yaml (1 cube)    partners.yaml (partners + billing)
+```
+
+The same cube often appears in multiple views. An `opportunities` cube might
+contribute to `sales_opportunities` (filtered to active offers), a management
+KPI view, and a partner performance view. Each view exposes different
+measures and dimensions because different audiences need different slices.
+
+**Name views by what they answer**, not what table they wrap:
+`sales_pipeline` not `opportunities_overview`.
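The "same cube, multiple views" pattern can be sketched in the view syntax this package uses elsewhere in its docs; the `opportunities` cube and all measure names here are hypothetical:

```yaml
views:
  # Sales team slice: only active-offer metrics
  - name: sales_pipeline
    description: Sales team view. Active offers and contract values.
    cubes:
      - join_path: opportunities
        includes:
          - active_offer_count
          - total_contract_value

  # Management slice of the SAME cube: high-level totals only
  - name: management_kpis
    description: Management view. Overall deal counts and contract value.
    cubes:
      - join_path: opportunities
        includes:
          - count
          - total_contract_value
```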
+
+## Principle 3: Add Filtered Measures
+
+This is the single most impactful thing you can do. Look at any real
+dashboard: it almost never shows `COUNT(*)` from a raw table. It shows
+"active offers" (type=Angebot AND status!=Cancelled), "unpaid invoices"
+(status NOT IN terminal states).
+
+Without filtered measures, agents return unfiltered totals that don't match
+what users see in their dashboards. Users lose trust immediately.
+
+**For every important dashboard card, check the WHERE clause:**
+
+```yaml
+# Dashboard shows "Active Offers: 7,500"
+# But raw COUNT(*) on opportunities returns 29,000
+# The card SQL has: WHERE type = 'Angebot' AND status != 'Abgesagt'
+
+measures:
+  - name: count
+    type: count
+    description: Total opportunities (all types and statuses)
+
+  - name: active_offer_count
+    type: count
+    description: Non-cancelled offers only (type=Angebot, status!=Abgesagt)
+    filters:
+      - sql: "{CUBE}.type = 'Angebot'"
+      - sql: "{CUBE}.status != 'Abgesagt'"
+```
+
+**Common patterns that need filtered measures:**
+- Status filters: active/open/pending items vs all items
+- Type filters: specific transaction types vs all transactions
+- Boolean flags: completed, paid, verified subsets
+- Exclusions: excluding test data, internal users, cancelled records
+
+A good rule of thumb: if a BI dashboard card has a WHERE clause beyond just
+a date range, that filter should probably be a filtered measure.
+
+## Principle 4: Descriptions Are the Discovery API
+
+For AI agents, descriptions are not documentation — they are the **primary
+mechanism for choosing which view and measure to use**. When an agent calls
+`explore_schema`, it sees view names and descriptions. That's all it has
+to decide where to query.
+
+### View descriptions must answer three questions:
+
+1. **What's in here?** — Lead with the scope and content
+2. **When should I use this?** — The default use case
+3. **When should I use something else?** — Explicit disambiguation
+
+```yaml
+# BAD: Generic, doesn't help agent choose
+description: User metrics and dimensions
+
+# GOOD: Scoped, navigational, includes data values
+description: >-
+  Sales pipeline — active and historical opportunities with contract values
+  and assignee details. Default view for pipeline questions, deal counts,
+  and contract value analysis. Use the type dimension (values: Angebot,
+  Auftrag) to filter by opportunity type. For invoice-level detail, use
+  sales_invoices instead.
+```
+
+### Description writing rules:
+
+**Lead with scope, not mechanics.** "All users across both products" not
+"Wraps the all_users cube." Agents match question keywords against
+description keywords.
+
+**Include actual dimension values.** If a dimension has known categorical
+values, list them: `(values: erben, vererben)`, `(values: B2B, Organic)`.
+This helps agents map user language to filter values.
+
+**Use the same vocabulary as your users.** If your team says "testaments"
+not "will_and_testament", the description should say "testaments."
+
+**Cross-reference related views.** When two views could plausibly answer the
+same question, both descriptions should point to each other: "For company-wide
+totals, use company_users instead." This is the most effective way to
+prevent agents from picking the wrong view.
+
+### Measure descriptions should say what's included/excluded:
+
+```yaml
+# BAD
+description: Count of orders
+
+# GOOD
+description: >-
+  Orders with status 'completed' or 'shipped' only.
+  Excludes cancelled and pending. For all orders, use total_order_count.
+```
+
+## Principle 5: Build Cross-Entity Views When Users Think Across Tables
+
+Sometimes the most common question doesn't map to any single table.
+"How many total signups do we have?" might require combining users from
+two separate product tables.
+
+**Options:**
+
+1. **Database view/UNION table** — If your warehouse has a combined
+   view, build a cube on it. Cleanest approach.
+2. **Multiple cubes in one view via joins** — If cubes can be joined
+   through foreign keys, include both in a view using `join_path`.
+3. **Separate views with clear descriptions** — If data can't be combined,
+   create separate views and describe how they relate.
+
+Don't force everything into one view. It's better to have an agent make
+two clear queries than one confused query against an over-joined view.
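Option 2 (multiple cubes in one view via joins) might look like the following sketch, assuming a `users` cube joined to a `subscriptions` cube through a foreign key; all names are hypothetical and the syntax mirrors the view examples in this package:

```yaml
views:
  - name: product_signups
    description: >-
      All signups with subscription status. Default view for signup and
      activation questions. For billing detail, use billing instead.
    cubes:
      # Base cube
      - join_path: users
        includes:
          - count
          - signup_date
      # Joined cube, reached through the users -> subscriptions join
      - join_path: users.subscriptions
        includes:
          - active_count
```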
+
+## Principle 6: Test with Natural Language, Not Just Numbers
+
+Verifying that `bon query '{"measures": ["orders.count"]}'` returns the
+right number is necessary but not sufficient. The actual failure mode is:
+
+> User asks "how many active offers do we have?"
+> Agent queries `orders.count` instead of `sales_pipeline.active_offer_count`
+> Returns 29,000 instead of 7,500
+
+**To test properly, give real questions to an AI agent via MCP and check:**
+
+1. Did it find the right **view**?
+2. Did it pick the right **measure**?
+3. Did it apply the right **filters/date range**?
+4. Is the final **number** correct?
+
+Steps 1-3 are where most failures happen, caused by description and view
+structure problems — not wrong data.
+
+**Build a small eval set:**
+- Write 5-10 questions that your users actually ask
+- For each question, record the expected view, measure, and answer
+- Run each question through an agent 3-5 times (agents are non-deterministic)
+- If pass rate is below 80%, the issue is almost always the view description
+  or view structure, not the data
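The eval loop is easy to automate. A minimal sketch follows, where `ask_agent` is a placeholder you would replace with a real MCP or API call; everything here is hypothetical, not a Bonnard API:

```python
from collections import Counter

# (question, expected (view, measure)) pairs -- record these once per question
EVAL_SET = [
    ("how many active offers do we have?", ("sales_pipeline", "active_offer_count")),
    ("what is our total revenue?", ("sales_performance", "total_revenue")),
]

def ask_agent(question):
    """Placeholder: return the (view, measure) the agent actually queried.

    Swap in a real call to your agent (MCP client, API, etc.)."""
    return ("sales_pipeline", "active_offer_count")  # deterministic stub

def pass_rate(runs=5):
    # Repeat each question several times: agents are non-deterministic
    results = Counter()
    for question, expected in EVAL_SET:
        for _ in range(runs):
            results[ask_agent(question) == expected] += 1
    return results[True] / (len(EVAL_SET) * runs)
```

With the stub above, the first question passes and the second fails, so `pass_rate()` returns 0.5; anything below the 80% bar usually points at view descriptions, not data.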
+
+## Principle 7: Iterate — The First Deploy Is Never Right
+
+The semantic layer is not a one-time build. The effective workflow is:
+
+```
+Build views -> Deploy -> Test with questions -> Find agent mistakes
+     ^                                               |
+     +---- Improve descriptions/measures <-----------+
+```
+
+Expect 2-4 iterations before agents reliably answer the top 10 questions.
+Each iteration typically involves:
+
+- Rewriting 1-2 view descriptions to improve disambiguation
+- Adding 1-3 filtered measures that match dashboard WHERE clauses
+- Occasionally restructuring a view (splitting or merging)
+
+**Don't try to get it perfect before deploying.** Deploy early with your
+best guess, test with real questions, and fix what breaks.
+
+## Quick Checklist
+
+Before deploying, review each view against this checklist:
+
+- [ ] **View name** describes the audience/use case, not the underlying table
+- [ ] **View description** leads with scope ("All users..." / "Sales team only...")
+- [ ] **View description** cross-references related views ("For X, use Y instead")
+- [ ] **View description** includes key dimension values where helpful
+- [ ] **Filtered measures** exist for every dashboard card with a WHERE clause
+- [ ] **Measure descriptions** say what's included AND excluded
+- [ ] **Dimension descriptions** include example values for categorical fields
+- [ ] **No two views** could plausibly answer the same question without disambiguation
@@ -30,7 +30,7 @@ bon datasource add --name my_warehouse --type postgres \
 bon datasource add
 ```
 
-Supported types: `postgres`
+Supported types: `postgres`, `redshift`, `snowflake`, `bigquery`, `databricks`.
 
 The demo option adds a read-only Contoso retail dataset with tables like
 `fact_sales`, `dim_product`, `dim_store`, and `dim_customer`.
@@ -41,9 +41,13 @@ The connection will be tested automatically during `bon deploy`.
 
 Before creating cubes, understand what tables and columns are available in your warehouse.
 
+**Important:** `bon query` is for querying the **deployed semantic layer** — it does NOT
+access your database directly. Use your warehouse's native tools to explore tables.
+
 **Options for exploring your data:**
-- Use your database
+- Use your database CLI (e.g., `psql` for Postgres/Redshift, `snowsql` for Snowflake, `bq` for BigQuery) to list tables and columns
 - Check your dbt docs or existing documentation for table schemas
+- Ask the user for table names and column details if you don't have direct database access
 - For the demo dataset, the tables are: `contoso.fact_sales`, `contoso.dim_product`, `contoso.dim_store`, `contoso.dim_customer`
 
 Note the table names, column names, and data types — you'll use these in Phase 3.
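For Postgres-family warehouses, a standard `information_schema` query (run with your database client, not `bon`) lists everything needed here; the schema name is illustrative, taken from the demo dataset:

```sql
-- Columns and types for every table in a schema (Postgres/Redshift)
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'contoso'
ORDER BY table_name, ordinal_position;
```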
@@ -79,6 +83,15 @@ cubes:
         sql: total_cost
         description: Sum of product costs
 
+      # Filtered measures — match what users actually see on dashboards
+      - name: return_count
+        type: count
+        description: >-
+          Sales transactions with returns only (return_quantity > 0).
+          For all transactions, use count.
+        filters:
+          - sql: "{CUBE}.return_quantity > 0"
+
     dimensions:
       - name: sales_key
         type: number
@@ -97,8 +110,11 @@ cubes:
 ```
 
 Key rules:
-- Every cube needs a `primary_key` dimension
-- Every measure and dimension should have a `description`
+- Every cube needs a `primary_key` dimension — **it must be unique**. If no column is naturally unique, use `sql` with `ROW_NUMBER()` instead of `sql_table` and add a synthetic key. Non-unique PKs cause dimension queries to silently return empty results.
+- Every measure and dimension should have a `description` — descriptions are how AI agents discover and choose metrics (see `/bonnard-design-guide`)
+- **Add filtered measures** for any metric users track with filters (e.g., "online revenue" vs "total revenue"). If a dashboard card has a WHERE clause beyond a date range, that filter should be a filtered measure.
+- Measure descriptions should say what's **included and excluded** — e.g., "Online channel only. For all channels, use total_revenue."
+- Dimension descriptions should include **example values** for categorical fields — e.g., "Sales channel (values: Online, Store, Reseller)"
 - Set `data_source` to match the datasource name from Phase 1
 - Use `sql_table` with the full `schema.table` path
 - Use `sql_table` for simple table references, `sql` for complex queries
@@ -108,26 +124,38 @@ for all 12 measure types, `bon docs cubes.dimensions.types` for dimension types.
 
 ## Phase 4: Create a View
 
-Views expose a curated subset of measures and dimensions for
-
+Views expose a curated subset of measures and dimensions for specific
+audiences. **Name views by what they answer** (e.g., `sales_performance`),
+not by what table they wrap (e.g., `sales_view`). A view can combine
+measures and dimensions from multiple cubes via `join_path`.
 
-
+The view **description** is critical — it's how AI agents decide which view
+to query. It should answer: what's in here, when to use it, and when to
+use something else instead. Include key dimension values where helpful.
+
+Example using demo data — `bonnard/views/sales_performance.yaml`:
 
 ```yaml
 views:
-  - name:
-    description:
+  - name: sales_performance
+    description: >-
+      Retail sales metrics — revenue, cost, and transaction counts by product,
+      store, and date. Default view for sales questions. Includes return_count
+      for return analysis. For customer demographics, use customer_insights
+      instead.
     cubes:
       - join_path: sales
         includes:
           - count
           - total_revenue
+          - return_count
           - total_cost
           - date
           - sales_quantity
 ```
 
-Use `bon docs views` for the full reference.
+Use `bon docs views` for the full reference. See `/bonnard-design-guide`
+for principles on naming, descriptions, and view structure.
 
 ## Phase 5: Validate
 
@@ -163,21 +191,26 @@ After deploying, the output shows what changed (added/modified/removed) and
 flags any breaking changes. Use `bon deployments` to see history and
 `bon diff <id>` to review changes from any deployment.
 
-## Phase 7: Test with
+## Phase 7: Test with Queries
 
-Verify the deployment works using the
+Verify the deployment works using the **view name** from Phase 4:
 
 ```bash
 # Simple count
-bon query '{"measures": ["
+bon query '{"measures": ["sales_performance.count"]}'
 
 # With a dimension
-bon query '{"measures": ["
+bon query '{"measures": ["sales_performance.total_revenue"], "dimensions": ["sales_performance.date"]}'
 
 # SQL format
-bon query --sql "SELECT MEASURE(total_revenue) FROM
+bon query --sql "SELECT MEASURE(total_revenue) FROM sales_performance"
 ```
 
+**Test with natural language too.** If you've set up MCP (Phase 8), ask
+an AI agent questions like "what's our total revenue?" and check whether
+it picks the right view and measure. If it picks the wrong view, the issue
+is usually the view description — see `/bonnard-design-guide` Principle 4.
+
 ## Phase 8: Connect AI Agents (Optional)
 
 Set up MCP so AI agents can query the semantic layer:
@@ -193,8 +226,9 @@ and other MCP clients. The MCP URL is `https://mcp.bonnard.dev/mcp`.
 
 After the first cube is working:
 
+- **Iterate on descriptions** — test with real questions via MCP, fix agent mistakes by improving view descriptions and adding filtered measures (`/bonnard-design-guide`)
 - Add more cubes for other tables
 - Add joins between cubes (`bon docs cubes.joins`)
+- Build audience-centric views that combine multiple cubes — e.g., a `management` view that pulls from `sales`, `users`, and `products` cubes
 - Add calculated measures (`bon docs cubes.measures.calculated`)
 - Add segments for common filters (`bon docs cubes.segments`)
-- Build dashboards (`bon docs dashboards`)
@@ -65,7 +65,7 @@ bon datasource add --from-dbt
 bon datasource add
 ```
 
-Supported types: `postgres`
+Supported types: `postgres`, `redshift`, `snowflake`, `bigquery`, `databricks`.
 
 The connection will be tested automatically during `bon deploy`.
 
@@ -122,11 +122,12 @@ highest-referenced table and work down. Create one file per cube in
 For each cube:
 1. Set `sql_table` to the full `schema.table` path
 2. Set `data_source` to the datasource name from Phase 3
-3. Add a `primary_key` dimension
+3. Add a `primary_key` dimension — **must be unique**. If no column is naturally unique, use `sql` with `ROW_NUMBER()` instead of `sql_table` and add a synthetic key
 4. Add time dimensions for date/datetime columns
 5. Add measures based on card SQL patterns (Phase 4)
-6. Add
-7. Add
+6. **Add filtered measures** for every card with a WHERE clause beyond date range — e.g., "active offers" (status != cancelled). This is the #1 way to match dashboard numbers.
+7. Add dimensions for columns used as filters (template vars from Phase 2)
+8. Add `description` to every measure and dimension — descriptions should say what's **included and excluded**. Dimension descriptions should include **example values** for categorical fields.
 
 Example — `bonnard/cubes/orders.yaml`:
 
@@ -181,15 +182,26 @@ Use `bon docs cubes.joins` for the full reference.
 
 Map Metabase collections to views. Each top-level collection (business domain)
 from the analysis report becomes a view that composes the relevant cubes.
+**Name views by what they answer** (e.g., `sales_pipeline`), not by what
+table they wrap (e.g., `orders_view`).
 
 Create one file per view in `bonnard/views/`.
 
+The view **description** is critical — it's how AI agents decide which view
+to query. It should answer: what's in here, when to use it, and when to use
+something else instead. Cross-reference related views to prevent agents from
+picking the wrong one.
+
 Example — `bonnard/views/sales_analytics.yaml`:
 
 ```yaml
 views:
   - name: sales_analytics
-    description:
+    description: >-
+      Sales team view — order revenue, counts, and status breakdowns with
+      customer region. Default view for revenue and order questions. Use the
+      status dimension (values: pending, completed, cancelled) to filter.
+      For customer-level analysis, use customer_insights instead.
     cubes:
       - join_path: orders
         includes:
@@ -205,7 +217,8 @@ views:
           - region
 ```
 
-Use `bon docs views` for the full reference.
+Use `bon docs views` for the full reference. See `/bonnard-design-guide`
+for principles on view naming, descriptions, and structure.
 
 ## Phase 7: Validate and Deploy
 
@@ -236,23 +249,29 @@ equivalent queries:
 
 ```bash
 # Run a semantic layer query
-bon query '{"measures": ["
+bon query '{"measures": ["sales_analytics.total_revenue"], "dimensions": ["sales_analytics.status"]}'
 
 # SQL format
-bon query --sql "SELECT status, MEASURE(total_revenue) FROM
+bon query --sql "SELECT status, MEASURE(total_revenue) FROM sales_analytics GROUP BY 1"
 ```
 
 Compare the numbers with the corresponding Metabase card. If they don't match:
 - Check the SQL in the card (`bon metabase explore card <id>`) for filters or transformations
 - Ensure the measure type matches the aggregation (SUM vs COUNT vs AVG)
-- Check for WHERE clauses that should be
+- Check for WHERE clauses that should be filtered measures (not segments)
+
+**Test with natural language too.** Set up MCP (`bon mcp`) and ask an agent
+the same questions your Metabase dashboards answer. Check whether it picks
+the right **view** and **measure** — most failures are description problems,
+not data problems. See `/bonnard-design-guide` Principle 6.
 
 ## Next Steps
 
 After the core migration is working:
 
+- **Iterate on descriptions** — test with real questions via MCP, fix agent mistakes by improving view descriptions and adding filtered measures (`/bonnard-design-guide`)
 - Add remaining tables as cubes (work down the reference count list)
+- Build audience-centric views that combine multiple cubes — match how Metabase collections organize data by business domain
 - Add calculated measures for complex card SQL (`bon docs cubes.measures.calculated`)
-- Add segments for common WHERE clauses (`bon docs cubes.segments`)
 - Set up MCP for AI agent access (`bon mcp`)
 - Review and iterate with `bon deployments` and `bon diff <id>`