@bonnard/cli 0.2.3 → 0.2.5

@@ -0,0 +1,232 @@
+ ---
+ description: "Design principles for building semantic layers that work well for AI agents and business users. Use when building views, writing descriptions, or improving agent accuracy."
+ alwaysApply: false
+ ---
+
+ # Semantic Layer Design Guide
+
+ This guide covers design decisions that determine whether your semantic layer
+ works for end users and AI agents. It complements the setup guides
+ (`bonnard-get-started`, `bonnard-metabase-migrate`), which cover mechanics.
+
+ Read this before building views, or revisit it when agents return wrong
+ answers or users can't find the right metrics.
+
+ ## Principle 1: Start from Questions, Not Tables
+
+ The natural instinct is: look at tables, build cubes, expose everything.
+ This produces a semantic layer that mirrors your warehouse schema —
+ technically correct but useless to anyone who doesn't already know the schema.
+
+ **Instead, start from what people ask:**
+
+ 1. Collect the 10-20 most common questions your team asks about data
+ 2. For each question, identify which tables/columns are needed to answer it
+ 3. Group questions by audience (who asks them)
+ 4. Build views that answer those question groups
+
+ If you have a BI tool (Metabase, Looker, Tableau), your top dashboards
+ by view count are the best source of real questions. If not, ask each team:
+ "What 3 numbers do you check every week?"
+
+ **Why this matters:** A semantic layer built from questions has 5-10 focused
+ views. One built from tables has 30+ views that agents struggle to choose
+ between. Fewer, well-scoped views with clear descriptions outperform
+ comprehensive but undifferentiated coverage.
+
+ ## Principle 2: Views Are for Audiences, Not Tables
+
+ A common mistake is creating one view per cube (table). This produces views
+ like `orders_view`, `users_view`, `invoices_view` — which is just the
+ warehouse schema with extra steps.
+
+ **Views should represent how a team thinks about data:**
+
+ ```
+ BAD (model-centric):                GOOD (audience-centric):
+ views/                              views/
+   orders_view.yaml (1 cube)           management.yaml (revenue + users)
+   users_view.yaml (1 cube)            sales.yaml (opportunities + invoices)
+   invoices_view.yaml (1 cube)         product.yaml (users + funnel + contracts)
+   opportunities_view.yaml (1 cube)    partners.yaml (partners + billing)
+ ```
+
+ The same cube often appears in multiple views. An `opportunities` cube might
+ contribute to `sales_opportunities` (filtered to active offers), a management
+ KPI view, and a partner performance view. Each view exposes different
+ measures and dimensions because different audiences need different slices.
+
+ **Name views by what they answer**, not what table they wrap:
+ `sales_pipeline`, not `opportunities_overview`.
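+
+ A minimal sketch of the reuse pattern (cube, measure, and view names here
+ are illustrative, not from a real project):
+
+ ```yaml
+ views:
+   - name: sales_pipeline
+     description: Active offers and deal values for the sales team.
+     cubes:
+       - join_path: opportunities
+         includes:
+           - active_offer_count
+           - total_contract_value
+
+   - name: management_kpis
+     description: Company-wide deal counts and contract value for leadership.
+     cubes:
+       - join_path: opportunities
+         includes:
+           - count
+           - total_contract_value
+ ```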
+
+ ## Principle 3: Add Filtered Measures
+
+ This is the single most impactful thing you can do. Look at any real
+ dashboard: it almost never shows `COUNT(*)` from a raw table. It shows
+ "active offers" (type=Angebot AND status!=Cancelled), "unpaid invoices"
+ (status NOT IN terminal states).
+
+ Without filtered measures, agents return unfiltered totals that don't match
+ what users see in their dashboards. Users lose trust immediately.
+
+ **For every important dashboard card, check the WHERE clause:**
+
+ ```yaml
+ # Dashboard shows "Active Offers: 7,500"
+ # But raw COUNT(*) on opportunities returns 29,000
+ # The card SQL has: WHERE type = 'Angebot' AND status != 'Abgesagt'
+
+ measures:
+   - name: count
+     type: count
+     description: Total opportunities (all types and statuses)
+
+   - name: active_offer_count
+     type: count
+     description: Non-cancelled offers only (type=Angebot, status!=Abgesagt)
+     filters:
+       - sql: "{CUBE}.type = 'Angebot'"
+       - sql: "{CUBE}.status != 'Abgesagt'"
+ ```
+
+ **Common patterns that need filtered measures:**
+ - Status filters: active/open/pending items vs all items
+ - Type filters: specific transaction types vs all transactions
+ - Boolean flags: completed, paid, verified subsets
+ - Exclusions: excluding test data, internal users, cancelled records
+
+ A good rule of thumb: if a BI dashboard card has a WHERE clause beyond just
+ a date range, that filter should probably be a filtered measure.
+
+ ## Principle 4: Descriptions Are the Discovery API
+
+ For AI agents, descriptions are not documentation — they are the **primary
+ mechanism for choosing which view and measure to use**. When an agent calls
+ `explore_schema`, it sees view names and descriptions. That's all it has
+ to decide where to query.
+
+ ### View descriptions must answer three questions:
+
+ 1. **What's in here?** — Lead with the scope and content
+ 2. **When should I use this?** — The default use case
+ 3. **When should I use something else?** — Explicit disambiguation
+
+ ```yaml
+ # BAD: Generic, doesn't help agent choose
+ description: User metrics and dimensions
+
+ # GOOD: Scoped, navigational, includes data values
+ description: >-
+   Sales pipeline — active and historical opportunities with contract values
+   and assignee details. Default view for pipeline questions, deal counts,
+   and contract value analysis. Use the type dimension (values: Angebot,
+   Auftrag) to filter by opportunity type. For invoice-level detail, use
+   sales_invoices instead.
+ ```
+
+ ### Description writing rules:
+
+ **Lead with scope, not mechanics.** "All users across both products" not
+ "Wraps the all_users cube." Agents match question keywords against
+ description keywords.
+
+ **Include actual dimension values.** If a dimension has known categorical
+ values, list them: `(values: erben, vererben)`, `(values: B2B, Organic)`.
+ This helps agents map user language to filter values.
+
+ **Use the same vocabulary as your users.** If your team says "testaments"
+ not "will_and_testament", the description should say "testaments."
+
+ **Cross-reference related views.** When two views could plausibly answer the
+ same question, both descriptions should point to each other: "For company-wide
+ totals, use company_users instead." This is the most effective way to
+ prevent agents from picking the wrong view.
+
+ ### Measure descriptions should say what's included/excluded:
+
+ ```yaml
+ # BAD
+ description: Count of orders
+
+ # GOOD
+ description: >-
+   Orders with status 'completed' or 'shipped' only.
+   Excludes cancelled and pending. For all orders, use total_order_count.
+ ```
+
+ ## Principle 5: Build Cross-Entity Views When Users Think Across Tables
+
+ Sometimes the most common question doesn't map to any single table.
+ "How many total signups do we have?" might require combining users from
+ two separate product tables.
+
+ **Options:**
+
+ 1. **Database view/UNION table** — If your warehouse has a combined
+    view, build a cube on it. Cleanest approach.
+ 2. **Multiple cubes in one view via joins** — If cubes can be joined
+    through foreign keys, include both in a view using `join_path`.
+ 3. **Separate views with clear descriptions** — If data can't be combined,
+    create separate views and describe how they relate.
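+
+ A minimal sketch of option 2, assuming `users` and `subscriptions` cubes
+ that can be joined through a foreign key (all names illustrative):
+
+ ```yaml
+ views:
+   - name: product_engagement
+     description: Signups and active subscriptions in one place.
+     cubes:
+       - join_path: users
+         includes:
+           - count
+           - signup_date
+       - join_path: users.subscriptions
+         includes:
+           - active_subscription_count
+ ```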
+
+ Don't force everything into one view. It's better to have an agent make
+ two clear queries than one confused query against an over-joined view.
+
+ ## Principle 6: Test with Natural Language, Not Just Numbers
+
+ Verifying that `bon query '{"measures": ["orders.count"]}'` returns the
+ right number is necessary but not sufficient. The actual failure mode is:
+
+ > User asks "how many active offers do we have?"
+ > Agent queries `orders.count` instead of `sales_pipeline.active_offer_count`
+ > Returns 29,000 instead of 7,500
+
+ **To test properly, give real questions to an AI agent via MCP and check:**
+
+ 1. Did it find the right **view**?
+ 2. Did it pick the right **measure**?
+ 3. Did it apply the right **filters/date range**?
+ 4. Is the final **number** correct?
+
+ Steps 1-3 are where most failures happen, caused by description and view
+ structure problems — not wrong data.
+
+ **Build a small eval set:**
+ - Write 5-10 questions that your users actually ask
+ - For each question, record the expected view, measure, and answer
+ - Run each question through an agent 3-5 times (agents are non-deterministic)
+ - If pass rate is below 80%, the issue is almost always the view description
+   or view structure, not the data
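+
+ The eval set can be as simple as a hand-maintained file. A hypothetical
+ sketch (this format is not a bon feature, just a convention):
+
+ ```yaml
+ # evals.yaml: question/expectation pairs, checked by hand or by script
+ - question: "How many active offers do we have?"
+   expected_view: sales_pipeline
+   expected_measure: active_offer_count
+   expected_answer: 7500
+ - question: "What is our total contract value this quarter?"
+   expected_view: sales_pipeline
+   expected_measure: total_contract_value
+   expected_filter: current quarter
+ ```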
+
+ ## Principle 7: Iterate — The First Deploy Is Never Right
+
+ The semantic layer is not a one-time build. The effective workflow is:
+
+ ```
+ Build views -> Deploy -> Test with questions -> Find agent mistakes
+      ^                                                  |
+      +---- Improve descriptions/measures <--------------+
+ ```
+
+ Expect 2-4 iterations before agents reliably answer the top 10 questions.
+ Each iteration typically involves:
+
+ - Rewriting 1-2 view descriptions to improve disambiguation
+ - Adding 1-3 filtered measures that match dashboard WHERE clauses
+ - Occasionally restructuring a view (splitting or merging)
+
+ **Don't try to get it perfect before deploying.** Deploy early with your
+ best guess, test with real questions, and fix what breaks.
+
+ ## Quick Checklist
+
+ Before deploying, review each view against this checklist:
+
+ - [ ] **View name** describes the audience/use case, not the underlying table
+ - [ ] **View description** leads with scope ("All users..." / "Sales team only...")
+ - [ ] **View description** cross-references related views ("For X, use Y instead")
+ - [ ] **View description** includes key dimension values where helpful
+ - [ ] **Filtered measures** exist for every dashboard card with a WHERE clause
+ - [ ] **Measure descriptions** say what's included AND excluded
+ - [ ] **Dimension descriptions** include example values for categorical fields
+ - [ ] **No two views** could plausibly answer the same question without disambiguation
@@ -29,7 +29,7 @@ bon datasource add --name my_warehouse --type postgres \
  bon datasource add
  ```
 
- Supported types: `postgres` (also works for Redshift), `snowflake`, `bigquery`, `databricks`.
+ Supported types: `postgres`, `redshift`, `snowflake`, `bigquery`, `databricks`.
 
  The demo option adds a read-only Contoso retail dataset with tables like
  `fact_sales`, `dim_product`, `dim_store`, and `dim_customer`.
@@ -40,9 +40,13 @@ The connection will be tested automatically during `bon deploy`.
 
  Before creating cubes, understand what tables and columns are available in your warehouse.
 
+ **Important:** `bon query` is for querying the **deployed semantic layer** — it does NOT
+ access your database directly. Use your warehouse's native tools to explore tables.
+
  **Options for exploring your data:**
- - Use your database IDE or CLI (e.g., `psql`, Snowflake web UI, BigQuery console) to browse tables
+ - Use your database CLI (e.g., `psql` for Postgres/Redshift, `snowsql` for Snowflake, `bq` for BigQuery) to list tables and columns
  - Check your dbt docs or existing documentation for table schemas
+ - Ask the user for table names and column details if you don't have direct database access
  - For the demo dataset, the tables are: `contoso.fact_sales`, `contoso.dim_product`, `contoso.dim_store`, `contoso.dim_customer`
 
  Note the table names, column names, and data types — you'll use these in Phase 3.
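+
+ For example, with a Postgres datasource (including the demo), `psql`
+ meta-commands can list tables and columns, assuming your connection string
+ is in `DATABASE_URL`:
+
+ ```bash
+ # List tables in the contoso schema
+ psql "$DATABASE_URL" -c '\dt contoso.*'
+
+ # Show columns and types for one table
+ psql "$DATABASE_URL" -c '\d contoso.fact_sales'
+ ```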
@@ -78,6 +82,15 @@ cubes:
          sql: total_cost
          description: Sum of product costs
 
+     # Filtered measures — match what users actually see on dashboards
+     - name: return_count
+       type: count
+       description: >-
+         Sales transactions with returns only (return_quantity > 0).
+         For all transactions, use count.
+       filters:
+         - sql: "{CUBE}.return_quantity > 0"
+
      dimensions:
        - name: sales_key
          type: number
@@ -96,8 +109,11 @@ cubes:
  ```
 
  Key rules:
- - Every cube needs a `primary_key` dimension
- - Every measure and dimension should have a `description`
+ - Every cube needs a `primary_key` dimension — **it must be unique**. If no column is naturally unique, use `sql` with `ROW_NUMBER()` instead of `sql_table` and add a synthetic key. Non-unique PKs cause dimension queries to silently return empty results.
+ - Every measure and dimension should have a `description` — descriptions are how AI agents discover and choose metrics (see `bonnard-design-guide` rule)
+ - **Add filtered measures** for any metric users track with filters (e.g., "online revenue" vs "total revenue"). If a dashboard card has a WHERE clause beyond a date range, that filter should be a filtered measure.
+ - Measure descriptions should say what's **included and excluded** — e.g., "Online channel only. For all channels, use total_revenue."
+ - Dimension descriptions should include **example values** for categorical fields — e.g., "Sales channel (values: Online, Store, Reseller)"
  - Set `data_source` to match the datasource name from Phase 1
  - Use `sql_table` with the full `schema.table` path
  - Use `sql_table` for simple table references, `sql` for complex queries
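+
+ A minimal sketch of the last rule: `sql_table` for a plain table, `sql`
+ when the cube is built from a query (the second cube is illustrative):
+
+ ```yaml
+ cubes:
+   - name: sales
+     sql_table: contoso.fact_sales
+     data_source: contoso_demo
+
+   - name: store_sales
+     # Hypothetical derived cube: joins store attributes onto each sale
+     sql: >
+       SELECT s.*, st.store_type
+       FROM contoso.fact_sales s
+       JOIN contoso.dim_store st ON s.store_key = st.store_key
+     data_source: contoso_demo
+ ```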
@@ -107,26 +123,38 @@ for all 12 measure types, `bon docs cubes.dimensions.types` for dimension types.
 
  ## Phase 4: Create a View
 
- Views expose a curated subset of measures and dimensions for consumers.
- Create a file in `bonnard/views/` that references the cube from Phase 3.
+ Views expose a curated subset of measures and dimensions for specific
+ audiences. **Name views by what they answer** (e.g., `sales_performance`),
+ not by what table they wrap (e.g., `sales_view`). A view can combine
+ measures and dimensions from multiple cubes via `join_path`.
 
- Example using demo data — `bonnard/views/sales_overview.yaml`:
+ The view **description** is critical — it's how AI agents decide which view
+ to query. It should answer: what's in here, when to use it, and when to
+ use something else instead. Include key dimension values where helpful.
+
+ Example using demo data — `bonnard/views/sales_performance.yaml`:
 
  ```yaml
  views:
-   - name: sales_overview
-     description: High-level sales metrics and attributes
+   - name: sales_performance
+     description: >-
+       Retail sales metrics — revenue, cost, and transaction counts by product,
+       store, and date. Default view for sales questions. Includes return_count
+       for return analysis. For customer demographics, use customer_insights
+       instead.
      cubes:
        - join_path: sales
          includes:
            - count
            - total_revenue
+           - return_count
            - total_cost
            - date
            - sales_quantity
  ```
 
- Use `bon docs views` for the full reference.
+ Use `bon docs views` for the full reference. See the `bonnard-design-guide`
+ rule for principles on naming, descriptions, and view structure.
 
  ## Phase 5: Validate
 
@@ -162,21 +190,26 @@ After deploying, the output shows what changed (added/modified/removed) and
  flags any breaking changes. Use `bon deployments` to see history and
  `bon diff <id>` to review changes from any deployment.
 
- ## Phase 7: Test with a Query
+ ## Phase 7: Test with Queries
 
- Verify the deployment works using the cube name from Phase 3:
+ Verify the deployment works using the **view name** from Phase 4:
 
  ```bash
  # Simple count
- bon query '{"measures": ["sales.count"]}'
+ bon query '{"measures": ["sales_performance.count"]}'
 
  # With a dimension
- bon query '{"measures": ["sales.total_revenue"], "dimensions": ["sales.date"]}'
+ bon query '{"measures": ["sales_performance.total_revenue"], "dimensions": ["sales_performance.date"]}'
 
  # SQL format
- bon query --sql "SELECT MEASURE(total_revenue) FROM sales"
+ bon query --sql "SELECT MEASURE(total_revenue) FROM sales_performance"
  ```
 
+ **Test with natural language too.** If you've set up MCP (Phase 8), ask
+ an AI agent questions like "what's our total revenue?" and check whether
+ it picks the right view and measure. If it picks the wrong view, the issue
+ is usually the view description — see the `bonnard-design-guide` rule.
+
  ## Phase 8: Connect AI Agents (Optional)
 
  Set up MCP so AI agents can query the semantic layer:
@@ -192,8 +225,9 @@ and other MCP clients. The MCP URL is `https://mcp.bonnard.dev/mcp`.
 
  After the first cube is working:
 
+ - **Iterate on descriptions** — test with real questions via MCP, fix agent mistakes by improving view descriptions and adding filtered measures (see `bonnard-design-guide` rule)
  - Add more cubes for other tables
  - Add joins between cubes (`bon docs cubes.joins`)
+ - Build audience-centric views that combine multiple cubes — e.g., a `management` view that pulls from `sales`, `users`, and `products` cubes
  - Add calculated measures (`bon docs cubes.measures.calculated`)
  - Add segments for common filters (`bon docs cubes.segments`)
- - Build dashboards (`bon docs dashboards`)
@@ -64,7 +64,7 @@ bon datasource add --from-dbt
  bon datasource add
  ```
 
- Supported types: `postgres` (also works for Redshift), `snowflake`, `bigquery`, `databricks`.
+ Supported types: `postgres`, `redshift`, `snowflake`, `bigquery`, `databricks`.
 
  The connection will be tested automatically during `bon deploy`.
 
@@ -121,11 +121,12 @@ highest-referenced table and work down. Create one file per cube in
  For each cube:
  1. Set `sql_table` to the full `schema.table` path
  2. Set `data_source` to the datasource name from Phase 3
- 3. Add a `primary_key` dimension
+ 3. Add a `primary_key` dimension — **must be unique**. If no column is naturally unique, use `sql` with `ROW_NUMBER()` instead of `sql_table` and add a synthetic key
  4. Add time dimensions for date/datetime columns
  5. Add measures based on card SQL patterns (Phase 4)
- 6. Add dimensions for columns used as filters (template vars from Phase 2)
- 7. Add `description` to every measure and dimension
+ 6. **Add filtered measures** for every card with a WHERE clause beyond date range — e.g., "active offers" (status != cancelled). This is the #1 way to match dashboard numbers.
+ 7. Add dimensions for columns used as filters (template vars from Phase 2)
+ 8. Add `description` to every measure and dimension — descriptions should say what's **included and excluded**. Dimension descriptions should include **example values** for categorical fields.
 
  Example — `bonnard/cubes/orders.yaml`:
 
@@ -180,15 +181,26 @@ Use `bon docs cubes.joins` for the full reference.
 
  Map Metabase collections to views. Each top-level collection (business domain)
  from the analysis report becomes a view that composes the relevant cubes.
+ **Name views by what they answer** (e.g., `sales_pipeline`), not by what
+ table they wrap (e.g., `orders_view`).
 
  Create one file per view in `bonnard/views/`.
 
+ The view **description** is critical — it's how AI agents decide which view
+ to query. It should answer: what's in here, when to use it, and when to use
+ something else instead. Cross-reference related views to prevent agents from
+ picking the wrong one.
+
  Example — `bonnard/views/sales_analytics.yaml`:
 
  ```yaml
  views:
    - name: sales_analytics
-     description: Sales metrics and dimensions for the sales team
+     description: >-
+       Sales team view — order revenue, counts, and status breakdowns with
+       customer region. Default view for revenue and order questions. Use the
+       status dimension (values: pending, completed, cancelled) to filter.
+       For customer-level analysis, use customer_insights instead.
      cubes:
        - join_path: orders
          includes:
@@ -204,7 +216,8 @@ views:
            - region
  ```
 
- Use `bon docs views` for the full reference.
+ Use `bon docs views` for the full reference. See the `bonnard-design-guide`
+ rule for principles on view naming, descriptions, and structure.
 
  ## Phase 7: Validate and Deploy
 
@@ -235,23 +248,29 @@ equivalent queries:
 
  ```bash
  # Run a semantic layer query
- bon query '{"measures": ["orders.total_revenue"], "dimensions": ["orders.status"]}'
+ bon query '{"measures": ["sales_analytics.total_revenue"], "dimensions": ["sales_analytics.status"]}'
 
  # SQL format
- bon query --sql "SELECT status, MEASURE(total_revenue) FROM orders GROUP BY 1"
+ bon query --sql "SELECT status, MEASURE(total_revenue) FROM sales_analytics GROUP BY 1"
  ```
 
  Compare the numbers with the corresponding Metabase card. If they don't match:
  - Check the SQL in the card (`bon metabase explore card <id>`) for filters or transformations
  - Ensure the measure type matches the aggregation (SUM vs COUNT vs AVG)
- - Check for WHERE clauses that should be segments or pre-filters
+ - Check for WHERE clauses that should be filtered measures (not segments)
+
+ **Test with natural language too.** Set up MCP (`bon mcp`) and ask an agent
+ the same questions your Metabase dashboards answer. Check whether it picks
+ the right **view** and **measure** — most failures are description problems,
+ not data problems. See the `bonnard-design-guide` rule for Principle 6.
 
  ## Next Steps
 
  After the core migration is working:
 
+ - **Iterate on descriptions** — test with real questions via MCP, fix agent mistakes by improving view descriptions and adding filtered measures (see `bonnard-design-guide` rule)
  - Add remaining tables as cubes (work down the reference count list)
+ - Build audience-centric views that combine multiple cubes — match how Metabase collections organize data by business domain
  - Add calculated measures for complex card SQL (`bon docs cubes.measures.calculated`)
- - Add segments for common WHERE clauses (`bon docs cubes.segments`)
  - Set up MCP for AI agent access (`bon mcp`)
  - Review and iterate with `bon deployments` and `bon diff <id>`
@@ -48,9 +48,10 @@ bon datasource add --demo
  ```
 
  This adds a read-only **Contoso** retail database (Postgres) with tables:
- - `fact_sales` — transactions with sales_amount, unit_price, sales_quantity, date_key
+ - `fact_sales` — transactions with sales_amount, total_cost, sales_quantity, return_quantity, return_amount, discount_amount, date_key, channel_key, store_key, product_key
  - `dim_product` — product_name, brand_name, manufacturer, unit_cost, unit_price
- - `dim_store` — store_name, store_type, employee_count, selling_area_size
+ - `dim_store` — store_name, store_type, employee_count, selling_area_size, status
+ - `dim_channel` — channel_name (values: Store, Online, Catalog, Reseller)
  - `dim_customer` — first_name, last_name, gender, yearly_income, education, occupation
 
  All tables are in the `contoso` schema. The datasource is named `contoso_demo`.
@@ -65,11 +66,11 @@ All tables are in the `contoso` schema. The datasource is named `contoso_demo`.
  | `bon datasource add --from-dbt` | Import from dbt profiles |
  | `bon validate` | Validate YAML syntax, warn on missing descriptions and `data_source` |
  | `bon deploy -m "message"` | Deploy to Bonnard (requires login, message required) |
- | `bon deploy --ci` | Non-interactive deploy (fails on missing datasources) |
+ | `bon deploy --ci` | Non-interactive deploy |
  | `bon deployments` | List recent deployments (add `--all` for full history) |
  | `bon diff <deployment-id>` | Show changes in a deployment (`--breaking` for breaking only) |
  | `bon annotate <deployment-id>` | Add reasoning/context to deployment changes |
- | `bon query '{...}'` | Execute a semantic layer query (JSON or `--sql` format) |
+ | `bon query '{...}'` | Query the deployed semantic layer (requires `bon deploy` first, not for raw DB access) |
  | `bon mcp` | Show MCP setup instructions for AI agents |
  | `bon docs` | Browse documentation |
  | `bon metabase connect` | Connect to a Metabase instance (API key) |
@@ -96,15 +97,19 @@ Topics follow dot notation (e.g., `cubes.dimensions.time`). Use `--recursive` to
 
  ## Workflow
 
- 1. **Setup datasource** — `bon datasource add --from-dbt` or manual
- 2. **Create cubes** — Define measures/dimensions in `bonnard/cubes/*.yaml`
- 3. **Create views** — Compose cubes in `bonnard/views/*.yaml`
- 4. **Validate** — `bon validate`
- 5. **Deploy** — `bon login` then `bon deploy -m "description of changes"`
- 6. **Review** — `bon deployments` to list, `bon diff <id>` to inspect changes
+ 1. **Start from questions** — Collect the most common questions your team asks about data. Group them by audience.
+ 2. **Setup datasource** — `bon datasource add --from-dbt` or manual
+ 3. **Create cubes** — Define measures/dimensions in `bonnard/cubes/*.yaml`. Add filtered measures for any metric with a WHERE clause.
+ 4. **Create views** — Compose cubes in `bonnard/views/*.yaml`. Name views by audience/use case, not by table.
+ 5. **Write descriptions** — Descriptions are how AI agents choose views and measures. Lead with scope, cross-reference related views, include dimension values.
+ 6. **Validate** — `bon validate`
+ 7. **Deploy** — `bon login` then `bon deploy -m "description of changes"`
+ 8. **Test with questions** — Query via MCP with real user questions. Check the agent picks the right view and measure.
+ 9. **Iterate** — Fix agent mistakes by improving descriptions and adding filtered measures. Expect 2-4 iterations.
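+
+ As a sketch, one command-line pass through this workflow, using the demo
+ datasource and the view name from the get-started guide:
+
+ ```bash
+ bon datasource add --demo
+ bon validate
+ bon login
+ bon deploy -m "initial cubes and views"
+ bon query '{"measures": ["sales_performance.count"]}'
+ ```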
 
  For a guided walkthrough: `/bonnard-get-started`
  For projects migrating from Metabase: `/bonnard-metabase-migrate`
+ For design principles: `/bonnard-design-guide`
 
  ## Deployment & Change Tracking
 
@@ -117,10 +122,22 @@ Every deploy creates a versioned deployment with change detection:
  - **Diff**: `bon diff <id>` shows all changes; `bon diff <id> --breaking` filters to breaking only
  - **Annotate**: `bon annotate <id> --data '{"object": "note"}'` adds context to changes
 
- For CI/CD pipelines, use `bon deploy --ci -m "message"` (non-interactive, fails on issues) or `bon deploy --push-datasources -m "message"` to auto-push missing datasources.
+ For CI/CD pipelines, use `bon deploy --ci -m "message"` (non-interactive, fails on issues). Datasources are always synced automatically during deploy.
 
- ## Best Practices
+ ## Design Principles
+
+ Summary — see `/bonnard-design-guide` for examples and details.
+
+ 1. **Start from questions, not tables** — collect the 10-20 most common questions, build views that answer them
+ 2. **Views are for audiences, not tables** — name by use case (`sales_pipeline`), not by table (`orders_view`)
+ 3. **Add filtered measures** — if a dashboard card has a WHERE clause beyond date range, make it a filtered measure
+ 4. **Descriptions are the discovery API** — lead with scope, cross-reference related views, include dimension values
+ 5. **Build cross-entity views** — combine cubes when users think across tables; don't force one view per cube
+ 6. **Test with natural language** — ask an agent real questions via MCP; check it picks the right view and measure
+ 7. **Iterate** — expect 2-4 rounds; fix agent mistakes by improving descriptions, not data
+
+ ## Technical Gotchas
 
  - **Always set `data_source`** on cubes — without it, cubes silently use the default warehouse, which breaks when multiple warehouses are added later. `bon validate` warns about this.
- - **Add descriptions** to all cubes, views, measures, and dimensions — these power AI agent discovery and the schema catalog.
+ - **Primary keys must be unique** — Cube deduplicates on the primary key. If the column isn't unique (e.g., a date with multiple rows per day), dimension queries silently return empty results. For tables without a natural unique column, use a `sql` query with `ROW_NUMBER()` to generate a synthetic key (see the sketch after this list).
  - **Use `sql_table` with full schema path** (e.g., `schema.table_name`) for clarity.
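+
+ A minimal sketch of the synthetic-key pattern (table and column names are
+ illustrative):
+
+ ```yaml
+ cubes:
+   - name: daily_metrics
+     # No naturally unique column, so generate one with ROW_NUMBER()
+     sql: >
+       SELECT ROW_NUMBER() OVER (ORDER BY metric_date, region) AS row_id, *
+       FROM analytics.daily_metrics
+     data_source: my_warehouse
+     dimensions:
+       - name: row_id
+         type: number
+         sql: row_id
+         primary_key: true
+         description: Synthetic unique key (row number over date and region)
+ ```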
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@bonnard/cli",
-   "version": "0.2.3",
+   "version": "0.2.5",
    "type": "module",
    "bin": {
      "bon": "./dist/bin/bon.mjs"