@bonnard/cli 0.2.4 → 0.2.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +85 -0
- package/dist/bin/bon.mjs +15 -2
- package/dist/bin/{validate-BdqZBH2n.mjs → validate-Bc8zGNw7.mjs} +75 -3
- package/dist/docs/_index.md +17 -6
- package/dist/docs/topics/catalog.md +36 -0
- package/dist/docs/topics/cli.deploy.md +193 -0
- package/dist/docs/topics/cli.md +113 -0
- package/dist/docs/topics/cli.validate.md +125 -0
- package/dist/docs/topics/cubes.data-source.md +1 -1
- package/dist/docs/topics/features.governance.md +58 -59
- package/dist/docs/topics/features.semantic-layer.md +6 -0
- package/dist/docs/topics/getting-started.md +2 -2
- package/dist/docs/topics/governance.md +83 -0
- package/dist/docs/topics/overview.md +49 -0
- package/dist/docs/topics/querying.mcp.md +200 -0
- package/dist/docs/topics/querying.md +11 -0
- package/dist/docs/topics/querying.rest-api.md +198 -0
- package/dist/docs/topics/querying.sdk.md +53 -0
- package/dist/docs/topics/slack-teams.md +18 -0
- package/dist/docs/topics/views.md +17 -9
- package/dist/docs/topics/workflow.md +6 -5
- package/dist/templates/claude/skills/bonnard-design-guide/SKILL.md +233 -0
- package/dist/templates/claude/skills/bonnard-get-started/SKILL.md +49 -15
- package/dist/templates/claude/skills/bonnard-metabase-migrate/SKILL.md +28 -9
- package/dist/templates/cursor/rules/bonnard-design-guide.mdc +232 -0
- package/dist/templates/cursor/rules/bonnard-get-started.mdc +49 -15
- package/dist/templates/cursor/rules/bonnard-metabase-migrate.mdc +28 -9
- package/dist/templates/shared/bonnard.md +28 -11
- package/package.json +2 -2
package/dist/docs/topics/querying.rest-api.md
@@ -0,0 +1,198 @@
# REST API

> Query your deployed semantic layer using the Bonnard REST API. Send JSON query objects or SQL strings to retrieve measures and dimensions with filtering, grouping, and time ranges.

## Overview

After deploying with `bon deploy`, you can query the semantic layer using `bon query`. This tests that your cubes and views work correctly and returns data from your warehouse through Bonnard.

## Query Formats

Bonnard supports two query formats:

### JSON Format (Default)

The JSON format uses the REST API structure:

```bash
bon query '{"measures": ["orders.count"]}'

bon query '{
  "measures": ["orders.total_revenue"],
  "dimensions": ["orders.status"],
  "filters": [{
    "member": "orders.created_at",
    "operator": "inDateRange",
    "values": ["2024-01-01", "2024-12-31"]
  }]
}'
```

**JSON Query Properties:**

| Property | Description |
|----------|-------------|
| `measures` | Array of measures to calculate (e.g., `["orders.count"]`) |
| `dimensions` | Array of dimensions to group by (e.g., `["orders.status"]`) |
| `filters` | Array of filter objects |
| `timeDimensions` | Time-based grouping with granularity |
| `segments` | Named filters defined in cubes |
| `limit` | Max rows to return |
| `offset` | Skip rows (pagination) |
| `order` | Sort specification |

**Filter Operators:**

| Operator | Use Case |
|----------|----------|
| `equals`, `notEquals` | Exact match |
| `contains`, `notContains` | String contains |
| `startsWith`, `endsWith` | String prefix/suffix |
| `gt`, `gte`, `lt`, `lte` | Numeric comparison |
| `inDateRange`, `beforeDate`, `afterDate` | Time filtering |
| `set`, `notSet` | NULL checks |
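If you build these query objects in code rather than by hand, the property and operator tables above translate directly into types. A minimal sketch (the type names here are illustrative, not part of any Bonnard package; the field names come from the tables above):

```typescript
// Illustrative types derived from the property and operator tables above.
type FilterOperator =
  | 'equals' | 'notEquals' | 'contains' | 'notContains'
  | 'startsWith' | 'endsWith' | 'gt' | 'gte' | 'lt' | 'lte'
  | 'inDateRange' | 'beforeDate' | 'afterDate' | 'set' | 'notSet';

interface Filter {
  member: string;
  operator: FilterOperator;
  values?: string[]; // omitted for set/notSet
}

interface Query {
  measures: string[];
  dimensions?: string[];
  filters?: Filter[];
  limit?: number;
  offset?: number;
}

// Build the argument string passed to `bon query`:
const query: Query = {
  measures: ['orders.total_revenue'],
  dimensions: ['orders.status'],
  filters: [
    {
      member: 'orders.created_at',
      operator: 'inDateRange',
      values: ['2024-01-01', '2024-12-31'],
    },
  ],
};
console.log(JSON.stringify(query));
```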

### SQL Format

The SQL format uses the SQL API, where cubes are tables:

```bash
bon query --sql "SELECT status, MEASURE(count) FROM orders GROUP BY 1"

bon query --sql "SELECT
  city,
  MEASURE(total_revenue),
  MEASURE(avg_order_value)
FROM orders
WHERE status = 'completed'
GROUP BY 1
ORDER BY 2 DESC
LIMIT 10"
```

**SQL Syntax Rules:**

1. **Cubes/views are tables** — `FROM orders` references the `orders` cube
2. **Dimensions are columns** — Include in `SELECT` and `GROUP BY`
3. **Measures use `MEASURE()`** — Or matching aggregates (`SUM`, `COUNT`, etc.)
4. **Segments are boolean** — Filter with `WHERE is_completed IS TRUE`

**Examples:**

```sql
-- Count orders by status
SELECT status, MEASURE(count) FROM orders GROUP BY 1

-- Revenue by city with filter
SELECT city, SUM(amount) FROM orders WHERE status = 'shipped' GROUP BY 1

-- Using a time dimension with granularity
SELECT DATE_TRUNC('month', created_at), MEASURE(total_revenue)
FROM orders
GROUP BY 1
ORDER BY 1
```

## CLI Usage

```bash
# JSON format (default)
bon query '{"measures": ["orders.count"]}'

# SQL format
bon query --sql "SELECT MEASURE(count) FROM orders"

# Limit rows
bon query '{"measures": ["orders.count"], "dimensions": ["orders.city"]}' --limit 10

# JSON output (instead of table)
bon query '{"measures": ["orders.count"]}' --format json
```

## Output Formats

### Table Format (Default)

```
┌─────────┬──────────────┐
│ status  │ orders.count │
├─────────┼──────────────┤
│ pending │ 42           │
│ shipped │ 156          │
│ done    │ 892          │
└─────────┴──────────────┘
```

### JSON Format

```bash
bon query '{"measures": ["orders.count"]}' --format json
```

```json
[
  { "orders.status": "pending", "orders.count": 42 },
  { "orders.status": "shipped", "orders.count": 156 },
  { "orders.status": "done", "orders.count": 892 }
]
```
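Since `--format json` prints a plain JSON array to stdout, the output pipes cleanly into scripts. A minimal sketch that totals the counts from the sample output above (the rows are inlined here for illustration; in practice you would pipe the CLI output into the script):

```typescript
// Rows as printed by `bon query ... --format json` (inlined for illustration;
// normally you would read them from stdin after piping the CLI output).
const rows: Record<string, string | number>[] = [
  { 'orders.status': 'pending', 'orders.count': 42 },
  { 'orders.status': 'shipped', 'orders.count': 156 },
  { 'orders.status': 'done', 'orders.count': 892 },
];

// Sum one measure across all rows; keys are fully qualified member names.
const total = rows.reduce((sum, row) => sum + Number(row['orders.count']), 0);
console.log(total); // 1090
```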

## Common Patterns

### Time Series Analysis

```bash
bon query '{
  "measures": ["orders.total_revenue"],
  "timeDimensions": [{
    "dimension": "orders.created_at",
    "granularity": "month",
    "dateRange": ["2024-01-01", "2024-12-31"]
  }]
}'
```

### Filtering by Dimension

```bash
bon query '{
  "measures": ["orders.count"],
  "dimensions": ["orders.city"],
  "filters": [{
    "member": "orders.status",
    "operator": "equals",
    "values": ["completed"]
  }]
}'
```

### Multiple Measures

```bash
bon query '{
  "measures": ["orders.count", "orders.total_revenue", "orders.avg_order_value"],
  "dimensions": ["orders.category"]
}'
```
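### Sorting and Pagination

The properties table earlier also lists `order`, `limit`, and `offset`, which combine for sorted, paged results. This document does not show the shape of the `order` value, so the member-to-direction object below is an assumption to verify against the deployed API's reference:

```json
{
  "measures": ["orders.count"],
  "dimensions": ["orders.city"],
  "order": { "orders.count": "desc" },
  "limit": 10,
  "offset": 10
}
```

Pass the object to `bon query` as a single quoted argument, like the patterns above.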

## Error Handling

### Common Errors

**"Projection references non-aggregate values"**
- All dimensions must be in `GROUP BY`
- All measures must use `MEASURE()` or matching aggregate

**"Cube not found"**
- Check cube name matches deployed cube
- Run `bon deploy` if cubes changed

**"Not logged in"**
- Run `bon login` first

## See Also

- [cli.deploy](cli.deploy) - Deploy before querying
- [cubes.measures](cubes.measures) - Define measures
- [cubes.dimensions](cubes.dimensions) - Define dimensions
- [views](views) - Create focused query interfaces
package/dist/docs/topics/querying.sdk.md
@@ -0,0 +1,53 @@
# SDK

> Build custom data apps on top of your semantic layer.

The Bonnard SDK (`@bonnard/sdk`) is a lightweight TypeScript client for querying your deployed semantic layer programmatically. Build dashboards, embedded analytics, internal tools, or data pipelines — all backed by your governed metrics.

## Quick start

```bash
npm install @bonnard/sdk
```

```typescript
import { createClient } from '@bonnard/sdk';

const bonnard = createClient({
  apiKey: 'your-api-key',
});

const result = await bonnard.query({
  cube: 'orders',
  measures: ['revenue', 'count'],
  dimensions: ['status'],
  timeDimension: {
    dimension: 'created_at',
    granularity: 'month',
    dateRange: ['2025-01-01', '2025-12-31'],
  },
});
```

## Type-safe queries

Full TypeScript support with inference. Measures, dimensions, filters, time dimensions, and sort orders are all typed. Query results include field annotations with titles and types.

```typescript
const result = await bonnard.sql<OrderRow>(
  `SELECT status, MEASURE(revenue) FROM orders GROUP BY 1`
);
// result.data is OrderRow[]
```

## What you can build

- **Custom dashboards** — Query your semantic layer from Next.js, React, or any frontend
- **Embedded analytics** — Add governed metrics to your product
- **Data pipelines** — Consume semantic layer data in ETL workflows
- **Internal tools** — Build admin panels backed by consistent metrics

## See Also

- [Overview](/docs/overview) — Platform overview
- [querying.rest-api](querying.rest-api) — Query format reference
package/dist/docs/topics/slack-teams.md
@@ -0,0 +1,18 @@
# Slack & Teams (Coming Soon)

> AI agents in your team chat. Coming soon.

Bonnard will bring semantic layer queries directly into Slack and Microsoft Teams — so anyone on your team can ask questions about data without leaving the conversation.

## Planned capabilities

- **Natural language queries** — Ask "what was revenue last month?" in a channel and get an answer with a chart
- **Governed responses** — Every answer goes through your semantic layer, so metrics are always consistent
- **Shared context** — Results posted in channels are visible to the whole team, not siloed in individual AI chats
- **Proactive alerts** — Get notified when key metrics change beyond thresholds you define

## Why it matters

Your team already lives in Slack and Teams. Instead of asking analysts or switching to a BI tool, anyone can get instant, governed answers right where they work. The same semantic layer that powers your AI agents and dashboards powers your team chat.

Interested in early access? Reach out at [hello@bonnard.dev](mailto:hello@bonnard.dev).
package/dist/docs/topics/views.md
@@ -6,11 +6,18 @@
 
 Views are facades that expose selected measures and dimensions from one or more cubes. They define which data is available to consumers, control join paths, and organize members into logical groups.
 
+**Views should represent how a team thinks about data**, not mirror your warehouse tables. Name views by what they answer (`sales_pipeline`, `customer_insights`) rather than what table they wrap (`orders_view`, `users_view`). A good semantic layer has 5-10 focused views, not 30+ thin wrappers.
+
 ## Example
 
 ```yaml
 views:
-  - name:
+  - name: sales_analytics
+    description: >-
+      Sales team view — order revenue, counts, and status breakdowns with
+      customer details. Default view for revenue and order questions. Use the
+      status dimension (values: pending, completed, cancelled) to filter.
+      For customer-level analysis, use customer_insights instead.
     cubes:
       - join_path: orders
         includes:
@@ -39,13 +46,13 @@ views:
 
 ## Why Use Views?
 
-### 1.
+### 1. Curate for Audiences
 
-Expose only
+Expose only the measures and dimensions a specific audience needs. A single `orders` cube might contribute to a `sales_analytics` view (revenue by status), a `management_kpis` view (high-level totals), and a `finance_reporting` view (invoice amounts). Each view shows different slices of the same data.
 
 ```yaml
 views:
-  - name:
+  - name: sales_analytics
     cubes:
       - join_path: orders
         includes:
@@ -124,11 +131,12 @@ bonnard/views/
 
 ## Best Practices
 
-1. **
-2. **
-3. **
-4. **
-5. **
+1. **Name views by audience/use case** — `sales_pipeline` not `opportunities_view`. Views represent how teams think about data, not your warehouse schema.
+2. **Write descriptions that help AI agents choose** — Lead with scope ("Sales team — revenue, order counts..."), cross-reference related views ("For customer demographics, use customer_insights instead"), and include key dimension values.
+3. **Combine multiple cubes** — A view should pull from whichever cubes answer a team's questions. Don't limit views to one cube each.
+4. **Be explicit with includes** — list members rather than using `*`
+5. **Alias for clarity** — rename members when needed
+6. **Organize with folders** — group related members together
 
 ## See Also
 
package/dist/docs/topics/workflow.md
@@ -131,11 +131,12 @@ bonnard/cubes/
 
 ## Best Practices
 
-1. **Start
-2. **
-3. **
-4. **
-5. **Test with
+1. **Start from questions** — collect the most common questions your team asks, then build views that answer them. Don't just mirror your warehouse tables.
+2. **Add filtered measures** — if a dashboard card has a WHERE clause beyond a date range, that filter should be a filtered measure. This is the #1 way to match real dashboard numbers.
+3. **Write descriptions for agents** — descriptions are how AI agents choose which view and measure to use. Lead with scope, cross-reference related views, include dimension values.
+4. **Validate often** — run `bon validate` after each change
+5. **Test with real questions** — after deploying, ask an AI agent via MCP the same questions your team asks. Check that it picks the right view and measure.
+6. **Iterate** — expect 2-4 rounds of deploying, testing with questions, and improving descriptions before agents reliably answer the top 10 questions.
 
 ## Commands Reference
 
package/dist/templates/claude/skills/bonnard-design-guide/SKILL.md
@@ -0,0 +1,233 @@
---
name: bonnard-design-guide
description: Design principles for building semantic layers that work well for AI agents and business users. Use when building views, writing descriptions, or improving agent accuracy.
allowed-tools: Bash(bon *)
---

# Semantic Layer Design Guide

This guide covers design decisions that determine whether your semantic layer works for end users and AI agents. It complements the setup guides (`/bonnard-get-started`, `/bonnard-metabase-migrate`), which cover mechanics.

Read this before building views, or revisit it when agents return wrong answers or users can't find the right metrics.

## Principle 1: Start from Questions, Not Tables

The natural instinct is: look at tables, build cubes, expose everything. This produces a semantic layer that mirrors your warehouse schema — technically correct but useless to anyone who doesn't already know the schema.

**Instead, start from what people ask:**

1. Collect the 10-20 most common questions your team asks about data
2. For each question, identify which tables/columns are needed to answer it
3. Group questions by audience (who asks them)
4. Build views that answer those question groups

If you have a BI tool (Metabase, Looker, Tableau), your top dashboards by view count are the best source of real questions. If not, ask each team: "What 3 numbers do you check every week?"

**Why this matters:** A semantic layer built from questions has 5-10 focused views. One built from tables has 30+ views that agents struggle to choose between. Fewer, well-scoped views with clear descriptions outperform comprehensive but undifferentiated coverage.

## Principle 2: Views Are for Audiences, Not Tables

A common mistake is creating one view per cube (table). This produces views like `orders_view`, `users_view`, `invoices_view` — which is just the warehouse schema with extra steps.

**Views should represent how a team thinks about data:**

```
BAD (model-centric):                  GOOD (audience-centric):
views/                                views/
  orders_view.yaml (1 cube)             management.yaml (revenue + users)
  users_view.yaml (1 cube)              sales.yaml (opportunities + invoices)
  invoices_view.yaml (1 cube)           product.yaml (users + funnel + contracts)
  opportunities_view.yaml (1 cube)      partners.yaml (partners + billing)
```

The same cube often appears in multiple views. An `opportunities` cube might contribute to `sales_opportunities` (filtered to active offers), a management KPI view, and a partner performance view. Each view exposes different measures and dimensions because different audiences need different slices.

**Name views by what they answer**, not what table they wrap: `sales_pipeline` not `opportunities_overview`.

## Principle 3: Add Filtered Measures

This is the single most impactful thing you can do. Look at any real dashboard: it almost never shows `COUNT(*)` from a raw table. It shows "active offers" (type=Angebot AND status!=Cancelled), "unpaid invoices" (status NOT IN terminal states).

Without filtered measures, agents return unfiltered totals that don't match what users see in their dashboards. Users lose trust immediately.

**For every important dashboard card, check the WHERE clause:**

```yaml
# Dashboard shows "Active Offers: 7,500"
# But raw COUNT(*) on opportunities returns 29,000
# The card SQL has: WHERE type = 'Angebot' AND status != 'Abgesagt'

measures:
  - name: count
    type: count
    description: Total opportunities (all types and statuses)

  - name: active_offer_count
    type: count
    description: Non-cancelled offers only (type=Angebot, status!=Abgesagt)
    filters:
      - sql: "{CUBE}.type = 'Angebot'"
      - sql: "{CUBE}.status != 'Abgesagt'"
```

**Common patterns that need filtered measures:**
- Status filters: active/open/pending items vs all items
- Type filters: specific transaction types vs all transactions
- Boolean flags: completed, paid, verified subsets
- Exclusions: excluding test data, internal users, cancelled records

A good rule of thumb: if a BI dashboard card has a WHERE clause beyond just a date range, that filter should probably be a filtered measure.

## Principle 4: Descriptions Are the Discovery API

For AI agents, descriptions are not documentation — they are the **primary mechanism for choosing which view and measure to use**. When an agent calls `explore_schema`, it sees view names and descriptions. That's all it has to decide where to query.

### View descriptions must answer three questions:

1. **What's in here?** — Lead with the scope and content
2. **When should I use this?** — The default use case
3. **When should I use something else?** — Explicit disambiguation

```yaml
# BAD: Generic, doesn't help agent choose
description: User metrics and dimensions

# GOOD: Scoped, navigational, includes data values
description: >-
  Sales pipeline — active and historical opportunities with contract values
  and assignee details. Default view for pipeline questions, deal counts,
  and contract value analysis. Use the type dimension (values: Angebot,
  Auftrag) to filter by opportunity type. For invoice-level detail, use
  sales_invoices instead.
```

### Description writing rules:

**Lead with scope, not mechanics.** "All users across both products" not "Wraps the all_users cube." Agents match question keywords against description keywords.

**Include actual dimension values.** If a dimension has known categorical values, list them: `(values: erben, vererben)`, `(values: B2B, Organic)`. This helps agents map user language to filter values.

**Use the same vocabulary as your users.** If your team says "testaments" not "will_and_testament", the description should say "testaments."

**Cross-reference related views.** When two views could plausibly answer the same question, both descriptions should point to each other: "For company-wide totals, use company_users instead." This is the most effective way to prevent agents from picking the wrong view.

### Measure descriptions should say what's included/excluded:

```yaml
# BAD
description: Count of orders

# GOOD
description: >-
  Orders with status 'completed' or 'shipped' only.
  Excludes cancelled and pending. For all orders, use total_order_count.
```

## Principle 5: Build Cross-Entity Views When Users Think Across Tables

Sometimes the most common question doesn't map to any single table. "How many total signups do we have?" might require combining users from two separate product tables.

**Options:**

1. **Database view/UNION table** — If your warehouse has a combined view, build a cube on it. Cleanest approach.
2. **Multiple cubes in one view via joins** — If cubes can be joined through foreign keys, include both in a view using `join_path`.
3. **Separate views with clear descriptions** — If data can't be combined, create separate views and describe how they relate.

Don't force everything into one view. It's better to have an agent make two clear queries than one confused query against an over-joined view.
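Option 2 can be sketched in view YAML. The cube names and members below are hypothetical, and the multi-`join_path` layout follows the `views:` examples elsewhere in these docs; verify against the views reference before copying:

```yaml
# Hypothetical: one view spanning two joinable cubes.
views:
  - name: product_signups
    description: >-
      Signups across both products. For per-product detail,
      use the product-specific views instead.
    cubes:
      - join_path: app_users
        includes:
          - count
          - signup_date
      - join_path: web_users
        includes:
          - count
```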

## Principle 6: Test with Natural Language, Not Just Numbers

Verifying that `bon query '{"measures": ["orders.count"]}'` returns the right number is necessary but not sufficient. The actual failure mode is:

> User asks "how many active offers do we have?"
> Agent queries `orders.count` instead of `sales_pipeline.active_offer_count`
> Returns 29,000 instead of 7,500

**To test properly, give real questions to an AI agent via MCP and check:**

1. Did it find the right **view**?
2. Did it pick the right **measure**?
3. Did it apply the right **filters/date range**?
4. Is the final **number** correct?

Steps 1-3 are where most failures happen, caused by description and view structure problems — not wrong data.

**Build a small eval set:**
- Write 5-10 questions that your users actually ask
- For each question, record the expected view, measure, and answer
- Run each question through an agent 3-5 times (agents are non-deterministic)
- If pass rate is below 80%, the issue is almost always the view description or view structure, not the data
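The eval set above doesn't need tooling beyond a list of cases and a pass-rate check. A minimal sketch (the `EvalCase` fields mirror the bullets above; how you actually send each question to the agent depends on your harness and is omitted here):

```typescript
// One case per real user question, per the bullets above.
interface EvalCase {
  question: string;
  expectedView: string;
  expectedMeasure: string;
  expectedAnswer: number;
}

// Your harness would send cases[i].question to the agent and compare.
const cases: EvalCase[] = [
  {
    question: 'how many active offers do we have?',
    expectedView: 'sales_pipeline',
    expectedMeasure: 'active_offer_count',
    expectedAnswer: 7500,
  },
];

// Agents are non-deterministic: run each case 3-5 times and track pass/fail.
function passRate(results: boolean[]): number {
  return results.filter(Boolean).length / results.length;
}

const runs = [true, true, false, true, true]; // e.g., 5 runs of cases[0]
console.log(passRate(runs)); // 0.8
```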

## Principle 7: Iterate — The First Deploy Is Never Right

The semantic layer is not a one-time build. The effective workflow is:

```
Build views -> Deploy -> Test with questions -> Find agent mistakes
     ^                                                |
     +------- Improve descriptions/measures <---------+
```

Expect 2-4 iterations before agents reliably answer the top 10 questions. Each iteration typically involves:

- Rewriting 1-2 view descriptions to improve disambiguation
- Adding 1-3 filtered measures that match dashboard WHERE clauses
- Occasionally restructuring a view (splitting or merging)

**Don't try to get it perfect before deploying.** Deploy early with your best guess, test with real questions, and fix what breaks.

## Quick Checklist

Before deploying, review each view against this checklist:

- [ ] **View name** describes the audience/use case, not the underlying table
- [ ] **View description** leads with scope ("All users..." / "Sales team only...")
- [ ] **View description** cross-references related views ("For X, use Y instead")
- [ ] **View description** includes key dimension values where helpful
- [ ] **Filtered measures** exist for every dashboard card with a WHERE clause
- [ ] **Measure descriptions** say what's included AND excluded
- [ ] **Dimension descriptions** include example values for categorical fields
- [ ] **No two views** could plausibly answer the same question without disambiguation