@bonnard/cli 0.2.5 → 0.2.6

package/README.md ADDED
@@ -0,0 +1,85 @@
1
+ # @bonnard/cli
2
+
3
+ The Bonnard CLI (`bon`) takes you from zero to a deployed semantic layer in minutes. Define metrics in YAML, validate locally, deploy, and query — from your terminal or AI coding agent.
4
+
5
+ ## Install
6
+
7
+ ```bash
8
+ npm install -g @bonnard/cli
9
+ ```
10
+
11
+ Requires Node.js 20+.
12
+
13
+ ## Quick start
14
+
15
+ ```bash
16
+ bon init # Create project structure + agent templates
17
+ bon datasource add --demo # Add demo dataset (no warehouse needed)
18
+ bon validate # Check syntax
19
+ bon login # Authenticate with Bonnard
20
+ bon deploy -m "Initial deploy" # Deploy to Bonnard
21
+ ```
22
+
23
+ ## Commands
24
+
25
+ | Command | Description |
26
+ |---------|-------------|
27
+ | `bon init` | Create project structure and AI agent templates |
28
+ | `bon login` | Authenticate with Bonnard |
29
+ | `bon logout` | Remove stored credentials |
30
+ | `bon whoami` | Show current login status |
31
+ | `bon datasource add` | Add a data source (interactive) |
32
+ | `bon datasource add --demo` | Add read-only demo dataset |
33
+ | `bon datasource add --from-dbt` | Import from dbt profiles |
34
+ | `bon datasource list` | List configured data sources |
35
+ | `bon datasource remove <name>` | Remove a data source |
36
+ | `bon validate` | Validate cube and view YAML |
37
+ | `bon deploy -m "message"` | Deploy to Bonnard |
38
+ | `bon deployments` | List deployment history |
39
+ | `bon diff <id>` | View changes in a deployment |
40
+ | `bon annotate <id>` | Add context to deployment changes |
41
+ | `bon query '{"measures":["orders.count"]}'` | Query the semantic layer (JSON) |
42
+ | `bon query "SELECT ..." --sql` | Query the semantic layer (SQL) |
43
+ | `bon mcp` | MCP setup instructions for AI agents |
44
+ | `bon mcp test` | Test MCP server connectivity |
45
+ | `bon docs [topic]` | Browse modeling documentation |
46
+ | `bon docs --search "joins"` | Search documentation |
47
+
48
+ ## Agent-ready from the start
49
+
50
+ `bon init` generates context files for your AI coding tools:
51
+
52
+ - **Claude Code** — `.claude/rules/` + get-started skill
53
+ - **Cursor** — `.cursor/rules/` with auto-apply frontmatter
54
+ - **Codex** — `AGENTS.md` + skills folder
55
+
56
+ Your agent understands Bonnard's modeling language from the first prompt.
57
+
58
+ ## Project structure
59
+
60
+ After `bon init`:
61
+
62
+ ```
63
+ my-project/
64
+ ├── bon.yaml # Project configuration
65
+ ├── bonnard/
66
+ │   ├── cubes/ # Cube definitions (measures, dimensions, joins)
67
+ │   └── views/ # View definitions (curated query interfaces)
68
+ └── .bon/ # Local config (gitignored)
69
+     └── datasources.yaml # Data source credentials
70
+ ```
71
+
72
+ ## CI/CD
73
+
74
+ ```bash
75
+ bon deploy --ci -m "CI deploy"
76
+ ```
77
+
78
+ Non-interactive mode for pipelines. Datasources are synced automatically.
79
+
80
+ ## Documentation
81
+
82
+ - [Getting Started](https://docs.bonnard.dev/docs/getting-started)
83
+ - [CLI Reference](https://docs.bonnard.dev/docs/cli)
84
+ - [Modeling Guide](https://docs.bonnard.dev/docs/modeling/cubes)
85
+ - [Querying](https://docs.bonnard.dev/docs/querying)
@@ -52,13 +52,24 @@
52
52
  - [syntax.references](syntax.references) - Reference columns, members, and cubes
53
53
  - [syntax.context-variables](syntax.context-variables) - CUBE, FILTER_PARAMS, COMPILE_CONTEXT
54
54
 
55
- ## Workflow
55
+ ## Querying
56
56
 
57
- - [workflow](workflow) - End-to-end development workflow
58
- - [workflow.validate](workflow.validate) - Validate cubes and views locally
59
- - [workflow.deploy](workflow.deploy) - Deploy to Bonnard
60
- - [workflow.query](workflow.query) - Query the deployed semantic layer
61
- - [workflow.mcp](workflow.mcp) - Connect AI agents via MCP
57
+ - [querying](querying) - Query your deployed semantic layer
58
+ - [querying.mcp](querying.mcp) - Connect AI agents via MCP
59
+ - [querying.rest-api](querying.rest-api) - REST API and SQL query reference
60
+ - [querying.sdk](querying.sdk) - TypeScript SDK for custom apps
61
+
62
+ ## CLI
63
+
64
+ - [cli](cli) - CLI commands and development workflow
65
+ - [cli.deploy](cli.deploy) - Deploy to Bonnard
66
+ - [cli.validate](cli.validate) - Validate cubes and views locally
67
+
68
+ ## Other
69
+
70
+ - [governance](governance) - User and group-level permissions
71
+ - [catalog](catalog) - Browse your data model in the browser
72
+ - [slack-teams](slack-teams) - AI agents in team chat (coming soon)
62
73
 
63
74
  ## Quick Reference
64
75
 
@@ -0,0 +1,36 @@
1
+ # Catalog
2
+
3
+ > Browse and understand your data model — no code required.
4
+
5
+ The Bonnard catalog gives everyone on your team a live view of your semantic layer. Browse cubes, views, measures, and dimensions from the browser. Understand what data is available before writing a single query.
6
+
7
+ ## What you can explore
8
+
9
+ - **Cubes and Views** — See every deployed source with field counts at a glance
10
+ - **Measures** — Aggregation type, SQL expression, format (currency, percentage), and description
11
+ - **Dimensions** — Data type, time granularity options, and custom metadata
12
+ - **Segments** — Pre-defined filters available for queries
13
+
14
+ ## Field-level detail
15
+
16
+ Click any field to see exactly how it's calculated:
17
+
18
+ - **SQL expression** — The underlying query logic
19
+ - **Type and format** — How the field is aggregated and displayed
20
+ - **Origin cube** — Which cube a view field traces back to
21
+ - **Referenced fields** — Dependencies this field relies on
22
+ - **Custom metadata** — Tags, labels, and annotations set by your data team
23
+
24
+ ## Built for business users
25
+
26
+ The catalog is designed for anyone who needs to understand the data, not just engineers. No YAML, no terminal, no warehouse credentials. Browse the schema, read descriptions, and know exactly what to ask your AI agent for.
27
+
28
+ ## Coming soon
29
+
30
+ - **Relationship visualization** — An interactive visual map showing how cubes connect through joins and shared dimensions
31
+ - **Impact analysis** — Understand which views and measures are affected when you change a cube, before you deploy
32
+
33
+ ## See Also
34
+
35
+ - [views](views) — How to create curated views for your team
36
+ - [cubes.public](cubes.public) — Control which cubes are visible
@@ -0,0 +1,193 @@
1
+ # Deploy
2
+
3
+ > Deploy your cubes and views to the Bonnard platform using the CLI. Once deployed, your semantic layer is queryable via the REST API, MCP for AI agents, and connected BI tools.
4
+
5
+ ## Overview
6
+
7
+ The `bon deploy` command uploads your cubes and views to Bonnard, making them available for querying via the API. It validates and tests connections before deploying, and creates a versioned deployment with change detection.
8
+
9
+ ## Usage
10
+
11
+ ```bash
12
+ bon deploy -m "description of changes"
13
+ ```
14
+
15
+ A `-m` message is **required** — it describes what changed in this deployment.
16
+
17
+ ### Flags
18
+
19
+ | Flag | Description |
20
+ |------|-------------|
21
+ | `-m "message"` | **Required.** Deployment description |
22
+ | `--ci` | Non-interactive mode |
23
+
24
+ Datasources are always synced automatically during deploy.
25
+
26
+ ### CI/CD
27
+
28
+ For automated pipelines, use `--ci` for non-interactive mode:
29
+
30
+ ```bash
31
+ bon deploy --ci -m "CI deploy"
32
+ ```
33
+
34
+ ## Prerequisites
35
+
36
+ 1. **Logged in** — run `bon login` first
37
+ 2. **Valid cubes and views** — must pass `bon validate`
38
+ 3. **Working connections** — data sources must be accessible
39
+
40
+ ## What Happens
41
+
42
+ 1. **Validates** — checks cubes and views for errors
43
+ 2. **Tests connections** — verifies data source access
44
+ 3. **Uploads** — sends cubes and views to Bonnard
45
+ 4. **Detects changes** — compares against the previous deployment
46
+ 5. **Activates** — makes cubes and views available for queries
47
+
48
+ ## Example Output
49
+
50
+ ```
51
+ bon deploy -m "Add revenue metrics"
52
+
53
+ ✓ Validating...
54
+ ✓ bonnard/cubes/orders.yaml
55
+ ✓ bonnard/cubes/users.yaml
56
+ ✓ bonnard/views/orders_overview.yaml
57
+
58
+ ✓ Testing connections...
59
+ ✓ datasource "default" connected
60
+
61
+ ✓ Deploying to Bonnard...
62
+ Uploading 2 cubes, 1 view...
63
+
64
+ ✓ Deploy complete!
65
+
66
+ Changes:
67
+ + orders.total_revenue (measure)
68
+ + orders.avg_order_value (measure)
69
+ ~ orders.count (measure) — type changed
70
+
71
+ ⚠ 1 breaking change detected
72
+ ```
73
+
74
+ ## Change Detection
75
+
76
+ Every deployment is versioned. Bonnard automatically detects:
77
+
78
+ - **Added** — new cubes, views, measures, dimensions
79
+ - **Modified** — changes to type, SQL, format, description
80
+ - **Removed** — deleted fields (flagged as breaking)
81
+ - **Breaking changes** — removed measures/dimensions, type changes
82
+
83
+ ## Reviewing Deployments
84
+
85
+ After deploying, use these commands to review history and changes:
86
+
87
+ ### List deployments
88
+
89
+ ```bash
90
+ bon deployments # Recent deployments
91
+ bon deployments --all # Full history
92
+ ```
93
+
94
+ ### View changes in a deployment
95
+
96
+ ```bash
97
+ bon diff <deployment-id> # All changes
98
+ bon diff <deployment-id> --breaking # Breaking changes only
99
+ ```
100
+
101
+ ### Annotate changes
102
+
103
+ Add reasoning or context to deployment changes:
104
+
105
+ ```bash
106
+ bon annotate <deployment-id> --data '{"object": "note about why this changed"}'
107
+ ```
108
+
109
+ Annotations are visible in the schema catalog and help teammates understand why changes were made.
110
+
111
+ ## Deploy Flow
112
+
113
+ ```
114
+ bon deploy -m "message"
115
+
116
+ ├── 1. bon validate (must pass)
117
+
118
+ ├── 2. Test all datasource connections (must succeed)
119
+
120
+ ├── 3. Upload to Bonnard API
121
+ │ - cubes from bonnard/cubes/
122
+ │ - views from bonnard/views/
123
+ │ - datasource configs
124
+
125
+ ├── 4. Detect changes vs. previous deployment
126
+
127
+ └── 5. Activate deployment
128
+ ```
129
+
130
+ ## Error Handling
131
+
132
+ ### Validation Errors
133
+
134
+ ```
135
+ ✗ Validating...
136
+
137
+ bonnard/cubes/orders.yaml:15:5
138
+ error: Unknown measure type "counts"
139
+
140
+ Deploy aborted. Fix validation errors first.
141
+ ```
142
+
143
+ ### Connection Errors
144
+
145
+ ```
146
+ ✗ Testing connections...
147
+ ✗ datasource "analytics": Connection refused
148
+
149
+ Deploy aborted. Fix connection issues:
150
+ - Check credentials in .bon/datasources.yaml
151
+ - Verify network access to database
152
+ - Run: bon datasource add (to reconfigure)
153
+ ```
154
+
155
+ ### Auth Errors
156
+
157
+ ```
158
+ ✗ Not logged in.
159
+
160
+ Run: bon login
161
+ ```
162
+
163
+ ## What Gets Deployed
164
+
165
+ | Source | Deployed |
166
+ |--------|----------|
167
+ | `bonnard/cubes/*.yaml` | All cube definitions |
168
+ | `bonnard/views/*.yaml` | All view definitions |
169
+ | `.bon/datasources.yaml` | Connection configs (credentials encrypted) |
170
+ | `bon.yaml` | Project settings |
171
+
172
+ ## Deployment Behavior
173
+
174
+ - **Replaces** previous deployment (not additive)
175
+ - **All or nothing** — partial deploys don't happen
176
+ - **Instant** — changes take effect immediately
177
+ - **Versioned** — every deployment is tracked with changes
178
+
179
+ ## Best Practices
180
+
181
+ 1. **Always include a meaningful message** — helps teammates understand what changed
182
+ 2. **Validate first** — run `bon validate` before deploy
183
+ 3. **Test locally** — verify queries work before deploying
184
+ 4. **Use version control** — commit cubes and views before deploying
185
+ 5. **Review after deploy** — use `bon diff` to check for unintended breaking changes
186
+ 6. **Annotate breaking changes** — add context so consumers know what to update
187
+
188
+ ## See Also
189
+
190
+ - cli
191
+ - cli.validate
192
+ - cubes
193
+ - views
@@ -0,0 +1,113 @@
1
+ # CLI
2
+
3
+ > Built for agent-first development by data engineers.
4
+
5
+ The Bonnard CLI (`bon`) takes you from zero to a deployed semantic layer in minutes. Initialize a project, connect your warehouse, define metrics in YAML, validate locally, and deploy — all from your terminal or your AI coding agent.
6
+
7
+ ## Agent-ready from the start
8
+
9
+ `bon init` generates context files for your AI coding tools automatically:
10
+
11
+ - **Claude Code** — `.claude/rules/` + get-started skill
12
+ - **Cursor** — `.cursor/rules/` with auto-apply frontmatter
13
+ - **Codex** — `AGENTS.md` + skills folder
14
+
15
+ Your agent understands Bonnard's modeling language from the first prompt.
16
+
17
+ ## Project Structure
18
+
19
+ After `bon init`, your project has:
20
+
21
+ ```
22
+ my-project/
23
+ ├── bon.yaml # Project configuration
24
+ ├── bonnard/ # Semantic layer definitions
25
+ │   ├── cubes/ # Cube definitions
26
+ │   │   └── orders.yaml
27
+ │   └── views/ # View definitions
28
+ │       └── orders_overview.yaml
29
+ └── .bon/ # Local config (gitignored)
30
+     └── datasources.yaml # Data source credentials
31
+ ```
32
+
33
+ ## File Organization
34
+
35
+ ### One Cube Per File
36
+
37
+ ```
38
+ bonnard/cubes/
39
+ ├── orders.yaml
40
+ ├── users.yaml
41
+ ├── products.yaml
42
+ └── line_items.yaml
43
+ ```
44
+
45
+ ### Related Cubes Together
46
+
47
+ ```
48
+ bonnard/cubes/
49
+ ├── sales/
50
+ │ ├── orders.yaml
51
+ │ └── line_items.yaml
52
+ ├── users/
53
+ │ ├── users.yaml
54
+ │ └── profiles.yaml
55
+ └── products/
56
+ └── products.yaml
57
+ ```
58
+
59
+ ## Commands Reference
60
+
61
+ | Command | Description |
62
+ |---------|-------------|
63
+ | `bon init` | Create project structure |
64
+ | `bon datasource add` | Add a data source |
65
+ | `bon datasource add --demo` | Add demo dataset (no warehouse needed) |
66
+ | `bon datasource add --from-dbt` | Import from dbt profiles |
67
+ | `bon datasource list` | List configured sources |
68
+ | `bon validate` | Check cube and view syntax |
69
+ | `bon deploy -m "message"` | Deploy to Bonnard (message required) |
70
+ | `bon deploy --ci` | Non-interactive deploy |
71
+ | `bon deployments` | List deployment history |
72
+ | `bon diff <id>` | View changes in a deployment |
73
+ | `bon annotate <id>` | Add context to deployment changes |
74
+ | `bon query '{...}'` | Query the semantic layer |
75
+ | `bon mcp` | MCP setup instructions for AI agents |
76
+ | `bon docs` | Browse documentation |
77
+
78
+ ## CI/CD ready
79
+
80
+ Deploy from GitHub Actions, GitLab CI, or any pipeline:
81
+
82
+ ```bash
83
+ bon deploy --ci -m "CI deploy"
84
+ ```
85
+
86
+ Non-interactive mode. Datasources are synced automatically. Fails fast if anything is misconfigured.
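+
+ A minimal GitHub Actions job might look like the following sketch (the workflow layout is illustrative, and it assumes Bonnard credentials for CI are already provisioned; `bon login` covers interactive use):
+
+ ```yaml
+ # Hypothetical .github/workflows/deploy.yml
+ name: Deploy semantic layer
+ on:
+   push:
+     branches: [main]
+ jobs:
+   deploy:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+       - uses: actions/setup-node@v4
+         with:
+           node-version: 20
+       - run: npm install -g @bonnard/cli
+       - run: bon validate
+       # Assumes CI authentication is already configured for this environment
+       - run: bon deploy --ci -m "CI deploy ${{ github.sha }}"
+ ```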
87
+
88
+ ## Deployment versioning
89
+
90
+ Every deploy creates a versioned deployment with automatic change detection — added, modified, removed, and breaking changes are flagged. Review history with `bon deployments`, inspect changes with `bon diff`, and add context with `bon annotate`.
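+
+ A typical post-deploy review, using the flags documented in [cli.deploy](cli.deploy):
+
+ ```bash
+ bon deployments                       # list recent deployments
+ bon diff <deployment-id> --breaking   # show breaking changes only
+ bon annotate <deployment-id> --data '{"object": "note on why this changed"}'
+ ```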
91
+
92
+ ## Built-in documentation
93
+
94
+ ```bash
95
+ bon docs cubes.measures # Read modeling docs in your terminal
96
+ bon docs --search "joins" # Search across all topics
97
+ ```
98
+
99
+ No context-switching. Learn and build in the same workflow.
100
+
101
+ ## Best Practices
102
+
103
+ 1. **Start from questions** — collect the most common questions your team asks, then build views that answer them. Don't just mirror your warehouse tables.
104
+ 2. **Add filtered measures** — if a dashboard card has a WHERE clause beyond a date range, that filter should be a filtered measure. This is the #1 way to match real dashboard numbers (see the sketch after this list).
105
+ 3. **Write descriptions for agents** — descriptions are how AI agents choose which view and measure to use. Lead with scope, cross-reference related views, include dimension values.
106
+ 4. **Validate often** — run `bon validate` after each change
107
+ 5. **Test with real questions** — after deploying, ask an AI agent via MCP the same questions your team asks. Check it picks the right view and measure.
108
+ 6. **Iterate** — expect 2-4 rounds of deploying, testing with questions, and improving descriptions before agents reliably answer the top 10 questions.
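+
+ As an example of practice 2, a dashboard card for "completed revenue" could become a filtered measure roughly like the sketch below (Cube-style YAML; it assumes Bonnard supports the same measure-level `filters` syntax, and the field names are illustrative):
+
+ ```yaml
+ measures:
+   - name: completed_revenue
+     type: sum
+     sql: amount
+     # The dashboard's WHERE clause, captured on the measure itself (assumed syntax)
+     filters:
+       - sql: "{CUBE}.status = 'completed'"
+ ```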
109
+
110
+ ## See Also
111
+
112
+ - [cli.deploy](cli.deploy) — Deployment details
113
+ - [cli.validate](cli.validate) — Validation reference
@@ -0,0 +1,125 @@
1
+ # Validate
2
+
3
+ > Run validation checks on your cubes and views before deploying to catch YAML syntax errors, missing references, circular joins, and other issues. Use `bon validate` from the CLI.
4
+
5
+ ## Overview
6
+
7
+ The `bon validate` command checks your YAML cubes and views for syntax errors and schema violations. Run this before deploying to catch issues early.
8
+
9
+ ## Usage
10
+
11
+ ```bash
12
+ bon validate
13
+ ```
14
+
15
+ ## What Gets Validated
16
+
17
+ ### YAML Syntax
18
+
19
+ - Valid YAML format
20
+ - Proper indentation
21
+ - Correct quoting
22
+
23
+ ### Schema Compliance
24
+
25
+ - Required fields present (name, type, sql)
26
+ - Valid field values (known measure types, relationship types)
27
+ - Consistent naming conventions
28
+
29
+ ### Reference Integrity
30
+
31
+ - Referenced cubes exist
32
+ - Referenced members exist
33
+ - Join relationships are valid
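+
+ For reference, a minimal cube that passes all of these checks might look like the sketch below (the table, field names, and `sql_table` key are illustrative; follow the layout of your existing cube files):
+
+ ```yaml
+ cubes:
+   - name: orders
+     sql_table: orders      # assumed table mapping; see the cubes docs for the exact key
+     measures:
+       - name: count
+         type: count        # a known measure type
+         sql: id            # required: name, type, sql
+     dimensions:
+       - name: status
+         type: string
+         sql: status
+ ```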
34
+
35
+ ## Example Output
36
+
37
+ ### Success
38
+
39
+ ```
40
+ ✓ Validating YAML syntax...
41
+ ✓ Checking bonnard/cubes/orders.yaml
42
+ ✓ Checking bonnard/cubes/users.yaml
43
+ ✓ Checking bonnard/views/orders_overview.yaml
44
+
45
+ All cubes and views valid.
46
+ ```
47
+
48
+ ### Errors
49
+
50
+ ```
51
+ ✗ Validating YAML syntax...
52
+
53
+ bonnard/cubes/orders.yaml:15:5
54
+ error: Unknown measure type "counts"
55
+
56
+ Did you mean "count"?
57
+
58
+ 14: measures:
59
+ 15: - name: order_count
60
+ 16: type: counts <-- here
61
+ 17: sql: id
62
+
63
+ 1 error found.
64
+ ```
65
+
66
+ ## Common Errors
67
+
68
+ ### Missing Required Field
69
+
70
+ ```yaml
71
+ # Error: "sql" is required
72
+ measures:
73
+ - name: count
74
+ type: count
75
+ # Missing: sql: id
76
+ ```
77
+
78
+ ### Invalid Type
79
+
80
+ ```yaml
81
+ # Error: Unknown dimension type "text"
82
+ dimensions:
83
+ - name: status
84
+ type: text # Should be: string
85
+ sql: status
86
+ ```
87
+
88
+ ### Reference Not Found
89
+
90
+ ```yaml
91
+ # Error: Cube "user" not found (did you mean "users"?)
92
+ joins:
93
+ - name: user
94
+ relationship: many_to_one
95
+ sql: "{CUBE}.user_id = {user.id}"
96
+ ```
97
+
98
+ ### YAML Syntax
99
+
100
+ ```yaml
101
+ # Error: Bad indentation
102
+ measures:
103
+ - name: count # Should be indented
104
+ type: count
105
+ ```
106
+
107
+ ## Exit Codes
108
+
109
+ | Code | Meaning |
110
+ |------|---------|
111
+ | 0 | All validations passed |
112
+ | 1 | Validation errors found |
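+
+ These exit codes make validation easy to gate on in scripts, for example (the deploy message is illustrative):
+
+ ```bash
+ # Deploy only when validation exits 0
+ if bon validate; then
+   bon deploy --ci -m "Automated deploy"
+ else
+   echo "Validation failed: fix errors before deploying" >&2
+   exit 1
+ fi
+ ```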
113
+
114
+ ## Best Practices
115
+
116
+ 1. **Run before every deploy** — `bon validate && bon deploy`
117
+ 2. **Add to CI/CD** — validate on pull requests
118
+ 3. **Fix errors first** — don't deploy with validation errors
119
+ 4. **Test connections** — connections are tested automatically during `bon deploy`
120
+
121
+ ## See Also
122
+
123
+ - cli
124
+ - cli.deploy
125
+ - syntax
@@ -89,4 +89,4 @@ Cubes from different data sources cannot be directly joined. Use views or pre-ag
89
89
 
90
90
  - cubes
91
91
  - cubes.sql
92
- - workflow.deploy
92
+ - cli.deploy
@@ -12,5 +12,5 @@ Bonnard is a semantic layer platform that sits between your data warehouse and y
12
12
 
13
13
  - Learn about [cubes](/docs/modeling/cubes) — measures, dimensions, joins
14
14
  - Learn about [views](/docs/modeling/views) — curated interfaces for consumers
15
- - Set up [MCP](/docs/workflow/mcp) — connect AI agents to your semantic layer
16
- - Read the full [workflow guide](/docs/workflow) — validate, deploy, query
15
+ - Set up [MCP](/docs/querying/mcp) — connect AI agents to your semantic layer
16
+ - Read the [CLI guide](/docs/cli) — validate, deploy, query
@@ -0,0 +1,83 @@
1
+ # Governance
2
+
3
+ > Control who can see which views, columns, and rows in your semantic layer.
4
+
5
+ Bonnard provides admin-managed data governance — control which views, columns, and rows each group of users can access. Policies are configured in the web UI and enforced automatically across MCP queries and the API. Changes take effect within one minute.
6
+
7
+ ## How It Works
8
+
9
+ ```
10
+ Admin configures in web UI:
11
+ Groups → Views → Field/Row restrictions
12
+
13
+ Enforced automatically:
14
+ MCP queries + API → only see what policies allow
15
+ ```
16
+
17
+ Governance uses **groups** as the unit of access. Each group has a set of **policies** that define which views its members can see, and optionally restrict specific columns or rows within those views.
18
+
19
+ ## Groups
20
+
21
+ Groups represent teams or roles in your organization — "Sales Team", "Finance", "Executive". Create and manage groups from the **Governance** page in the Bonnard dashboard.
22
+
23
+ Each group has:
24
+ - **Name** and optional description
25
+ - **Color** for visual identification
26
+ - **View access** — which views the group can query
27
+ - **Members** — which users belong to the group
28
+
29
+ Users can belong to multiple groups. Their effective access is the **union** of all group policies.
30
+
31
+ ## View-Level Access (Level 1)
32
+
33
+ The simplest control: toggle which views a group can see. Unchecked views are completely invisible to group members — they won't appear in `explore_schema` or be queryable.
34
+
35
+ From the group detail page, check the views you want to grant access to and click **Save changes**. New policies default to "All fields" with no row filters.
36
+
37
+ ## Field-Level Access (Level 2)
38
+
39
+ Fine-tune which measures and dimensions a group can see within a view. Click the gear icon on any granted view to open the fine-tune dialog.
40
+
41
+ Three modes:
42
+ - **All fields** — full access to every measure and dimension (default)
43
+ - **Only these** — whitelist specific fields; everything else is hidden
44
+ - **All except** — blacklist specific fields; everything else is visible
45
+
46
+ Hidden fields are removed from the schema — they don't appear in `explore_schema` and can't be used in queries.
47
+
48
+ ## Row-Level Filters (Level 2)
49
+
50
+ Restrict which rows a group can see. Add row filters in the fine-tune dialog to limit data by dimension values.
51
+
52
+ For example, filter `traffic_source` to `equals B2B, Organic` so the group only sees rows where traffic_source is B2B or Organic. Multiple values in a single filter are OR'd (any match). Multiple separate filters are AND'd (all must match).
53
+
54
+ Row filters are applied server-side on every query — users cannot bypass them.
55
+
56
+ ## Members
57
+
58
+ Assign users to groups from the **Members** tab. Each user shows which groups they belong to and a preview of their effective access (which views they can query, any field or row restrictions).
59
+
60
+ Users without any group assignment see nothing — they must be added to at least one group to query governed views.
61
+
62
+ ## How Policies Are Enforced
63
+
64
+ Policies configured in the web UI are stored in Supabase and injected into the query engine at runtime. When a user queries via MCP or the API:
65
+
66
+ 1. Their JWT is enriched with group memberships
67
+ 2. The query engine loads policies for those groups
68
+ 3. View visibility, field restrictions, and row filters are applied automatically
69
+ 4. The user only sees data their policies allow
70
+
71
+ No YAML changes are needed — governance is fully managed through the dashboard.
72
+
73
+ ## Best Practices
74
+
75
+ 1. **Start with broad access, then restrict** — give groups all views first, then fine-tune as needed
76
+ 2. **Use groups for teams, not individuals** — easier to manage and audit
77
+ 3. **Test with MCP** — after changing policies, query via MCP to verify the restrictions work as expected
78
+ 4. **Review after schema deploys** — new views need to be added to group policies to become visible
79
+
80
+ ## See Also
81
+
82
+ - [querying.mcp](querying.mcp) — How AI agents query your semantic layer
83
+ - [views](views) — Creating curated data views
@@ -0,0 +1,49 @@
1
+ # Overview
2
+
3
+ > Define your metrics once. Query governed data from any AI tool, dashboard, or application.
4
+
5
+ Bonnard is a semantic layer platform. Your data team defines metrics once, and everyone else gets reliable answers from the AI tools they already use — Claude, ChatGPT, Cursor, Copilot — in whatever form they need. No new interface to learn.
6
+
7
+ ## Architecture
8
+
9
+ ```
10
+ Data Warehouse → Cubes (metrics) → Views (interfaces) → Query Surfaces
11
+ ├── MCP (AI agents)
12
+ ├── REST API
13
+ └── SDK (custom apps)
14
+ ```
15
+
16
+ **Cubes** map to your database tables and define measures (revenue, count) and dimensions (status, date). **Views** compose cubes into focused interfaces for specific teams or use cases. Once deployed, your semantic layer is queryable through MCP, REST API, or the TypeScript SDK.
17
+
18
+ ## Multi-warehouse
19
+
20
+ Connect any combination of warehouses through a single semantic layer:
21
+
22
+ - **PostgreSQL** — Direct TCP connection
23
+ - **Redshift** — Cluster or serverless endpoint
24
+ - **Snowflake** — Account-based authentication
25
+ - **BigQuery** — GCP service account
26
+ - **Databricks** — Token-based workspace connection
27
+
28
+ Metrics from different warehouses are queried through the same API. Your consumers never need to know where the data lives.
29
+
30
+ ## One source of truth for every AI
31
+
32
+ Bonnard exposes your semantic layer as a remote MCP server. Add one URL to any MCP-compatible client — Claude, ChatGPT, Cursor, VS Code, Windsurf, Gemini — and it can explore your data model, run queries, and render charts. Every query is governed and scoped to the user's permissions automatically.
33
+
34
+ ## Governed by default
35
+
36
+ Metrics are version-controlled and deployed from the terminal. Access, roles, and row-level security are managed by admins from the dashboard. Every query — whether from an AI agent, the API, or the SDK — is scoped to the user's permissions. No ungoverned access.
37
+
38
+ ## Your data stays where it is
39
+
40
+ Your data stays in your warehouse. Bonnard adds a governed semantic layer on top — hosted, queryable, and managed. Each organization gets isolated query execution. Deploy from your terminal in minutes, not quarters.
41
+
42
+ ## Where to go next
43
+
44
+ - **[Getting Started](/docs/getting-started)** — Install the CLI and build your first semantic layer
45
+ - **[Modeling](/docs/modeling)** — Define cubes, views, and pre-aggregations
46
+ - **[Querying](/docs/querying)** — Query via MCP, REST API, or SDK
47
+ - **[CLI](/docs/cli)** — Commands, deployment, and validation
48
+ - **[Governance](/docs/governance)** — Control access to views, columns, and rows
49
+ - **[Catalog](/docs/catalog)** — Browse your data model in the browser
@@ -0,0 +1,200 @@
1
+ # MCP
2
+
3
+ > Connect AI agents like Claude, ChatGPT, and Cursor to your semantic layer using the Model Context Protocol. One URL, governed access to your metrics and dimensions.
4
+
5
+ Bonnard exposes your semantic layer as a remote MCP server. Add one URL to your agent platform and it can explore your data model, run queries, and render charts — all through the Model Context Protocol.
6
+
7
+ ![MCP chat with visualisations](/images/mcp-chat-demo.gif)
8
+
9
+ ## MCP URL
10
+
11
+ ```
12
+ https://mcp.bonnard.dev/mcp
13
+ ```
14
+
15
+ On first use, your browser opens to sign in — the agent receives a 30-day token automatically. No API keys, no config files, no secrets to rotate.
16
+
17
+ ## Connect your agent
18
+
19
+ Bonnard works with any MCP-compatible client:
20
+
21
+ - **Claude Desktop** — Add as a custom connector
22
+ - **ChatGPT** — Add via Settings > Apps (Pro/Plus)
23
+ - **Cursor** — Add via Settings > MCP or `.cursor/mcp.json`
24
+ - **Microsoft Copilot Studio** — Add as an MCP tool with OAuth 2.0 authentication
25
+ - **VS Code / GitHub Copilot** — Add via Command Palette or `.vscode/mcp.json`
26
+ - **Claude Code** — Add via `claude mcp add` or `.mcp.json`
27
+ - **Windsurf** — Add via MCP config
28
+ - **Gemini CLI** — Add via `.gemini/settings.json`
29
+
30
+ ## Setup
31
+
32
+ ### Claude Desktop
33
+
34
+ 1. Click the **+** button in the chat input, then select **Connectors > Manage connectors**
35
+
36
+ ![Claude Desktop — Connectors menu in chat](/images/claude-chat-connectors.png)
37
+
38
+ 2. Click **Add custom connector**
39
+ 3. Enter a name (e.g. "Bonnard MCP") and the MCP URL: `https://mcp.bonnard.dev/mcp`
40
+ 4. Click **Add**
41
+
42
+ ![Claude Desktop — Add custom connector dialog](/images/claude-add-connector.png)
43
+
44
+ Once added, enable the Bonnard connector in any chat via the **Connectors** menu.
45
+
46
+ Remote MCP servers in Claude Desktop must be added through the Connectors UI, not the JSON config file.
47
+
48
+ ### ChatGPT
49
+
50
+ Custom MCP servers must be added in the browser at [chatgpt.com](https://chatgpt.com) — the desktop app does not support this.
51
+
52
+ 1. Go to [chatgpt.com](https://chatgpt.com) in your browser
53
+ 2. Open **Settings > Apps**
54
+
55
+ ![ChatGPT — Settings > Apps](/images/chatgpt-apps.png)
56
+
57
+ 3. Click **Advanced settings**, enable **Developer mode**, then click **Create app**
58
+
59
+ ![ChatGPT — Advanced settings with Developer mode and Create app](/images/advanced-create-app-chatgpt.png)
60
+
61
+ 4. Enter a name (e.g. "Bonnard MCP"), the MCP URL `https://mcp.bonnard.dev/mcp`, and select **OAuth** for authentication
62
+ 5. Check the acknowledgement box and click **Create**
63
+
64
+ ![ChatGPT — Create new app with MCP URL](/images/chatgpt-new-app.png)
65
+
66
+ Once created, the Bonnard connector appears under **Enabled apps**:
67
+
68
+ ![ChatGPT — Bonnard MCP available in chat](/images/chatgpt-chat-apps.png)
69
+
70
+ Available on Pro and Plus plans.
71
+
72
+ ### Cursor
73
+
74
+ Open **Settings > MCP** and add the server URL, or add to `.cursor/mcp.json` in your project:
75
+
76
+ ```json
77
+ {
78
+ "mcpServers": {
79
+ "bonnard": {
80
+ "url": "https://mcp.bonnard.dev/mcp"
81
+ }
82
+ }
83
+ }
84
+ ```
85
+
86
+ On first use, your browser will open to sign in and authorize the connection.
87
+
88
+ ### VS Code / GitHub Copilot
89
+
90
+ Open the Command Palette and run **MCP: Add Server**, or add to `.vscode/mcp.json` in your project:
91
+
92
+ ```json
93
+ {
94
+ "servers": {
95
+ "bonnard": {
96
+ "type": "http",
97
+ "url": "https://mcp.bonnard.dev/mcp"
98
+ }
99
+ }
100
+ }
101
+ ```
102
+
103
+ On first use, your browser will open to sign in and authorize the connection.
104
+
105
+ ### Claude Code
106
+
107
+ Run in your terminal:
108
+
109
+ ```bash
110
+ claude mcp add --transport http bonnard https://mcp.bonnard.dev/mcp
111
+ ```
112
+
113
+ Or add to `.mcp.json` in your project:
114
+
115
+ ```json
116
+ {
117
+ "mcpServers": {
118
+ "bonnard": {
119
+ "type": "http",
120
+ "url": "https://mcp.bonnard.dev/mcp"
121
+ }
122
+ }
123
+ }
124
+ ```
125
+
126
+ ### Windsurf
127
+
128
+ Open **Settings > Plugins > Manage plugins > View raw config**, or edit `~/.codeium/windsurf/mcp_config.json`:
129
+
130
+ ```json
131
+ {
132
+ "mcpServers": {
133
+ "bonnard": {
134
+ "serverUrl": "https://mcp.bonnard.dev/mcp"
135
+ }
136
+ }
137
+ }
138
+ ```
139
+
140
+ ### Gemini CLI
141
+
142
+ Add to `.gemini/settings.json` in your project or `~/.gemini/settings.json` globally:
143
+
144
+ ```json
145
+ {
146
+ "mcpServers": {
147
+ "bonnard": {
148
+ "url": "https://mcp.bonnard.dev/mcp"
149
+ }
150
+ }
151
+ }
152
+ ```
153
+
154
+ ## Authentication
155
+
156
+ MCP uses OAuth 2.0 with PKCE. When an agent first connects:
157
+
158
+ 1. Agent discovers OAuth endpoints automatically
159
+ 2. You are redirected to Bonnard to sign in and authorize
160
+ 3. Agent receives an access token (valid for 30 days)
161
+
162
+ No API keys or manual token management needed.
163
+
164
+ ## Available Tools
165
+
166
+ Once connected, AI agents can use these MCP tools:
167
+
168
+ | Tool | Description |
169
+ |------|-------------|
170
+ | `explore_schema` | Discover views and cubes, list their measures, dimensions, and segments. Supports browsing a specific source by name or searching across all fields by keyword. |
171
+ | `query` | Query the semantic layer with measures, dimensions, time dimensions, filters, segments, and pagination. |
172
+ | `sql_query` | Execute raw SQL against the semantic layer using Cube SQL syntax with `MEASURE()` for aggregations. Use for CTEs, UNIONs, and custom calculations (see the example below this table). |
173
+ | `describe_field` | Get detailed metadata for a field — SQL expression, type, format, origin cube, and referenced fields. |
174
+ | `visualize` | Render line, bar, pie, and KPI charts directly inside the conversation. |
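+
+ For instance, the `sql_query` tool accepts CTE-style queries along these lines (a sketch; cube and measure names are illustrative):
+
+ ```sql
+ -- Monthly revenue via a CTE, using MEASURE() for the aggregation
+ WITH monthly AS (
+   SELECT DATE_TRUNC('month', created_at) AS month,
+          MEASURE(total_revenue) AS revenue
+   FROM orders
+   GROUP BY 1
+ )
+ SELECT month, revenue
+ FROM monthly
+ ORDER BY month
+ ```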
175
+
176
+ ## Charts in chat
177
+
178
+ The `visualize` tool renders interactive charts inline — auto-detected from your query shape. Charts support dark mode, currency and percentage formatting, and multi-series data.
179
+
180
+ Ask "show me revenue by region this quarter" and get a formatted chart in your conversation, not a data dump.
181
+
182
+ ## Testing
183
+
184
+ ```bash
185
+ # Verify the MCP server is reachable
186
+ bon mcp test
187
+
188
+ # View connection info
189
+ bon mcp
190
+ ```
191
+
192
+ ## Managing Connections
193
+
194
+ Active MCP connections can be viewed and revoked in the Bonnard dashboard under **MCP**.
195
+
196
+ ## See Also
197
+
198
+ - [querying.rest-api](querying.rest-api) — Query format reference
199
+ - [querying.sdk](querying.sdk) — TypeScript SDK for custom apps
200
+ - [cli.deploy](cli.deploy) — Deploy before querying
@@ -0,0 +1,11 @@
1
+ # Querying
2
+
3
+ > Once deployed, query your semantic layer three ways: MCP for AI agents, REST API for structured queries, and SDK for custom applications.
4
+
5
+ Once your semantic layer is deployed, it's queryable through three interfaces:
6
+
7
+ - **[MCP](querying.mcp)** — Connect AI agents like Claude, ChatGPT, and Cursor to explore and query your data model through the Model Context Protocol
8
+ - **[REST API](querying.rest-api)** — Send JSON query objects or SQL strings for structured, programmatic access
9
+ - **[SDK](querying.sdk)** — Build custom dashboards, embedded analytics, and data pipelines with the TypeScript client
10
+
11
+ All three interfaces query the same governed semantic layer — same metrics, same access controls, same results.
@@ -0,0 +1,198 @@
1
+ # REST API
2
+
3
+ > Query your deployed semantic layer using the Bonnard REST API. Send JSON query objects or SQL strings to retrieve measures and dimensions with filtering, grouping, and time ranges.
4
+
5
+ ## Overview
6
+
7
+ After deploying with `bon deploy`, you can query the semantic layer using `bon query`. This tests that your cubes and views work correctly and returns data from your warehouse through Bonnard.
8
+
9
+ ## Query Formats
10
+
11
+ Bonnard supports two query formats:
12
+
13
+ ### JSON Format (Default)
14
+
15
+ The JSON format uses the REST API structure:
16
+
17
+ ```bash
18
+ bon query '{"measures": ["orders.count"]}'
19
+
20
+ bon query '{
21
+ "measures": ["orders.total_revenue"],
22
+ "dimensions": ["orders.status"],
23
+ "filters": [{
24
+ "member": "orders.created_at",
25
+ "operator": "inDateRange",
26
+ "values": ["2024-01-01", "2024-12-31"]
27
+ }]
28
+ }'
29
+ ```
30
+
31
+ **JSON Query Properties:**
32
+
33
+ | Property | Description |
34
+ |----------|-------------|
35
+ | `measures` | Array of measures to calculate (e.g., `["orders.count"]`) |
36
+ | `dimensions` | Array of dimensions to group by (e.g., `["orders.status"]`) |
37
+ | `filters` | Array of filter objects |
38
+ | `timeDimensions` | Time-based grouping with granularity |
39
+ | `segments` | Named filters defined in cubes |
40
+ | `limit` | Max rows to return |
41
+ | `offset` | Skip rows (pagination) |
42
+ | `order` | Sort specification |
43
+
44
+ **Filter Operators:**
45
+
46
+ | Operator | Use Case |
47
+ |----------|----------|
48
+ | `equals`, `notEquals` | Exact match |
49
+ | `contains`, `notContains` | String contains |
50
+ | `startsWith`, `endsWith` | String prefix/suffix |
51
+ | `gt`, `gte`, `lt`, `lte` | Numeric comparison |
52
+ | `inDateRange`, `beforeDate`, `afterDate` | Time filtering |
53
+ | `set`, `notSet` | NULL checks |
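+
+ For example, combining a string operator with a NULL check (member names are illustrative; `set` is shown without `values`, following the Cube convention assumed here):
+
+ ```bash
+ bon query '{
+   "measures": ["orders.count"],
+   "filters": [
+     {"member": "orders.city", "operator": "contains", "values": ["San"]},
+     {"member": "orders.shipped_at", "operator": "set"}
+   ]
+ }'
+ ```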
54
+
55
+ ### SQL Format
56
+
57
+ The SQL format uses the SQL API, where cubes are tables:
58
+
59
+ ```bash
60
+ bon query --sql "SELECT status, MEASURE(count) FROM orders GROUP BY 1"
61
+
62
+ bon query --sql "SELECT
63
+ city,
64
+ MEASURE(total_revenue),
65
+ MEASURE(avg_order_value)
66
+ FROM orders
67
+ WHERE status = 'completed'
68
+ GROUP BY 1
69
+ ORDER BY 2 DESC
70
+ LIMIT 10"
71
+ ```
72
+
73
+ **SQL Syntax Rules:**
74
+
75
+ 1. **Cubes/views are tables** — `FROM orders` references the `orders` cube
76
+ 2. **Dimensions are columns** — Include in `SELECT` and `GROUP BY`
77
+ 3. **Measures use `MEASURE()`** — Or matching aggregates (`SUM`, `COUNT`, etc.)
78
+ 4. **Segments are boolean** — Filter with `WHERE is_completed IS TRUE`
79
+
80
+ **Examples:**
81
+
82
+ ```sql
83
+ -- Count orders by status
84
+ SELECT status, MEASURE(count) FROM orders GROUP BY 1
85
+
86
+ -- Revenue by city with filter
87
+ SELECT city, SUM(amount) FROM orders WHERE status = 'shipped' GROUP BY 1
88
+
89
+ -- Using time dimension with granularity
90
+ SELECT DATE_TRUNC('month', created_at), MEASURE(total_revenue)
91
+ FROM orders
92
+ GROUP BY 1
93
+ ORDER BY 1
94
+ ```
95
+
96
+ ## CLI Usage
97
+
98
+ ```bash
99
+ # JSON format (default)
100
+ bon query '{"measures": ["orders.count"]}'
101
+
102
+ # SQL format
103
+ bon query --sql "SELECT MEASURE(count) FROM orders"
104
+
105
+ # Limit rows
106
+ bon query '{"measures": ["orders.count"], "dimensions": ["orders.city"]}' --limit 10
107
+
108
+ # JSON output (instead of table)
109
+ bon query '{"measures": ["orders.count"]}' --format json
110
+ ```
111
+
112
+ ## Output Formats
113
+
114
+ ### Table Format (Default)
115
+
116
+ ```
117
+ ┌─────────┬──────────────┐
118
+ │ status  │ orders.count │
119
+ ├─────────┼──────────────┤
120
+ │ pending │ 42           │
121
+ │ shipped │ 156          │
122
+ │ done    │ 892          │
123
+ └─────────┴──────────────┘
124
+ ```
125
+
126
+ ### JSON Format
127
+
128
+ ```bash
129
+ bon query '{"measures": ["orders.count"], "dimensions": ["orders.status"]}' --format json
130
+ ```
131
+
132
+ ```json
133
+ [
134
+ { "orders.status": "pending", "orders.count": 42 },
135
+ { "orders.status": "shipped", "orders.count": 156 },
136
+ { "orders.status": "done", "orders.count": 892 }
137
+ ]
138
+ ```
139
+
140
+ ## Common Patterns
141
+
142
+ ### Time Series Analysis
143
+
144
+ ```bash
145
+ bon query '{
146
+ "measures": ["orders.total_revenue"],
147
+ "timeDimensions": [{
148
+ "dimension": "orders.created_at",
149
+ "granularity": "month",
150
+ "dateRange": ["2024-01-01", "2024-12-31"]
151
+ }]
152
+ }'
153
+ ```
154
+
155
+ ### Filtering by Dimension
156
+
157
+ ```bash
158
+ bon query '{
159
+ "measures": ["orders.count"],
160
+ "dimensions": ["orders.city"],
161
+ "filters": [{
162
+ "member": "orders.status",
163
+ "operator": "equals",
164
+ "values": ["completed"]
165
+ }]
166
+ }'
167
+ ```
168
+
169
+ ### Multiple Measures
170
+
171
+ ```bash
172
+ bon query '{
173
+ "measures": ["orders.count", "orders.total_revenue", "orders.avg_order_value"],
174
+ "dimensions": ["orders.category"]
175
+ }'
176
+ ```
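+
+ ### Sorting and Pagination
+
+ The `order`, `limit`, and `offset` properties combine for sorted, paginated results. A sketch (the `order` object mapping members to `asc`/`desc` follows the Cube REST convention, which is assumed here):
+
+ ```bash
+ bon query '{
+   "measures": ["orders.total_revenue"],
+   "dimensions": ["orders.city"],
+   "order": {"orders.total_revenue": "desc"},
+   "limit": 20,
+   "offset": 20
+ }'
+ ```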
177
+
178
+ ## Error Handling
179
+
180
+ ### Common Errors
181
+
182
+ **"Projection references non-aggregate values"**
183
+ - All dimensions must be in `GROUP BY`
184
+ - All measures must use `MEASURE()` or matching aggregate
185
+
186
+ **"Cube not found"**
187
+ - Check cube name matches deployed cube
188
+ - Run `bon deploy` if cubes changed
189
+
190
+ **"Not logged in"**
191
+ - Run `bon login` first
192
+
193
+ ## See Also
194
+
195
+ - [cli.deploy](cli.deploy) - Deploy before querying
196
+ - [cubes.measures](cubes.measures) - Define measures
197
+ - [cubes.dimensions](cubes.dimensions) - Define dimensions
198
+ - [views](views) - Create focused query interfaces
@@ -0,0 +1,53 @@
1
+ # SDK
2
+
3
+ > Build custom data apps on top of your semantic layer.
4
+
5
+ The Bonnard SDK (`@bonnard/sdk`) is a lightweight TypeScript client for querying your deployed semantic layer programmatically. Build dashboards, embedded analytics, internal tools, or data pipelines — all backed by your governed metrics.
6
+
7
+ ## Quick start
8
+
9
+ ```bash
10
+ npm install @bonnard/sdk
11
+ ```
12
+
13
+ ```typescript
14
+ import { createClient } from '@bonnard/sdk';
15
+
16
+ const bonnard = createClient({
17
+ apiKey: 'your-api-key',
18
+ });
19
+
20
+ const result = await bonnard.query({
21
+ cube: 'orders',
22
+ measures: ['revenue', 'count'],
23
+ dimensions: ['status'],
24
+ timeDimension: {
25
+ dimension: 'created_at',
26
+ granularity: 'month',
27
+ dateRange: ['2025-01-01', '2025-12-31'],
28
+ },
29
+ });
30
+ ```
31
+
32
+ ## Type-safe queries
33
+
34
+ Full TypeScript support with inference. Measures, dimensions, filters, time dimensions, and sort orders are all typed. Query results include field annotations with titles and types.
35
+
36
+ ```typescript
37
+ const result = await bonnard.sql<OrderRow>(
38
+ `SELECT status, MEASURE(revenue) FROM orders GROUP BY 1`
39
+ );
40
+ // result.data is OrderRow[]
41
+ ```
42
+
43
+ ## What you can build
44
+
45
+ - **Custom dashboards** — Query your semantic layer from Next.js, React, or any frontend
46
+ - **Embedded analytics** — Add governed metrics to your product
47
+ - **Data pipelines** — Consume semantic layer data in ETL workflows
48
+ - **Internal tools** — Build admin panels backed by consistent metrics
49
+
50
+ ## See Also
51
+
52
+ - [Overview](/docs/overview) — Platform overview
53
+ - [querying.rest-api](querying.rest-api) — Query format reference
@@ -0,0 +1,18 @@
1
+ # Slack & Teams (Coming Soon)
2
+
3
+ > AI agents in your team chat. Coming soon.
4
+
5
+ Bonnard will bring semantic layer queries directly into Slack and Microsoft Teams — so anyone on your team can ask questions about data without leaving the conversation.
6
+
7
+ ## Planned capabilities
8
+
9
+ - **Natural language queries** — Ask "what was revenue last month?" in a channel and get an answer with a chart
10
+ - **Governed responses** — Every answer goes through your semantic layer, so metrics are always consistent
11
+ - **Shared context** — Results posted in channels are visible to the whole team, not siloed in individual AI chats
12
+ - **Proactive alerts** — Get notified when key metrics change beyond thresholds you define
13
+
14
+ ## Why it matters
15
+
16
+ Your team already lives in Slack and Teams. Instead of asking analysts or switching to a BI tool, anyone can get instant, governed answers right where they work. The same semantic layer that powers your AI agents and dashboards powers your team chat.
17
+
18
+ Interested in early access? Reach out at [hello@bonnard.dev](mailto:hello@bonnard.dev).
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@bonnard/cli",
3
- "version": "0.2.5",
3
+ "version": "0.2.6",
4
4
  "type": "module",
5
5
  "bin": {
6
6
  "bon": "./dist/bin/bon.mjs"
@@ -9,7 +9,7 @@
9
9
  "dist"
10
10
  ],
11
11
  "scripts": {
12
- "build": "tsdown src/bin/bon.ts --format esm --out-dir dist/bin && cp -r src/templates dist/ && mkdir -p dist/docs/topics dist/docs/schemas && cp ../content/index.md dist/docs/_index.md && cp ../content/getting-started.md dist/docs/topics/ && cp ../content/modeling/*.md dist/docs/topics/",
12
+ "build": "tsdown src/bin/bon.ts --format esm --out-dir dist/bin && cp -r src/templates dist/ && mkdir -p dist/docs/topics dist/docs/schemas && cp ../content/index.md dist/docs/_index.md && cp ../content/overview.md ../content/getting-started.md dist/docs/topics/ && cp ../content/modeling/*.md dist/docs/topics/",
13
13
  "dev": "tsdown src/bin/bon.ts --format esm --out-dir dist/bin --watch",
14
14
  "test": "vitest run"
15
15
  },