@bonnard/cli 0.2.4 → 0.2.6

Files changed (29)
  1. package/README.md +85 -0
  2. package/dist/bin/bon.mjs +15 -2
  3. package/dist/bin/{validate-BdqZBH2n.mjs → validate-Bc8zGNw7.mjs} +75 -3
  4. package/dist/docs/_index.md +17 -6
  5. package/dist/docs/topics/catalog.md +36 -0
  6. package/dist/docs/topics/cli.deploy.md +193 -0
  7. package/dist/docs/topics/cli.md +113 -0
  8. package/dist/docs/topics/cli.validate.md +125 -0
  9. package/dist/docs/topics/cubes.data-source.md +1 -1
  10. package/dist/docs/topics/features.governance.md +58 -59
  11. package/dist/docs/topics/features.semantic-layer.md +6 -0
  12. package/dist/docs/topics/getting-started.md +2 -2
  13. package/dist/docs/topics/governance.md +83 -0
  14. package/dist/docs/topics/overview.md +49 -0
  15. package/dist/docs/topics/querying.mcp.md +200 -0
  16. package/dist/docs/topics/querying.md +11 -0
  17. package/dist/docs/topics/querying.rest-api.md +198 -0
  18. package/dist/docs/topics/querying.sdk.md +53 -0
  19. package/dist/docs/topics/slack-teams.md +18 -0
  20. package/dist/docs/topics/views.md +17 -9
  21. package/dist/docs/topics/workflow.md +6 -5
  22. package/dist/templates/claude/skills/bonnard-design-guide/SKILL.md +233 -0
  23. package/dist/templates/claude/skills/bonnard-get-started/SKILL.md +49 -15
  24. package/dist/templates/claude/skills/bonnard-metabase-migrate/SKILL.md +28 -9
  25. package/dist/templates/cursor/rules/bonnard-design-guide.mdc +232 -0
  26. package/dist/templates/cursor/rules/bonnard-get-started.mdc +49 -15
  27. package/dist/templates/cursor/rules/bonnard-metabase-migrate.mdc +28 -9
  28. package/dist/templates/shared/bonnard.md +28 -11
  29. package/package.json +2 -2
package/README.md ADDED
@@ -0,0 +1,85 @@
1
+ # @bonnard/cli
2
+
3
+ The Bonnard CLI (`bon`) takes you from zero to a deployed semantic layer in minutes. Define metrics in YAML, validate locally, deploy, and query — from your terminal or AI coding agent.
4
+
5
+ ## Install
6
+
7
+ ```bash
8
+ npm install -g @bonnard/cli
9
+ ```
10
+
11
+ Requires Node.js 20+.
12
+
13
+ ## Quick start
14
+
15
+ ```bash
16
+ bon init # Create project structure + agent templates
17
+ bon datasource add --demo # Add demo dataset (no warehouse needed)
18
+ bon validate # Check syntax
19
+ bon login # Authenticate with Bonnard
20
+ bon deploy -m "Initial deploy" # Deploy to Bonnard
21
+ ```
22
+
23
+ ## Commands
24
+
25
+ | Command | Description |
26
+ |---------|-------------|
27
+ | `bon init` | Create project structure and AI agent templates |
28
+ | `bon login` | Authenticate with Bonnard |
29
+ | `bon logout` | Remove stored credentials |
30
+ | `bon whoami` | Show current login status |
31
+ | `bon datasource add` | Add a data source (interactive) |
32
+ | `bon datasource add --demo` | Add read-only demo dataset |
33
+ | `bon datasource add --from-dbt` | Import from dbt profiles |
34
+ | `bon datasource list` | List configured data sources |
35
+ | `bon datasource remove <name>` | Remove a data source |
36
+ | `bon validate` | Validate cube and view YAML |
37
+ | `bon deploy -m "message"` | Deploy to Bonnard |
38
+ | `bon deployments` | List deployment history |
39
+ | `bon diff <id>` | View changes in a deployment |
40
+ | `bon annotate <id>` | Add context to deployment changes |
41
+ | `bon query '{"measures":["orders.count"]}'` | Query the semantic layer (JSON) |
42
+ | `bon query "SELECT ..." --sql` | Query the semantic layer (SQL) |
43
+ | `bon mcp` | MCP setup instructions for AI agents |
44
+ | `bon mcp test` | Test MCP server connectivity |
45
+ | `bon docs [topic]` | Browse modeling documentation |
46
+ | `bon docs --search "joins"` | Search documentation |
47
+
48
+ ## Agent-ready from the start
49
+
50
+ `bon init` generates context files for your AI coding tools:
51
+
52
+ - **Claude Code** — `.claude/rules/` + get-started skill
53
+ - **Cursor** — `.cursor/rules/` with auto-apply frontmatter
54
+ - **Codex** — `AGENTS.md` + skills folder
55
+
56
+ Your agent understands Bonnard's modeling language from the first prompt.
57
+
58
+ ## Project structure
59
+
60
+ After `bon init`:
61
+
62
+ ```
63
+ my-project/
64
+ ├── bon.yaml # Project configuration
65
+ ├── bonnard/
66
+ │ ├── cubes/ # Cube definitions (measures, dimensions, joins)
67
+ │ └── views/ # View definitions (curated query interfaces)
68
+ └── .bon/ # Local config (gitignored)
69
+ └── datasources.yaml # Data source credentials
70
+ ```
71
+
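
To illustrate what goes in `bonnard/cubes/`, a minimal cube file might look like the sketch below. It follows the Cube-style YAML described in the Modeling Guide linked at the bottom of this README; the `public.orders` table and its columns are assumptions, not part of this package.

```yaml
# Illustrative sketch of bonnard/cubes/orders.yaml — table and column names are assumptions
cubes:
  - name: orders
    sql_table: public.orders      # source table in your warehouse (assumed)
    data_source: default          # which configured data source to query

    measures:
      - name: count
        type: count
      - name: total_revenue
        type: sum
        sql: amount               # assumed column

    dimensions:
      - name: id
        sql: id
        type: number
        primary_key: true         # a unique key, not a time column
      - name: status
        sql: status
        type: string
      - name: created_at
        sql: created_at
        type: time
```

Views under `bonnard/views/` then expose a curated subset of these members to consumers.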
72
+ ## CI/CD
73
+
74
+ ```bash
75
+ bon deploy --ci -m "CI deploy"
76
+ ```
77
+
78
+ Non-interactive mode for pipelines. Datasources are synced automatically.
79
+
80
+ ## Documentation
81
+
82
+ - [Getting Started](https://docs.bonnard.dev/docs/getting-started)
83
+ - [CLI Reference](https://docs.bonnard.dev/docs/cli)
84
+ - [Modeling Guide](https://docs.bonnard.dev/docs/modeling/cubes)
85
+ - [Querying](https://docs.bonnard.dev/docs/querying)
package/dist/bin/bon.mjs CHANGED
@@ -567,21 +567,26 @@ function createAgentTemplates(cwd, env) {
567
567
  fs.mkdirSync(claudeRulesDir, { recursive: true });
568
568
  fs.mkdirSync(path.join(claudeSkillsDir, "bonnard-get-started"), { recursive: true });
569
569
  fs.mkdirSync(path.join(claudeSkillsDir, "bonnard-metabase-migrate"), { recursive: true });
570
+ fs.mkdirSync(path.join(claudeSkillsDir, "bonnard-design-guide"), { recursive: true });
570
571
  writeTemplateFile(sharedBonnard, path.join(claudeRulesDir, "bonnard.md"), createdFiles);
571
572
  writeTemplateFile(loadTemplate("claude/skills/bonnard-get-started/SKILL.md"), path.join(claudeSkillsDir, "bonnard-get-started", "SKILL.md"), createdFiles);
572
573
  writeTemplateFile(loadTemplate("claude/skills/bonnard-metabase-migrate/SKILL.md"), path.join(claudeSkillsDir, "bonnard-metabase-migrate", "SKILL.md"), createdFiles);
574
+ writeTemplateFile(loadTemplate("claude/skills/bonnard-design-guide/SKILL.md"), path.join(claudeSkillsDir, "bonnard-design-guide", "SKILL.md"), createdFiles);
573
575
  mergeSettingsJson(loadJsonTemplate("claude/settings.json"), path.join(cwd, ".claude", "settings.json"), createdFiles);
574
576
  const cursorRulesDir = path.join(cwd, ".cursor", "rules");
575
577
  fs.mkdirSync(cursorRulesDir, { recursive: true });
576
578
  writeTemplateFile(withCursorFrontmatter(sharedBonnard, "Bonnard semantic layer project context", true), path.join(cursorRulesDir, "bonnard.mdc"), createdFiles);
577
579
  writeTemplateFile(loadTemplate("cursor/rules/bonnard-get-started.mdc"), path.join(cursorRulesDir, "bonnard-get-started.mdc"), createdFiles);
578
580
  writeTemplateFile(loadTemplate("cursor/rules/bonnard-metabase-migrate.mdc"), path.join(cursorRulesDir, "bonnard-metabase-migrate.mdc"), createdFiles);
581
+ writeTemplateFile(loadTemplate("cursor/rules/bonnard-design-guide.mdc"), path.join(cursorRulesDir, "bonnard-design-guide.mdc"), createdFiles);
579
582
  const codexSkillsDir = path.join(cwd, ".agents", "skills");
580
583
  fs.mkdirSync(path.join(codexSkillsDir, "bonnard-get-started"), { recursive: true });
581
584
  fs.mkdirSync(path.join(codexSkillsDir, "bonnard-metabase-migrate"), { recursive: true });
585
+ fs.mkdirSync(path.join(codexSkillsDir, "bonnard-design-guide"), { recursive: true });
582
586
  writeTemplateFile(sharedBonnard, path.join(cwd, "AGENTS.md"), createdFiles);
583
587
  writeTemplateFile(loadTemplate("claude/skills/bonnard-get-started/SKILL.md"), path.join(codexSkillsDir, "bonnard-get-started", "SKILL.md"), createdFiles);
584
588
  writeTemplateFile(loadTemplate("claude/skills/bonnard-metabase-migrate/SKILL.md"), path.join(codexSkillsDir, "bonnard-metabase-migrate", "SKILL.md"), createdFiles);
589
+ writeTemplateFile(loadTemplate("claude/skills/bonnard-design-guide/SKILL.md"), path.join(codexSkillsDir, "bonnard-design-guide", "SKILL.md"), createdFiles);
585
590
  return createdFiles;
586
591
  }
587
592
  async function initCommand() {
@@ -1704,7 +1709,7 @@ async function validateCommand() {
1704
1709
  console.log(pc.red("No bon.yaml found. Are you in a Bonnard project?"));
1705
1710
  process.exit(1);
1706
1711
  }
1707
- const { validate } = await import("./validate-BdqZBH2n.mjs");
1712
+ const { validate } = await import("./validate-Bc8zGNw7.mjs");
1708
1713
  const result = await validate(cwd);
1709
1714
  if (result.cubes.length === 0 && result.views.length === 0 && result.valid) {
1710
1715
  console.log(pc.yellow(`No cube or view files found in ${BONNARD_DIR}/cubes/ or ${BONNARD_DIR}/views/.`));
@@ -1739,6 +1744,14 @@ async function validateCommand() {
1739
1744
  console.log(pc.dim(" This can cause issues when multiple warehouses are configured."));
1740
1745
  console.log(pc.dim(` ${result.cubesMissingDataSource.join(", ")}`));
1741
1746
  }
1747
+ if (result.suspectPrimaryKeys.length > 0) {
1748
+ console.log();
1749
+ console.log(pc.yellow(`⚠ ${result.suspectPrimaryKeys.length} primary key(s) on time dimensions`));
1750
+ console.log(pc.dim(" Time dimensions are rarely unique. Non-unique primary keys cause dimension"));
1751
+ console.log(pc.dim(" queries to silently return empty results. Use a unique column or add a"));
1752
+ console.log(pc.dim(" ROW_NUMBER() synthetic key via the cube's sql property."));
1753
+ for (const s of result.suspectPrimaryKeys) console.log(pc.dim(` ${s.cube}.${s.dimension} (type: ${s.type})`));
1754
+ }
1742
1755
  }
1743
1756
 
1744
1757
  //#endregion
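
The synthetic-key fix this warning suggests might look roughly like the following cube sketch (illustrative only; the `events` table, its columns, and the ORDER BY choice are assumptions):

```yaml
cubes:
  - name: events
    # No single column is unique, so derive a synthetic key in the cube's sql
    sql: >
      SELECT *, ROW_NUMBER() OVER (ORDER BY occurred_at) AS row_id
      FROM public.events

    dimensions:
      - name: row_id
        sql: row_id
        type: number
        primary_key: true        # unique synthetic key
      - name: occurred_at
        sql: occurred_at
        type: time               # keep the time dimension, but not as the primary key
```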
@@ -1770,7 +1783,7 @@ async function deployCommand(options = {}) {
1770
1783
  process.exit(1);
1771
1784
  }
1772
1785
  console.log(pc.dim("Validating cubes and views..."));
1773
- const { validate } = await import("./validate-BdqZBH2n.mjs");
1786
+ const { validate } = await import("./validate-Bc8zGNw7.mjs");
1774
1787
  const result = await validate(cwd);
1775
1788
  if (!result.valid) {
1776
1789
  console.log(pc.red("Validation failed:\n"));
package/dist/bin/{validate-BdqZBH2n.mjs → validate-Bc8zGNw7.mjs} CHANGED
@@ -218,11 +218,53 @@ function formatZodError(error, fileName, parsed) {
218
218
  return `${fileName}: ${location ? `${location} — ` : ""}${issue.message}`;
219
219
  });
220
220
  }
221
+ function checkViewMemberConflicts(parsedFiles, cubeMap) {
222
+ const errors = [];
223
+ for (const { fileName, parsed } of parsedFiles) for (const view of parsed.views ?? []) {
224
+ if (!view.name || !view.cubes) continue;
225
+ const seen = /* @__PURE__ */ new Map();
226
+ for (const m of view.measures ?? []) if (m.name) seen.set(m.name, `${view.name} (direct)`);
227
+ for (const d of view.dimensions ?? []) if (d.name) seen.set(d.name, `${view.name} (direct)`);
228
+ for (const s of view.segments ?? []) if (s.name) seen.set(s.name, `${view.name} (direct)`);
229
+ for (const cubeRef of view.cubes) {
230
+ const joinPath = cubeRef.join_path;
231
+ if (!joinPath) continue;
232
+ const segments = joinPath.split(".");
233
+ const targetCubeName = segments[segments.length - 1];
234
+ let memberNames = [];
235
+ if (cubeRef.includes === "*") {
236
+ const cube = cubeMap.get(targetCubeName);
237
+ if (!cube) continue;
238
+ memberNames = [
239
+ ...cube.measures,
240
+ ...cube.dimensions,
241
+ ...cube.segments
242
+ ];
243
+ } else if (Array.isArray(cubeRef.includes)) {
244
+ for (const item of cubeRef.includes) if (typeof item === "string") memberNames.push(item);
245
+ else if (item && typeof item === "object" && item.name) memberNames.push(item.alias || item.name);
246
+ } else continue;
247
+ if (Array.isArray(cubeRef.excludes)) {
248
+ const excludeSet = new Set(cubeRef.excludes);
249
+ memberNames = memberNames.filter((n) => !excludeSet.has(n));
250
+ }
251
+ for (const rawName of memberNames) {
252
+ const finalName = cubeRef.prefix ? `${targetCubeName}_${rawName}` : rawName;
253
+ const existingSource = seen.get(finalName);
254
+ if (existingSource) errors.push(`${fileName}: view '${view.name}' — member '${finalName}' from '${joinPath}' conflicts with '${existingSource}'. Use prefix: true or an alias.`);
255
+ else seen.set(finalName, joinPath);
256
+ }
257
+ }
258
+ }
259
+ return errors;
260
+ }
221
261
  function validateFiles(files) {
222
262
  const errors = [];
223
263
  const cubes = [];
224
264
  const views = [];
225
265
  const allNames = /* @__PURE__ */ new Map();
266
+ const parsedFiles = [];
267
+ const cubeMap = /* @__PURE__ */ new Map();
226
268
  for (const file of files) {
227
269
  let parsed;
228
270
  try {
@@ -240,12 +282,21 @@ function validateFiles(files) {
240
282
  errors.push(...formatZodError(result.error, file.fileName, parsed));
241
283
  continue;
242
284
  }
285
+ parsedFiles.push({
286
+ fileName: file.fileName,
287
+ parsed
288
+ });
243
289
  for (const cube of parsed.cubes ?? []) if (cube.name) {
244
290
  const existing = allNames.get(cube.name);
245
291
  if (existing) errors.push(`${file.fileName}: duplicate name '${cube.name}' (also defined in ${existing})`);
246
292
  else {
247
293
  allNames.set(cube.name, file.fileName);
248
294
  cubes.push(cube.name);
295
+ cubeMap.set(cube.name, {
296
+ measures: (cube.measures ?? []).map((m) => m.name).filter(Boolean),
297
+ dimensions: (cube.dimensions ?? []).map((d) => d.name).filter(Boolean),
298
+ segments: (cube.segments ?? []).map((s) => s.name).filter(Boolean)
299
+ });
249
300
  }
250
301
  }
251
302
  for (const view of parsed.views ?? []) if (view.name) {
@@ -257,6 +308,7 @@ function validateFiles(files) {
257
308
  }
258
309
  }
259
310
  }
311
+ if (errors.length === 0) errors.push(...checkViewMemberConflicts(parsedFiles, cubeMap));
260
312
  return {
261
313
  errors,
262
314
  cubes,
@@ -320,6 +372,22 @@ function checkMissingDescriptions(files) {
320
372
  } catch {}
321
373
  return missing;
322
374
  }
375
+ function checkSuspectPrimaryKeys(files) {
376
+ const suspects = [];
377
+ for (const file of files) try {
378
+ const parsed = YAML.parse(file.content);
379
+ if (!parsed) continue;
380
+ for (const cube of parsed.cubes || []) {
381
+ if (!cube.name) continue;
382
+ for (const dim of cube.dimensions || []) if (dim.primary_key && dim.type === "time") suspects.push({
383
+ cube: cube.name,
384
+ dimension: dim.name,
385
+ type: dim.type
386
+ });
387
+ }
388
+ } catch {}
389
+ return suspects;
390
+ }
323
391
  function checkMissingDataSource(files) {
324
392
  const missing = [];
325
393
  for (const file of files) try {
@@ -338,7 +406,8 @@ async function validate(projectPath) {
338
406
  cubes: [],
339
407
  views: [],
340
408
  missingDescriptions: [],
341
- cubesMissingDataSource: []
409
+ cubesMissingDataSource: [],
410
+ suspectPrimaryKeys: []
342
411
  };
343
412
  const result = validateFiles(files);
344
413
  if (result.errors.length > 0) return {
@@ -347,17 +416,20 @@ async function validate(projectPath) {
347
416
  cubes: [],
348
417
  views: [],
349
418
  missingDescriptions: [],
350
- cubesMissingDataSource: []
419
+ cubesMissingDataSource: [],
420
+ suspectPrimaryKeys: []
351
421
  };
352
422
  const missingDescriptions = checkMissingDescriptions(files);
353
423
  const cubesMissingDataSource = checkMissingDataSource(files);
424
+ const suspectPrimaryKeys = checkSuspectPrimaryKeys(files);
354
425
  return {
355
426
  valid: true,
356
427
  errors: [],
357
428
  cubes: result.cubes,
358
429
  views: result.views,
359
430
  missingDescriptions,
360
- cubesMissingDataSource
431
+ cubesMissingDataSource,
432
+ suspectPrimaryKeys
361
433
  };
362
434
  }
363
435
 
package/dist/docs/_index.md CHANGED
@@ -52,13 +52,24 @@
52
52
  - [syntax.references](syntax.references) - Reference columns, members, and cubes
53
53
  - [syntax.context-variables](syntax.context-variables) - CUBE, FILTER_PARAMS, COMPILE_CONTEXT
54
54
 
55
- ## Workflow
55
+ ## Querying
56
56
 
57
- - [workflow](workflow) - End-to-end development workflow
58
- - [workflow.validate](workflow.validate) - Validate cubes and views locally
59
- - [workflow.deploy](workflow.deploy) - Deploy to Bonnard
60
- - [workflow.query](workflow.query) - Query the deployed semantic layer
61
- - [workflow.mcp](workflow.mcp) - Connect AI agents via MCP
57
+ - [querying](querying) - Query your deployed semantic layer
58
+ - [querying.mcp](querying.mcp) - Connect AI agents via MCP
59
+ - [querying.rest-api](querying.rest-api) - REST API and SQL query reference
60
+ - [querying.sdk](querying.sdk) - TypeScript SDK for custom apps
61
+
62
+ ## CLI
63
+
64
+ - [cli](cli) - CLI commands and development workflow
65
+ - [cli.deploy](cli.deploy) - Deploy to Bonnard
66
+ - [cli.validate](cli.validate) - Validate cubes and views locally
67
+
68
+ ## Other
69
+
70
+ - [governance](governance) - User and group-level permissions
71
+ - [catalog](catalog) - Browse your data model in the browser
72
+ - [slack-teams](slack-teams) - AI agents in team chat (coming soon)
62
73
 
63
74
  ## Quick Reference
64
75
 
package/dist/docs/topics/catalog.md ADDED
@@ -0,0 +1,36 @@
1
+ # Catalog
2
+
3
+ > Browse and understand your data model — no code required.
4
+
5
+ The Bonnard catalog gives everyone on your team a live view of your semantic layer. Browse cubes, views, measures, and dimensions from the browser. Understand what data is available before writing a single query.
6
+
7
+ ## What you can explore
8
+
9
+ - **Cubes and Views** — See every deployed source with field counts at a glance
10
+ - **Measures** — Aggregation type, SQL expression, format (currency, percentage), and description
11
+ - **Dimensions** — Data type, time granularity options, and custom metadata
12
+ - **Segments** — Pre-defined filters available for queries
13
+
14
+ ## Field-level detail
15
+
16
+ Click any field to see exactly how it's calculated:
17
+
18
+ - **SQL expression** — The underlying query logic
19
+ - **Type and format** — How the field is aggregated and displayed
20
+ - **Origin cube** — Which cube a view field traces back to
21
+ - **Referenced fields** — Dependencies this field relies on
22
+ - **Custom metadata** — Tags, labels, and annotations set by your data team
23
+
24
+ ## Built for business users
25
+
26
+ The catalog is designed for anyone who needs to understand the data, not just engineers. No YAML, no terminal, no warehouse credentials. Browse the schema, read descriptions, and know exactly what to ask your AI agent for.
27
+
28
+ ## Coming soon
29
+
30
+ - **Relationship visualization** — An interactive visual map showing how cubes connect through joins and shared dimensions
31
+ - **Impact analysis** — Understand which views and measures are affected when you change a cube, before you deploy
32
+
33
+ ## See Also
34
+
35
+ - [views](views) — How to create curated views for your team
36
+ - [cubes.public](cubes.public) — Control which cubes are visible
package/dist/docs/topics/cli.deploy.md ADDED
@@ -0,0 +1,193 @@
1
+ # Deploy
2
+
3
+ > Deploy your cubes and views to the Bonnard platform using the CLI. Once deployed, your semantic layer is queryable via the REST API, MCP for AI agents, and connected BI tools.
4
+
5
+ ## Overview
6
+
7
+ The `bon deploy` command uploads your cubes and views to Bonnard, making them available for querying via the API. It validates and tests connections before deploying, and creates a versioned deployment with change detection.
8
+
9
+ ## Usage
10
+
11
+ ```bash
12
+ bon deploy -m "description of changes"
13
+ ```
14
+
15
+ A `-m` message is **required** — it describes what changed in this deployment.
16
+
17
+ ### Flags
18
+
19
+ | Flag | Description |
20
+ |------|-------------|
21
+ | `-m "message"` | **Required.** Deployment description |
22
+ | `--ci` | Non-interactive mode |
23
+
24
+ Datasources are always synced automatically during deploy.
25
+
26
+ ### CI/CD
27
+
28
+ For automated pipelines, use `--ci` for non-interactive mode:
29
+
30
+ ```bash
31
+ bon deploy --ci -m "CI deploy"
32
+ ```
33
+
34
+ ## Prerequisites
35
+
36
+ 1. **Logged in** — run `bon login` first
37
+ 2. **Valid cubes and views** — must pass `bon validate`
38
+ 3. **Working connections** — data sources must be accessible
39
+
40
+ ## What Happens
41
+
42
+ 1. **Validates** — checks cubes and views for errors
43
+ 2. **Tests connections** — verifies data source access
44
+ 3. **Uploads** — sends cubes and views to Bonnard
45
+ 4. **Detects changes** — compares against the previous deployment
46
+ 5. **Activates** — makes cubes and views available for queries
47
+
48
+ ## Example Output
49
+
50
+ ```
51
+ bon deploy -m "Add revenue metrics"
52
+
53
+ ✓ Validating...
54
+ ✓ bonnard/cubes/orders.yaml
55
+ ✓ bonnard/cubes/users.yaml
56
+ ✓ bonnard/views/orders_overview.yaml
57
+
58
+ ✓ Testing connections...
59
+ ✓ datasource "default" connected
60
+
61
+ ✓ Deploying to Bonnard...
62
+ Uploading 2 cubes, 1 view...
63
+
64
+ ✓ Deploy complete!
65
+
66
+ Changes:
67
+ + orders.total_revenue (measure)
68
+ + orders.avg_order_value (measure)
69
+ ~ orders.count (measure) — type changed
70
+
71
+ ⚠ 1 breaking change detected
72
+ ```
73
+
74
+ ## Change Detection
75
+
76
+ Every deployment is versioned. Bonnard automatically detects:
77
+
78
+ - **Added** — new cubes, views, measures, dimensions
79
+ - **Modified** — changes to type, SQL, format, description
80
+ - **Removed** — deleted fields (flagged as breaking)
81
+ - **Breaking changes** — removed measures/dimensions, type changes
82
+
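
For example, the `~ orders.count (measure) — type changed` line in the output above corresponds to an edit like the following in the cube YAML (illustrative sketch; the new type and `sql` column are assumptions):

```yaml
# Before
measures:
  - name: count
    type: count

# After — flagged as a breaking change because the measure's type changed
measures:
  - name: count
    type: count_distinct
    sql: id            # assumed column for the distinct count
```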
83
+ ## Reviewing Deployments
84
+
85
+ After deploying, use these commands to review history and changes:
86
+
87
+ ### List deployments
88
+
89
+ ```bash
90
+ bon deployments # Recent deployments
91
+ bon deployments --all # Full history
92
+ ```
93
+
94
+ ### View changes in a deployment
95
+
96
+ ```bash
97
+ bon diff <deployment-id> # All changes
98
+ bon diff <deployment-id> --breaking # Breaking changes only
99
+ ```
100
+
101
+ ### Annotate changes
102
+
103
+ Add reasoning or context to deployment changes:
104
+
105
+ ```bash
106
+ bon annotate <deployment-id> --data '{"object": "note about why this changed"}'
107
+ ```
108
+
109
+ Annotations are visible in the schema catalog and help teammates understand why changes were made.
110
+
111
+ ## Deploy Flow
112
+
113
+ ```
114
+ bon deploy -m "message"
115
+
116
+ ├── 1. bon validate (must pass)
117
+
118
+ ├── 2. Test all datasource connections (must succeed)
119
+
120
+ ├── 3. Upload to Bonnard API
121
+ │ - cubes from bonnard/cubes/
122
+ │ - views from bonnard/views/
123
+ │ - datasource configs
124
+
125
+ ├── 4. Detect changes vs. previous deployment
126
+
127
+ └── 5. Activate deployment
128
+ ```
129
+
130
+ ## Error Handling
131
+
132
+ ### Validation Errors
133
+
134
+ ```
135
+ ✗ Validating...
136
+
137
+ bonnard/cubes/orders.yaml:15:5
138
+ error: Unknown measure type "counts"
139
+
140
+ Deploy aborted. Fix validation errors first.
141
+ ```
142
+
143
+ ### Connection Errors
144
+
145
+ ```
146
+ ✗ Testing connections...
147
+ ✗ datasource "analytics": Connection refused
148
+
149
+ Deploy aborted. Fix connection issues:
150
+ - Check credentials in .bon/datasources.yaml
151
+ - Verify network access to database
152
+ - Run: bon datasource add (to reconfigure)
153
+ ```
154
+
155
+ ### Auth Errors
156
+
157
+ ```
158
+ ✗ Not logged in.
159
+
160
+ Run: bon login
161
+ ```
162
+
163
+ ## What Gets Deployed
164
+
165
+ | Source | Deployed |
166
+ |--------|----------|
167
+ | `bonnard/cubes/*.yaml` | All cube definitions |
168
+ | `bonnard/views/*.yaml` | All view definitions |
169
+ | `.bon/datasources.yaml` | Connection configs (credentials encrypted) |
170
+ | `bon.yaml` | Project settings |
171
+
172
+ ## Deployment Behavior
173
+
174
+ - **Replaces** previous deployment (not additive)
175
+ - **All or nothing** — partial deploys don't happen
176
+ - **Instant** — changes take effect immediately
177
+ - **Versioned** — every deployment is tracked with changes
178
+
179
+ ## Best Practices
180
+
181
+ 1. **Always include a meaningful message** — helps teammates understand what changed
182
+ 2. **Validate first** — run `bon validate` before deploy
183
+ 3. **Test locally** — verify queries work before deploying
184
+ 4. **Use version control** — commit cubes and views before deploying
185
+ 5. **Review after deploy** — use `bon diff` to check for unintended breaking changes
186
+ 6. **Annotate breaking changes** — add context so consumers know what to update
187
+
188
+ ## See Also
189
+
190
+ - cli
191
+ - cli.validate
192
+ - cubes
193
+ - views
package/dist/docs/topics/cli.md ADDED
@@ -0,0 +1,113 @@
1
+ # CLI
2
+
3
+ > Built for agent-first development by data engineers.
4
+
5
+ The Bonnard CLI (`bon`) takes you from zero to a deployed semantic layer in minutes. Initialize a project, connect your warehouse, define metrics in YAML, validate locally, and deploy — all from your terminal or your AI coding agent.
6
+
7
+ ## Agent-ready from the start
8
+
9
+ `bon init` generates context files for your AI coding tools automatically:
10
+
11
+ - **Claude Code** — `.claude/rules/` + get-started skill
12
+ - **Cursor** — `.cursor/rules/` with auto-apply frontmatter
13
+ - **Codex** — `AGENTS.md` + skills folder
14
+
15
+ Your agent understands Bonnard's modeling language from the first prompt.
16
+
17
+ ## Project Structure
18
+
19
+ After `bon init`, your project has:
20
+
21
+ ```
22
+ my-project/
23
+ ├── bon.yaml # Project configuration
24
+ ├── bonnard/ # Semantic layer definitions
25
+ │ ├── cubes/ # Cube definitions
26
+ │ │ └── orders.yaml
27
+ │ └── views/ # View definitions
28
+ │ └── orders_overview.yaml
29
+ └── .bon/ # Local config (gitignored)
30
+ └── datasources.yaml # Data source credentials
31
+ ```
32
+
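
As an illustration of what `bonnard/views/orders_overview.yaml` might contain, here is a rough view sketch; the member names and the `orders.users` join path are assumptions, not part of this package:

```yaml
# Illustrative sketch only — member names and join paths are assumptions
views:
  - name: orders_overview
    description: Curated interface over orders for dashboards and AI agents
    cubes:
      - join_path: orders
        includes:
          - count
          - total_revenue
          - status
          - created_at
      - join_path: orders.users
        includes: "*"
        prefix: true        # exposed as users_<member> to avoid name conflicts
```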
33
+ ## File Organization
34
+
35
+ ### One Cube Per File
36
+
37
+ ```
38
+ bonnard/cubes/
39
+ ├── orders.yaml
40
+ ├── users.yaml
41
+ ├── products.yaml
42
+ └── line_items.yaml
43
+ ```
44
+
45
+ ### Related Cubes Together
46
+
47
+ ```
48
+ bonnard/cubes/
49
+ ├── sales/
50
+ │ ├── orders.yaml
51
+ │ └── line_items.yaml
52
+ ├── users/
53
+ │ ├── users.yaml
54
+ │ └── profiles.yaml
55
+ └── products/
56
+ └── products.yaml
57
+ ```
58
+
59
+ ## Commands Reference
60
+
61
+ | Command | Description |
62
+ |---------|-------------|
63
+ | `bon init` | Create project structure |
64
+ | `bon datasource add` | Add a data source |
65
+ | `bon datasource add --demo` | Add demo dataset (no warehouse needed) |
66
+ | `bon datasource add --from-dbt` | Import from dbt profiles |
67
+ | `bon datasource list` | List configured sources |
68
+ | `bon validate` | Check cube and view syntax |
69
+ | `bon deploy -m "message"` | Deploy to Bonnard (message required) |
70
+ | `bon deploy --ci` | Non-interactive deploy |
71
+ | `bon deployments` | List deployment history |
72
+ | `bon diff <id>` | View changes in a deployment |
73
+ | `bon annotate <id>` | Add context to deployment changes |
74
+ | `bon query '{...}'` | Query the semantic layer |
75
+ | `bon mcp` | MCP setup instructions for AI agents |
76
+ | `bon docs` | Browse documentation |
77
+
78
+ ## CI/CD ready
79
+
80
+ Deploy from GitHub Actions, GitLab CI, or any pipeline:
81
+
82
+ ```bash
83
+ bon deploy --ci -m "CI deploy"
84
+ ```
85
+
86
+ Non-interactive mode. Datasources are synced automatically. Fails fast if anything is misconfigured.
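
A minimal GitHub Actions sketch, assuming the runner installs the CLI from npm; how deploy credentials are supplied in CI is not specified here, so treat the authentication step as a placeholder for your own setup:

```yaml
# .github/workflows/deploy.yml — illustrative sketch, not an official workflow
name: Deploy semantic layer
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20              # the CLI requires Node.js 20+
      - run: npm install -g @bonnard/cli
      # Provide Bonnard credentials here per your own setup (see `bon login`).
      - run: bon validate
      - run: bon deploy --ci -m "CI deploy ${{ github.sha }}"
```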
87
+
88
+ ## Deployment versioning
89
+
90
+ Every deploy creates a versioned deployment with automatic change detection — added, modified, removed, and breaking changes are flagged. Review history with `bon deployments`, inspect changes with `bon diff`, and add context with `bon annotate`.
91
+
92
+ ## Built-in documentation
93
+
94
+ ```bash
95
+ bon docs cubes.measures # Read modeling docs in your terminal
96
+ bon docs --search "joins" # Search across all topics
97
+ ```
98
+
99
+ No context-switching. Learn and build in the same workflow.
100
+
101
+ ## Best Practices
102
+
103
+ 1. **Start from questions** — collect the most common questions your team asks, then build views that answer them. Don't just mirror your warehouse tables.
104
+ 2. **Add filtered measures** — if a dashboard card has a WHERE clause beyond a date range, that filter should be a filtered measure. This is the #1 way to match real dashboard numbers (see the sketch after this list).
105
+ 3. **Write descriptions for agents** — descriptions are how AI agents choose which view and measure to use. Lead with scope, cross-reference related views, include dimension values.
106
+ 4. **Validate often** — run `bon validate` after each change
107
+ 5. **Test with real questions** — after deploying, ask an AI agent via MCP the same questions your team asks. Check it picks the right view and measure.
108
+ 6. **Iterate** — expect 2-4 rounds of deploying, testing with questions, and improving descriptions before agents reliably answer the top 10 questions.
109
+
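
Regarding filtered measures (item 2 above), a sketch in the Cube-style YAML these docs use might look like this; the `amount` column and `'completed'` status value are assumptions:

```yaml
measures:
  - name: completed_revenue
    description: Revenue from completed orders only — mirrors the dashboard card's WHERE clause
    type: sum
    sql: amount
    filters:
      - sql: "{CUBE}.status = 'completed'"
```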
110
+ ## See Also
111
+
112
+ - [cli.deploy](cli.deploy) — Deployment details
113
+ - [cli.validate](cli.validate) — Validation reference