@mastra/clickhouse 1.6.0-alpha.0 → 1.7.0-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,85 @@
  # @mastra/clickhouse
 
+ ## 1.7.0-alpha.0
+
+ ### Minor Changes
+
+ - Improved metric drilldown performance with skip indexes on the high-cardinality ID columns of `metric_events`. Dashboard queries that filter metrics by `traceId`, `threadId`, `resourceId`, `userId`, `organizationId`, `experimentId`, `runId`, `sessionId`, or `requestId` skip data chunks that don't contain the filtered value instead of scanning the full time range. ([#16138](https://github.com/mastra-ai/mastra/pull/16138))
+
+   Equality (`=`) and `IN` filters benefit automatically. Aggregations and `GROUP BY` queries without a filter on these columns are unaffected.
+
+   **Migration**
+
+   Existing deployments pick up the indexes on next start. The migration is metadata-only and instant — no table lock, no rewrite, no downtime. Insert overhead is negligible and index storage is well under 1% of table size. Existing data is indexed lazily as parts merge under normal retention; no operator action is required.
+
+ - Added `count_distinct` aggregation and server-side TopK to the metrics storage API so dashboards built on high-cardinality fields (like `threadId` or `resourceId`) stay fast and bounded. ([#16137](https://github.com/mastra-ai/mastra/pull/16137))
+
+   **New aggregation**
+
+   `getMetricAggregate`, `getMetricBreakdown`, and `getMetricTimeSeries` accept `aggregation: 'count_distinct'` with a `distinctColumn`. Backends pick the most efficient native implementation — `uniq` on ClickHouse, `approx_count_distinct` on DuckDB.
+
+   `distinctColumn` is restricted to a low/medium-cardinality categorical allowlist (`entityType`, `entityName`, `parentEntityType`, `parentEntityName`, `rootEntityType`, `rootEntityName`, `name`, `provider`, `model`, `environment`, `executionSource`, `serviceName`). ID columns are not allowed — distinct counts over near-unique values converge to the row count and are rarely useful.
+
+   ```ts
+   await store.getMetricAggregate({
+     name: ['mastra_llm_tokens_total'],
+     aggregation: 'count_distinct',
+     distinctColumn: 'model',
+     filters: { timestamp: { start, end } },
+   });
+   ```
+
+   **Server-side TopK**
+
+   `getMetricBreakdown` accepts `limit` and `orderDirection`, so breakdowns never return the full cardinality of a column from the database. Ordering is always by the aggregated `value`; `orderDirection` flips between top-N (`DESC`, default) and bottom-N (`ASC`).
+
+   ```ts
+   await store.getMetricBreakdown({
+     name: ['mastra_agent_duration_ms'],
+     aggregation: 'sum',
+     groupBy: ['threadId'],
+     limit: 20,
+     orderDirection: 'DESC',
+   });
+   ```
+
+ - Added `listBranches` and `getSpans` implementations. ([#16154](https://github.com/mastra-ai/mastra/pull/16154))
+   - Only spans recorded after this version is deployed are queryable via `listBranches`; historical traces remain accessible through the existing `listTraces` / `getTrace` APIs.
+
+ ### Patch Changes
+
+ - Added direct score lookup support to observability storage so score records can be fetched by `scoreId` without scanning paginated score lists, including DuckDB and ClickHouse vNext observability stores. ([#16162](https://github.com/mastra-ai/mastra/pull/16162))
+
+ - Updated dependencies [[`86c0298`](https://github.com/mastra-ai/mastra/commit/86c0298e647306423c842f9d5ac827bd616bd13d), [`7fce309`](https://github.com/mastra-ai/mastra/commit/7fce30912b14170bfc41f0ac736cca0f39fe0cd4), [`7997c2e`](https://github.com/mastra-ai/mastra/commit/7997c2e55ddd121562a4098cd8d2b89c68433bf1), [`e97ccb9`](https://github.com/mastra-ai/mastra/commit/e97ccb900f8b7a390ce82c9f8eb8d6eb2c5e3777), [`c5daf48`](https://github.com/mastra-ai/mastra/commit/c5daf48556e98c46ae06caf00f92c249912007e9), [`cd96779`](https://github.com/mastra-ai/mastra/commit/cd9677937f113b2856dc8b9f3d4bdabcee58bb2e)]:
+   - @mastra/core@1.32.0-alpha.2
+
+ ## 1.6.0
+
+ ### Minor Changes
+
+ - Added ClickhouseStoreVNext, a ClickHouse storage adapter that uses the vNext observability domain by default. Equivalent to constructing a ClickhouseStore and overriding the observability domain manually, but exposed as a single class for new projects. ([#15984](https://github.com/mastra-ai/mastra/pull/15984))
+
+   ```typescript
+   import { Mastra } from '@mastra/core';
+   import { ClickhouseStoreVNext } from '@mastra/clickhouse';
+
+   export const mastra = new Mastra({
+     storage: new ClickhouseStoreVNext({
+       id: 'clickhouse-storage',
+       url: process.env.CLICKHOUSE_URL!,
+       username: process.env.CLICKHOUSE_USERNAME!,
+       password: process.env.CLICKHOUSE_PASSWORD!,
+     }),
+   });
+   ```
+
+   ClickhouseStoreVNext accepts the same configuration as ClickhouseStore and reuses the same ClickHouse client across every domain. ClickhouseStore continues to work for projects on the legacy observability schema.
+
+ ### Patch Changes
+
+ - Updated dependencies [[`1723e09`](https://github.com/mastra-ai/mastra/commit/1723e099829892419ddbfe49287acfeac2522724), [`629f9e9`](https://github.com/mastra-ai/mastra/commit/629f9e9a7e56aa8f129515a3923c5813298790c7), [`25168fb`](https://github.com/mastra-ai/mastra/commit/25168fb9c1de9db7f8171df4f58ceb842c53aa29), [`ab34b5a`](https://github.com/mastra-ai/mastra/commit/ab34b5a2191b8e4353df1dbf7b9155e7d6628d79), [`5fb6c2a`](https://github.com/mastra-ai/mastra/commit/5fb6c2a95c1843cc231704b91354311fc1f34a71), [`2b0f355`](https://github.com/mastra-ai/mastra/commit/2b0f3553be3e9e5524da539a66e5cf82668440a4), [`394f0cf`](https://github.com/mastra-ai/mastra/commit/394f0cfc31e6b4d801219fdef2e9cc69e5bc8682), [`b2deb29`](https://github.com/mastra-ai/mastra/commit/b2deb29412b300c868655b5840463614fbb7962d), [`66644be`](https://github.com/mastra-ai/mastra/commit/66644beac1aa560f0e417956ff007c89341dc382), [`e109607`](https://github.com/mastra-ai/mastra/commit/e10960749251e34d46b480a20648c490fd30381b), [`310b953`](https://github.com/mastra-ai/mastra/commit/310b95345f302dcd5ba3ed862bdc96f059d44122), [`3d7f709`](https://github.com/mastra-ai/mastra/commit/3d7f709b615e588050bb6283c4ee5cfe2978cbde), [`48a42f1`](https://github.com/mastra-ai/mastra/commit/48a42f114a4006a95e0b7a1b5ad1a24815a175c2), [`8091c7c`](https://github.com/mastra-ai/mastra/commit/8091c7c944d15e13fef6d61b6cfd903f158d4006), [`2c83efc`](https://github.com/mastra-ai/mastra/commit/2c83efc4482b3efe50830e3b8b4ba9a8d219edff), [`43f0e1d`](https://github.com/mastra-ai/mastra/commit/43f0e1d5d5a74ba6fc746f2ad89ebe0c64777a7d), [`da0b9e2`](https://github.com/mastra-ai/mastra/commit/da0b9e2ba7ecc560213b426d6c097fe63946086e), [`282a10c`](https://github.com/mastra-ai/mastra/commit/282a10c9446e9922afe80e10e3770481c8ac8a28), [`04151c7`](https://github.com/mastra-ai/mastra/commit/04151c7dcea934b4fe9076708a23fac161195414), [`8091c7c`](https://github.com/mastra-ai/mastra/commit/8091c7c944d15e13fef6d61b6cfd903f158d4006)]:
+   - @mastra/core@1.31.0
+
  ## 1.6.0-alpha.0
 
  ### Minor Changes
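The migration note above says existing parts gain skip-index coverage only lazily, as they merge. ClickHouse also supports an explicit backfill via `ALTER TABLE ... MATERIALIZE INDEX`. A minimal sketch that generates those backfill statements for the nine indexed columns (the `buildMaterializeIndexSql` helper is hypothetical, not part of the package):

```typescript
// The nine metric_events ID columns that received bloom-filter skip indexes
// in 1.7.0-alpha.0, per the changelog entry above.
const INDEXED_ID_COLUMNS = [
  'traceId', 'threadId', 'resourceId', 'userId', 'organizationId',
  'experimentId', 'runId', 'sessionId', 'requestId',
];

// Hypothetical helper: emits one MATERIALIZE INDEX statement per skip index.
// Running these against ClickHouse backfills the indexes on existing parts;
// skipping this is fine, since merges converge to the same state under
// normal retention.
function buildMaterializeIndexSql(table: string = 'mastra_metric_events'): string[] {
  return INDEXED_ID_COLUMNS.map(col => `ALTER TABLE ${table} MATERIALIZE INDEX idx_${col}`);
}
```

Each generated statement is an ordinary ClickHouse mutation; it reads only the indexed column, so the cost scales with that column's size rather than the full table width.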
@@ -3,7 +3,7 @@ name: mastra-clickhouse
  description: Documentation for @mastra/clickhouse. Use when working with @mastra/clickhouse APIs, configuration, or implementation.
  metadata:
    package: "@mastra/clickhouse"
-   version: "1.6.0-alpha.0"
+   version: "1.7.0-alpha.0"
  ---
 
  ## When to use
@@ -1,5 +1,5 @@
  {
-   "version": "1.6.0-alpha.0",
+   "version": "1.7.0-alpha.0",
    "package": "@mastra/clickhouse",
    "exports": {},
    "modules": {}
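The `count_distinct` entry above restricts `distinctColumn` to a categorical allowlist, and the package enforces it server-side. A client-side pre-check can fail faster with the same message shape. A minimal sketch, assuming the allowlist from the changelog (the `assertDistinctColumn` helper is hypothetical, not exported by the package):

```typescript
// Categorical columns the changelog permits for `distinctColumn`. ID columns
// are deliberately absent: distinct counts over near-unique values converge
// to the row count.
const METRIC_DISTINCT_COLUMNS = [
  'entityType', 'entityName', 'parentEntityType', 'parentEntityName',
  'rootEntityType', 'rootEntityName', 'name', 'provider', 'model',
  'environment', 'executionSource', 'serviceName',
];

// Throws for missing or disallowed columns, mirroring the server-side checks.
function assertDistinctColumn(col?: string): string {
  if (!col) {
    throw new Error("count_distinct aggregation requires a 'distinctColumn' argument");
  }
  if (!METRIC_DISTINCT_COLUMNS.includes(col)) {
    throw new Error(`Invalid distinctColumn: ${col}`);
  }
  return col;
}
```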
package/dist/index.cjs CHANGED
@@ -40,7 +40,11 @@ var TABLE_ENGINES = {
    [storage.TABLE_SKILLS]: `ReplacingMergeTree()`,
    [storage.TABLE_SKILL_VERSIONS]: `MergeTree()`,
    [storage.TABLE_SKILL_BLOBS]: `ReplacingMergeTree()`,
-   mastra_background_tasks: `ReplacingMergeTree()`
+   mastra_background_tasks: `ReplacingMergeTree()`,
+   [storage.TABLE_SCHEDULES]: `ReplacingMergeTree()`,
+   [storage.TABLE_SCHEDULE_TRIGGERS]: `MergeTree()`,
+   mastra_channel_installations: `ReplacingMergeTree()`,
+   mastra_channel_config: `ReplacingMergeTree()`
  };
  var COLUMN_TYPES = {
    text: "String",
@@ -2830,6 +2834,7 @@ time for large tables. Please ensure you have a backup before proceeding.
  // src/storage/domains/observability/v-next/ddl.ts
  var TABLE_SPAN_EVENTS = "mastra_span_events";
  var TABLE_TRACE_ROOTS = "mastra_trace_roots";
+ var TABLE_TRACE_BRANCHES = "mastra_trace_branches";
  var TABLE_METRIC_EVENTS = "mastra_metric_events";
  var TABLE_LOG_EVENTS = "mastra_log_events";
  var TABLE_SCORE_EVENTS = "mastra_score_events";
@@ -2837,8 +2842,18 @@ var TABLE_FEEDBACK_EVENTS = "mastra_feedback_events";
  var TABLE_DISCOVERY_VALUES = "mastra_discovery_values";
  var TABLE_DISCOVERY_PAIRS = "mastra_discovery_pairs";
  var MV_TRACE_ROOTS = "mastra_mv_trace_roots";
+ var MV_TRACE_BRANCHES = "mastra_mv_trace_branches";
  var MV_DISCOVERY_VALUES = "mastra_mv_discovery_values";
  var MV_DISCOVERY_PAIRS = "mastra_mv_discovery_pairs";
+ var BRANCH_SPAN_TYPE_VALUES = [
+   "agent_run",
+   "workflow_run",
+   "processor_run",
+   "scorer_run",
+   "rag_ingestion",
+   "tool_call",
+   "mcp_tool_call"
+ ];
  var SPAN_EVENTS_DDL = `
  CREATE TABLE IF NOT EXISTS ${TABLE_SPAN_EVENTS} (
    -- Identity
@@ -2979,6 +2994,80 @@ SELECT *
  FROM ${TABLE_SPAN_EVENTS}
  WHERE parentSpanId IS NULL
  `;
+ var TRACE_BRANCHES_DDL = `
+ CREATE TABLE IF NOT EXISTS ${TABLE_TRACE_BRANCHES} (
+   -- Identity
+   dedupeKey String,
+
+   -- IDs
+   traceId String,
+   spanId String,
+   parentSpanId Nullable(String),
+   experimentId Nullable(String),
+
+   -- Entity
+   entityType LowCardinality(Nullable(String)),
+   entityId Nullable(String),
+   entityName Nullable(String),
+   entityVersionId Nullable(String),
+
+   -- Parent entity
+   parentEntityVersionId Nullable(String),
+   parentEntityType LowCardinality(Nullable(String)),
+   parentEntityId Nullable(String),
+   parentEntityName Nullable(String),
+
+   -- Root entity
+   rootEntityVersionId Nullable(String),
+   rootEntityType LowCardinality(Nullable(String)),
+   rootEntityId Nullable(String),
+   rootEntityName Nullable(String),
+
+   -- Context
+   userId Nullable(String),
+   organizationId Nullable(String),
+   resourceId Nullable(String),
+   runId Nullable(String),
+   sessionId Nullable(String),
+   threadId Nullable(String),
+   requestId Nullable(String),
+   environment LowCardinality(Nullable(String)),
+   executionSource LowCardinality(Nullable(String)),
+   serviceName LowCardinality(Nullable(String)),
+
+   -- Span scalars
+   name String,
+   spanType LowCardinality(String),
+   isEvent Bool DEFAULT false,
+   startedAt DateTime64(3, 'UTC'),
+   endedAt DateTime64(3, 'UTC'),
+
+   -- Query-relevant flexible fields
+   tags Array(LowCardinality(String)) DEFAULT [],
+   metadataSearch Map(LowCardinality(String), String) DEFAULT map(),
+
+   -- Information-only JSON payloads
+   attributes Nullable(String),
+   scope Nullable(String),
+   links Nullable(String),
+   input Nullable(String),
+   output Nullable(String),
+   error Nullable(String),
+   metadataRaw Nullable(String),
+   requestContext Nullable(String)
+ )
+ ENGINE = ReplacingMergeTree
+ PARTITION BY toDate(endedAt)
+ ORDER BY (spanType, startedAt, traceId, dedupeKey)
+ `;
+ var TRACE_BRANCHES_MV_DDL = `
+ CREATE MATERIALIZED VIEW IF NOT EXISTS ${MV_TRACE_BRANCHES}
+ TO ${TABLE_TRACE_BRANCHES}
+ AS
+ SELECT *
+ FROM ${TABLE_SPAN_EVENTS}
+ WHERE spanType IN (${BRANCH_SPAN_TYPE_VALUES.map((v) => `'${v}'`).join(", ")})
+ `;
  var METRIC_EVENTS_DDL = `
  CREATE TABLE IF NOT EXISTS ${TABLE_METRIC_EVENTS} (
    -- Timestamp
@@ -3031,7 +3120,22 @@ CREATE TABLE IF NOT EXISTS ${TABLE_METRIC_EVENTS} (
    -- Information-only JSON payloads
    costMetadata Nullable(String),
    metadata Nullable(String),
-   scope Nullable(String)
+   scope Nullable(String),
+
+   -- Bloom-filter skip indexes for high-cardinality ID drilldowns.
+   -- Equality and IN filters on these columns can skip granule chunks that
+   -- definitely do not contain the value. GRANULARITY 2 = 16K-row chunks.
+   -- ID columns are out-of-sort-key, so without these every drilldown scans
+   -- every row in the time range.
+   INDEX idx_traceId traceId TYPE bloom_filter(0.01) GRANULARITY 2,
+   INDEX idx_threadId threadId TYPE bloom_filter(0.01) GRANULARITY 2,
+   INDEX idx_resourceId resourceId TYPE bloom_filter(0.01) GRANULARITY 2,
+   INDEX idx_userId userId TYPE bloom_filter(0.01) GRANULARITY 2,
+   INDEX idx_organizationId organizationId TYPE bloom_filter(0.01) GRANULARITY 2,
+   INDEX idx_experimentId experimentId TYPE bloom_filter(0.01) GRANULARITY 2,
+   INDEX idx_runId runId TYPE bloom_filter(0.01) GRANULARITY 2,
+   INDEX idx_sessionId sessionId TYPE bloom_filter(0.01) GRANULARITY 2,
+   INDEX idx_requestId requestId TYPE bloom_filter(0.01) GRANULARITY 2
  )
  ENGINE = ReplacingMergeTree
  PARTITION BY toDate(timestamp)
@@ -3286,6 +3390,7 @@ SELECT DISTINCT kind, key1, key2, value FROM (
  var ALL_TABLE_DDL = [
    SPAN_EVENTS_DDL,
    TRACE_ROOTS_DDL,
+   TRACE_BRANCHES_DDL,
    METRIC_EVENTS_DDL,
    LOG_EVENTS_DDL,
    SCORE_EVENTS_DDL,
@@ -3293,7 +3398,7 @@ var ALL_TABLE_DDL = [
    DISCOVERY_VALUES_DDL,
    DISCOVERY_PAIRS_DDL
  ];
- var ALL_MV_DDL = [TRACE_ROOTS_MV_DDL];
+ var ALL_MV_DDL = [TRACE_ROOTS_MV_DDL, TRACE_BRANCHES_MV_DDL];
  var DISCOVERY_MV_DDL = [DISCOVERY_VALUES_MV_DDL, DISCOVERY_PAIRS_MV_DDL];
  var ALL_MIGRATIONS = [
    // Span events
@@ -3319,11 +3424,25 @@ var ALL_MIGRATIONS = [
    // Feedback
    `ALTER TABLE ${TABLE_FEEDBACK_EVENTS} ADD COLUMN IF NOT EXISTS entityVersionId Nullable(String)`,
    `ALTER TABLE ${TABLE_FEEDBACK_EVENTS} ADD COLUMN IF NOT EXISTS parentEntityVersionId Nullable(String)`,
-   `ALTER TABLE ${TABLE_FEEDBACK_EVENTS} ADD COLUMN IF NOT EXISTS rootEntityVersionId Nullable(String)`
+   `ALTER TABLE ${TABLE_FEEDBACK_EVENTS} ADD COLUMN IF NOT EXISTS rootEntityVersionId Nullable(String)`,
+   // Metric skip indexes — additive, instant DDL. Existing parts keep no index
+   // until merged or `MATERIALIZE INDEX` is run; new parts are bloom-filtered
+   // immediately. With normal retention turning over the table, the index
+   // converges to full coverage without an explicit backfill.
+   `ALTER TABLE ${TABLE_METRIC_EVENTS} ADD INDEX IF NOT EXISTS idx_traceId traceId TYPE bloom_filter(0.01) GRANULARITY 2`,
+   `ALTER TABLE ${TABLE_METRIC_EVENTS} ADD INDEX IF NOT EXISTS idx_threadId threadId TYPE bloom_filter(0.01) GRANULARITY 2`,
+   `ALTER TABLE ${TABLE_METRIC_EVENTS} ADD INDEX IF NOT EXISTS idx_resourceId resourceId TYPE bloom_filter(0.01) GRANULARITY 2`,
+   `ALTER TABLE ${TABLE_METRIC_EVENTS} ADD INDEX IF NOT EXISTS idx_userId userId TYPE bloom_filter(0.01) GRANULARITY 2`,
+   `ALTER TABLE ${TABLE_METRIC_EVENTS} ADD INDEX IF NOT EXISTS idx_organizationId organizationId TYPE bloom_filter(0.01) GRANULARITY 2`,
+   `ALTER TABLE ${TABLE_METRIC_EVENTS} ADD INDEX IF NOT EXISTS idx_experimentId experimentId TYPE bloom_filter(0.01) GRANULARITY 2`,
+   `ALTER TABLE ${TABLE_METRIC_EVENTS} ADD INDEX IF NOT EXISTS idx_runId runId TYPE bloom_filter(0.01) GRANULARITY 2`,
+   `ALTER TABLE ${TABLE_METRIC_EVENTS} ADD INDEX IF NOT EXISTS idx_sessionId sessionId TYPE bloom_filter(0.01) GRANULARITY 2`,
+   `ALTER TABLE ${TABLE_METRIC_EVENTS} ADD INDEX IF NOT EXISTS idx_requestId requestId TYPE bloom_filter(0.01) GRANULARITY 2`
  ];
  var ALL_TABLE_NAMES = [
    TABLE_SPAN_EVENTS,
    TABLE_TRACE_ROOTS,
+   TABLE_TRACE_BRANCHES,
    TABLE_METRIC_EVENTS,
    TABLE_LOG_EVENTS,
    TABLE_SCORE_EVENTS,
@@ -3334,13 +3453,14 @@ var ALL_TABLE_NAMES = [
  var SIGNAL_TTL_COLUMNS = {
    [TABLE_SPAN_EVENTS]: "endedAt",
    [TABLE_TRACE_ROOTS]: "endedAt",
+   [TABLE_TRACE_BRANCHES]: "endedAt",
    [TABLE_METRIC_EVENTS]: "timestamp",
    [TABLE_LOG_EVENTS]: "timestamp",
    [TABLE_SCORE_EVENTS]: "timestamp",
    [TABLE_FEEDBACK_EVENTS]: "timestamp"
  };
  var SIGNAL_TO_TABLES = {
-   tracing: [TABLE_SPAN_EVENTS, TABLE_TRACE_ROOTS],
+   tracing: [TABLE_SPAN_EVENTS, TABLE_TRACE_ROOTS, TABLE_TRACE_BRANCHES],
    logs: [TABLE_LOG_EVENTS],
    metrics: [TABLE_METRIC_EVENTS],
    scores: [TABLE_SCORE_EVENTS],
@@ -4559,7 +4679,16 @@ var METRIC_TYPED_COLUMNS = /* @__PURE__ */ new Set([
    "costUnit"
  ]);
  var GROUP_BY_EXCLUDED2 = /* @__PURE__ */ new Set(["metadata", "scope", "costMetadata", "tags"]);
- function getAggregationSql2(aggregation, measure = "value") {
+ function resolveDistinctColumnSql(distinctColumn) {
+   if (!distinctColumn) {
+     throw new Error(`count_distinct aggregation requires a 'distinctColumn' argument`);
+   }
+   if (!storage.METRIC_DISTINCT_COLUMNS.includes(distinctColumn)) {
+     throw new Error(`Invalid distinctColumn: ${distinctColumn}`);
+   }
+   return utils.parseFieldKey(distinctColumn);
+ }
+ function getAggregationSql2(aggregation, measure = "value", distinctColumn) {
    switch (aggregation) {
      case "sum":
        return `sum(${measure})`;
@@ -4571,6 +4700,9 @@ function getAggregationSql2(aggregation, measure = "value") {
        return `max(${measure})`;
      case "count":
        return `toFloat64(count(${measure}))`;
+     case "count_distinct": {
+       return `toFloat64(uniq(${resolveDistinctColumnSql(distinctColumn)}))`;
+     }
      case "last":
        return `argMax(${measure}, timestamp)`;
      default:
@@ -4709,7 +4841,7 @@ async function listMetrics(client, args) {
    };
  }
  async function getMetricAggregate(client, args) {
-   const aggSql = getAggregationSql2(args.aggregation);
+   const aggSql = getAggregationSql2(args.aggregation, "value", args.distinctColumn);
    const nameFilter = buildMetricNameFilter(args.name);
    const signalFilter = buildMetricsFilterConditions(args.filters);
    const combined = mergeFilters2(nameFilter, signalFilter);
@@ -4781,7 +4913,7 @@ async function getMetricAggregate(client, args) {
    return { value, estimatedCost: costSummary.estimatedCost, costUnit: costSummary.costUnit };
  }
  async function getMetricBreakdown(client, args) {
-   const aggSql = getAggregationSql2(args.aggregation);
+   const aggSql = getAggregationSql2(args.aggregation, "value", args.distinctColumn);
    const nameFilter = buildMetricNameFilter(args.name);
    const signalFilter = buildMetricsFilterConditions(args.filters);
    const combined = mergeFilters2(nameFilter, signalFilter);
@@ -4792,14 +4924,22 @@ async function getMetricBreakdown(client, args) {
    const groupByCols = resolvedGroupBy.map((e) => e.groupSql).join(", ");
    const allConditions = [...combined.conditions, ...labelExclusions];
    const fullWhereClause = allConditions.length ? `WHERE ${allConditions.join(" AND ")}` : "";
+   const orderDirection = args.orderDirection === "ASC" ? "ASC" : "DESC";
+   const limitClause = typeof args.limit === "number" ? `LIMIT {breakdown_limit:UInt32}` : "";
+   const extraParams = typeof args.limit === "number" ? { breakdown_limit: args.limit } : {};
    const sql = `
  SELECT ${selectGroupBy}, ${aggSql} AS value, ${getCostSummarySelect()}
  FROM ${TABLE_METRIC_EVENTS}
  ${fullWhereClause}
  GROUP BY ${groupByCols}
- ORDER BY value DESC
+ ORDER BY value ${orderDirection}
+ ${limitClause}
  `;
-   const rows = await queryJson3(client, sql, { ...combined.params, ...labelParams });
+   const rows = await queryJson3(client, sql, {
+     ...combined.params,
+     ...labelParams,
+     ...extraParams
+   });
    const groups = rows.map((row) => {
      const dimensions = {};
      for (const entry of resolvedGroupBy) {
@@ -4817,7 +4957,7 @@ async function getMetricBreakdown(client, args) {
    return { groups };
  }
  async function getMetricTimeSeries(client, args) {
-   const aggSql = getAggregationSql2(args.aggregation);
+   const aggSql = getAggregationSql2(args.aggregation, "value", args.distinctColumn);
    const intervalSql = getIntervalSql2(args.interval);
    const nameFilter = buildMetricNameFilter(args.name);
    const signalFilter = buildMetricsFilterConditions(args.filters);
@@ -5215,6 +5355,14 @@ async function listScores(client, args) {
      scores: rows.map(rowToScoreRecord)
    };
  }
+ async function getScoreById(client, scoreId) {
+   const rows = await queryJson4(
+     client,
+     `SELECT * FROM ${TABLE_SCORE_EVENTS} WHERE scoreId = {scoreId:String} LIMIT 1`,
+     { scoreId }
+   );
+   return rows[0] ? rowToScoreRecord(rows[0]) : null;
+ }
  async function getScoreAggregate(client, args) {
    const aggSql = getAggregationSql3(args.aggregation);
    const identity = buildScoreIdentityFilter(args);
@@ -5470,8 +5618,7 @@ async function listTraces(client, args) {
      spans: storage.toTraceSpans(spans)
    };
  }
-
- // src/storage/domains/observability/v-next/tracing.ts
+ var BRANCH_SPAN_TYPE_SQL_LIST = storage.BRANCH_SPAN_TYPES.map((t) => `'${t}'`).join(", ");
  async function createSpan(client, args) {
    const row = spanRecordToRow(args.span);
    await client.insert({
@@ -5491,6 +5638,29 @@ async function batchCreateSpans(client, args) {
      clickhouse_settings: CH_INSERT_SETTINGS
    });
  }
+ async function getSpans(client, args) {
+   if (args.spanIds.length === 0) {
+     return { traceId: args.traceId, spans: [] };
+   }
+   const result = await client.query({
+     query: `
+       SELECT * FROM (
+         SELECT *
+         FROM ${TABLE_SPAN_EVENTS}
+         WHERE traceId = {traceId:String}
+           AND spanId IN {spanIds:Array(String)}
+         ORDER BY dedupeKey, endedAt DESC
+         LIMIT 1 BY dedupeKey
+       )
+     `,
+     query_params: { traceId: args.traceId, spanIds: args.spanIds },
+     format: "JSONEachRow",
+     clickhouse_settings: CH_SETTINGS
+   });
+   const rows = await result.json();
+   const spans = rows.map(rowToSpanRecord);
+   return { traceId: args.traceId, spans };
+ }
  async function getSpan(client, args) {
    const result = await client.query({
      query: `
@@ -5579,6 +5749,169 @@ async function batchDeleteTraces(client, args) {
      })
    ]);
  }
+ async function listBranches(client, args) {
+   const { filters, pagination, orderBy } = storage.listBranchesArgsSchema.parse(args);
+   const page = pagination?.page ?? 0;
+   const perPage = pagination?.perPage ?? 10;
+   const conditions = [];
+   const params = {};
+   if (filters?.spanType) {
+     conditions.push(`spanType = {spanType:String}`);
+     params.spanType = filters.spanType;
+   } else {
+     conditions.push(`spanType IN (${BRANCH_SPAN_TYPE_SQL_LIST})`);
+   }
+   if (filters?.startedAt?.start) {
+     const op = filters.startedAt.startExclusive ? ">" : ">=";
+     conditions.push(`startedAt ${op} {startedAtStart:DateTime64(3)}`);
+     params.startedAtStart = filters.startedAt.start.getTime();
+   }
+   if (filters?.startedAt?.end) {
+     const op = filters.startedAt.endExclusive ? "<" : "<=";
+     conditions.push(`startedAt ${op} {startedAtEnd:DateTime64(3)}`);
+     params.startedAtEnd = filters.startedAt.end.getTime();
+   }
+   if (filters?.endedAt?.start) {
+     const op = filters.endedAt.startExclusive ? ">" : ">=";
+     conditions.push(`endedAt ${op} {endedAtStart:DateTime64(3)}`);
+     params.endedAtStart = filters.endedAt.start.getTime();
+   }
+   if (filters?.endedAt?.end) {
+     const op = filters.endedAt.endExclusive ? "<" : "<=";
+     conditions.push(`endedAt ${op} {endedAtEnd:DateTime64(3)}`);
+     params.endedAtEnd = filters.endedAt.end.getTime();
+   }
+   const eq = [
+     { col: "traceId", value: filters?.traceId, param: "traceId" },
+     { col: "entityType", value: filters?.entityType, param: "entityType" },
+     { col: "entityId", value: filters?.entityId, param: "entityId" },
+     { col: "entityName", value: filters?.entityName, param: "entityName" },
+     { col: "entityVersionId", value: filters?.entityVersionId, param: "entityVersionId" },
+     { col: "parentEntityVersionId", value: filters?.parentEntityVersionId, param: "parentEntityVersionId" },
+     { col: "parentEntityType", value: filters?.parentEntityType, param: "parentEntityType" },
+     { col: "parentEntityId", value: filters?.parentEntityId, param: "parentEntityId" },
+     { col: "parentEntityName", value: filters?.parentEntityName, param: "parentEntityName" },
+     { col: "rootEntityVersionId", value: filters?.rootEntityVersionId, param: "rootEntityVersionId" },
+     { col: "rootEntityType", value: filters?.rootEntityType, param: "rootEntityType" },
+     { col: "rootEntityId", value: filters?.rootEntityId, param: "rootEntityId" },
+     { col: "rootEntityName", value: filters?.rootEntityName, param: "rootEntityName" },
+     { col: "experimentId", value: filters?.experimentId, param: "experimentId" },
+     { col: "userId", value: filters?.userId, param: "userId" },
+     { col: "organizationId", value: filters?.organizationId, param: "organizationId" },
+     { col: "resourceId", value: filters?.resourceId, param: "resourceId" },
+     { col: "runId", value: filters?.runId, param: "runId" },
+     { col: "sessionId", value: filters?.sessionId, param: "sessionId" },
+     { col: "threadId", value: filters?.threadId, param: "threadId" },
+     { col: "requestId", value: filters?.requestId, param: "requestId" },
+     { col: "environment", value: filters?.environment, param: "environment" },
+     { col: "executionSource", value: filters?.source, param: "source" },
+     { col: "serviceName", value: filters?.serviceName, param: "serviceName" }
+   ];
+   for (const { col, value, param } of eq) {
+     if (value == null) continue;
+     conditions.push(`${col} = {${param}:String}`);
+     params[param] = value;
+   }
+   if (filters?.tags && filters.tags.length > 0) {
+     for (let i = 0; i < filters.tags.length; i++) {
+       const tag = filters.tags[i];
+       if (typeof tag !== "string" || tag.trim() === "") continue;
+       const param = `tag_${i}`;
+       conditions.push(`has(tags, {${param}:String})`);
+       params[param] = tag;
+     }
+   }
+   if (filters?.metadata != null && typeof filters.metadata === "object") {
+     let i = 0;
+     for (const [key, value] of Object.entries(filters.metadata)) {
+       if (typeof value !== "string") continue;
+       const keyParam = `meta_k_${i}`;
+       const valParam = `meta_v_${i}`;
+       conditions.push(`metadataSearch[{${keyParam}:String}] = {${valParam}:String}`);
+       params[keyParam] = key;
+       params[valParam] = value;
+       i++;
+     }
+   }
+   if (filters?.scope != null && typeof filters.scope === "object") {
+     let i = 0;
+     for (const [key, value] of Object.entries(filters.scope)) {
+       if (value === void 0) continue;
+       const normalized = typeof value === "string" ? value : JSON.stringify(value);
+       if (normalized == null) continue;
+       const keyParam = `scope_k_${i}`;
+       const valParam = `scope_v_${i}`;
+       conditions.push(`JSONExtractString(scope, {${keyParam}:String}) = {${valParam}:String}`);
+       params[keyParam] = key;
+       params[valParam] = normalized;
+       i++;
+     }
+   }
+   if (filters?.status === storage.TraceStatus.ERROR) {
+     conditions.push(`error IS NOT NULL`);
+   } else if (filters?.status === storage.TraceStatus.SUCCESS) {
+     conditions.push(`error IS NULL`);
+   } else if (filters?.status === storage.TraceStatus.RUNNING) {
+     conditions.push("1 = 0");
+   }
+   const whereClause = conditions.length > 0 ? `WHERE ${conditions.join(" AND ")}` : "";
+   const sortField = orderBy?.field === "endedAt" ? "endedAt" : "startedAt";
+   const sortDirection = orderBy?.direction === "ASC" ? "ASC" : "DESC";
+   const countResult = await client.query({
+     query: `
+       SELECT count() as cnt FROM (
+         SELECT dedupeKey
+         FROM ${TABLE_TRACE_BRANCHES}
+         ${whereClause}
+         ORDER BY dedupeKey
+         LIMIT 1 BY dedupeKey
+       )
+     `,
+     query_params: params,
+     format: "JSONEachRow",
+     clickhouse_settings: CH_SETTINGS
+   });
+   const countRows = await countResult.json();
+   const total = Number(countRows[0]?.cnt ?? 0);
+   if (total === 0) {
+     return {
+       pagination: { total: 0, page, perPage, hasMore: false },
+       branches: []
+     };
+   }
+   const dataResult = await client.query({
+     query: `
+       SELECT * FROM (
+         SELECT *
+         FROM ${TABLE_TRACE_BRANCHES}
+         ${whereClause}
+         ORDER BY dedupeKey
+         LIMIT 1 BY dedupeKey
+       )
+       ORDER BY ${sortField} ${sortDirection}, dedupeKey ASC
+       LIMIT {limit:UInt32}
+       OFFSET {offset:UInt32}
+     `,
+     query_params: {
+       ...params,
+       limit: perPage,
+       offset: page * perPage
+     },
+     format: "JSONEachRow",
+     clickhouse_settings: CH_SETTINGS
+   });
+   const rows = await dataResult.json();
+   const spans = rows.map(rowToSpanRecord);
+   return {
+     pagination: {
+       total,
+       page,
+       perPage,
+       hasMore: (page + 1) * perPage < total
+     },
+     branches: storage.toTraceSpans(spans)
+   };
+ }
 
  // src/storage/domains/observability/v-next/index.ts
  function buildSignalMigrationRequiredMessage(args) {
@@ -5757,6 +6090,22 @@ var ObservabilityStorageClickhouseVNext = class extends storage.ObservabilitySto
        );
      }
    }
+   async getSpans(args) {
+     try {
+       return await getSpans(this.#client, args);
+     } catch (error$1) {
+       if (error$1 instanceof error.MastraError) throw error$1;
+       throw new error.MastraError(
+         {
+           id: storage.createStorageErrorId("CLICKHOUSE", "GET_SPANS", "FAILED"),
+           domain: error.ErrorDomain.STORAGE,
+           category: error.ErrorCategory.THIRD_PARTY,
+           details: { traceId: args.traceId, count: args.spanIds.length }
+         },
+         error$1
+       );
+     }
+   }
    async getRootSpan(args) {
      try {
        return await getRootSpan(this.#client, args);
@@ -5820,6 +6169,21 @@ var ObservabilityStorageClickhouseVNext = class extends storage.ObservabilitySto
        );
      }
    }
+   async listBranches(args) {
+     try {
+       return await listBranches(this.#client, args);
+     } catch (error$1) {
+       if (error$1 instanceof error.MastraError) throw error$1;
+       throw new error.MastraError(
+         {
+           id: storage.createStorageErrorId("CLICKHOUSE", "LIST_BRANCHES", "FAILED"),
+           domain: error.ErrorDomain.STORAGE,
+           category: error.ErrorCategory.THIRD_PARTY
+         },
+         error$1
+       );
+     }
+   }
    async batchCreateLogs(args) {
      try {
        await batchCreateLogs(this.#client, args);
@@ -5928,6 +6292,22 @@ var ObservabilityStorageClickhouseVNext = class extends storage.ObservabilitySto
        );
      }
    }
+   async getScoreById(scoreId) {
+     try {
+       return await getScoreById(this.#client, scoreId);
+     } catch (error$1) {
+       if (error$1 instanceof error.MastraError) throw error$1;
+       throw new error.MastraError(
+         {
+           id: storage.createStorageErrorId("CLICKHOUSE", "GET_SCORE_BY_ID", "FAILED"),
+           domain: error.ErrorDomain.STORAGE,
+           category: error.ErrorCategory.THIRD_PARTY,
+           details: { scoreId }
+         },
+         error$1
+       );
+     }
+   }
    async createFeedback(args) {
      try {
        await createFeedback(this.#client, args);
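The `listBranches` implementation above interpolates a fixed span-type allowlist into SQL and derives `hasMore` from 0-based page math. Both pieces are easy to sanity-check in isolation; this standalone sketch mirrors the diff's expressions (illustrative only, not the package's exports):

```typescript
// Span types that qualify as trace "branches", mirroring
// BRANCH_SPAN_TYPE_VALUES in the diff above.
const BRANCH_SPAN_TYPES = [
  'agent_run', 'workflow_run', 'processor_run', 'scorer_run',
  'rag_ingestion', 'tool_call', 'mcp_tool_call',
];

// Mirrors the MV predicate construction: each type single-quoted and
// comma-joined, then embedded as `WHERE spanType IN (...)`.
function branchTypeSqlList(types: string[]): string {
  return types.map(t => `'${t}'`).join(', ');
}

// Mirrors listBranches pagination: pages are 0-based, so another page exists
// exactly when the rows consumed through the current page fall short of total.
function hasMore(page: number, perPage: number, total: number): boolean {
  return (page + 1) * perPage < total;
}
```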