trifle-stats 2.3.0 → 2.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: cace74102291a5e8b9d1de50cc489ea5ca0514e7b23a4a73fb218f9626569f44
- data.tar.gz: 7149e5a8928348b7631897abb94c33beca2963b9e1a4d8c1490168e2a2bf4e4b
+ metadata.gz: a7f3833c65dc5e0a762b93c6489280be068ed5d8e5309860ec8a5827018e287b
+ data.tar.gz: f019c55266e3b283da326a49861cf51cf1e85f3a7fac287b102dfb8e4eb7f9a2
  SHA512:
- metadata.gz: 6578def9715b7cdee5c236ea592f482888c0c29304224bc5cdae419df49900613cd115a0c5f3e2e920f04aea02cdbe433ad0742b6ceaf981a3a46ecd5c3c541b
- data.tar.gz: 4d257ea260be8adc9d8a1868199cf1ceb983fc240312b13dc63444bf21ff1c0c2493aa2118a80001c3419758536a2d7e7678bb852b2583930f1e0b58f72b17ad
+ metadata.gz: c7a67031c4a61531f81bca6bcff2b5726628eb95d8c4ff03ef49721f052cdeaaadd5b438a5670a5c55d070afc7af320597075d511f38d9b4eaa27d33f76b8dda
+ data.tar.gz: 0352b77fd921945ce1bb96d815117fbb838f6aafcdd63405bf9ce3a0afcceaf04753dd42f7a3f93e60052c329a86752fce7688820adc70599006977485f922eb
@@ -42,6 +42,19 @@ jobs:
  ports:
  - 6379:6379

+ mysql:
+ image: mysql:8
+ env:
+ MYSQL_ROOT_PASSWORD: password
+ MYSQL_DATABASE: trifle_stats_test
+ options: >-
+ --health-cmd "mysqladmin ping -h 127.0.0.1 -uroot -ppassword"
+ --health-interval 10s
+ --health-timeout 5s
+ --health-retries 10
+ ports:
+ - 3306:3306
+
  mongodb:
  image: mongo:6.0
  env:
@@ -64,6 +77,11 @@ jobs:
  DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
  REDIS_URL: redis://localhost:6379/0
  MONGODB_URL: mongodb://root:password@localhost:27017/test_db?authSource=admin
+ MYSQL_HOST: 127.0.0.1
+ MYSQL_PORT: 3306
+ MYSQL_USER: root
+ MYSQL_PASSWORD: password
+ MYSQL_DATABASE: trifle_stats_test

  steps:
  - uses: actions/checkout@v4
@@ -94,6 +112,12 @@ jobs:
  sleep 2
  done

+ # Wait for MySQL
+ until timeout 1 bash -c "</dev/tcp/localhost/3306"; do
+ echo "Waiting for MySQL..."
+ sleep 2
+ done
+
  - name: Setup Database
  run: |
  # Create database if needed (adjust based on your setup)
@@ -107,6 +131,11 @@ jobs:
  DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
  REDIS_URL: redis://localhost:6379/0
  MONGODB_URL: mongodb://root:password@localhost:27017/test_db?authSource=admin
+ MYSQL_HOST: 127.0.0.1
+ MYSQL_PORT: 3306
+ MYSQL_USER: root
+ MYSQL_PASSWORD: password
+ MYSQL_DATABASE: trifle_stats_test

  - name: Rubocop
  run: bundle exec rubocop
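The workflow's readiness checks above poll a TCP port until the service answers. A Ruby equivalent of that wait loop, for local scripts that need the same guard (the `wait_for_port` helper is my sketch, not part of the workflow or the gem):

```ruby
require 'socket'
require 'timeout'

# Succeed once something accepts TCP connections on host:port;
# give up after a bounded number of attempts.
def wait_for_port(host, port, retries: 10, interval: 1)
  retries.times do
    begin
      Timeout.timeout(1) { TCPSocket.new(host, port).close }
      return true
    rescue StandardError
      sleep interval
    end
  end
  false
end
```

In the workflow itself this is the inline `until timeout 1 bash -c "</dev/tcp/localhost/3306"` loop guarding the test step.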
data/Gemfile CHANGED
@@ -6,6 +6,7 @@ gemspec
  gem "rake", "~> 12.0"
  gem "rspec", "~> 3.0"
  gem "mongo", require: false
+ gem "mysql2", require: false
  gem "pg", require: false
  gem "redis", require: false
  gem "sqlite3", require: false
data/Gemfile.lock CHANGED
@@ -1,21 +1,24 @@
  PATH
  remote: .
  specs:
- trifle-stats (2.3.0)
+ trifle-stats (2.4.0)
  tzinfo (~> 2.0)

  GEM
  remote: https://rubygems.org/
  specs:
  ast (2.4.2)
+ bigdecimal (4.0.1)
  bson (4.12.1)
  byebug (11.1.3)
- concurrent-ruby (1.3.5)
+ concurrent-ruby (1.3.6)
  diff-lcs (1.4.4)
  dotenv (2.7.6)
  mini_portile2 (2.8.9)
  mongo (2.14.0)
  bson (>= 4.8.2, < 5.0.0)
+ mysql2 (0.5.7)
+ bigdecimal
  parallel (1.20.1)
  parser (3.0.0.0)
  ast (~> 2.4.1)
@@ -66,6 +69,7 @@ DEPENDENCIES
  byebug
  dotenv
  mongo
+ mysql2
  pg
  rake (~> 12.0)
  redis
data/README.md CHANGED
@@ -3,92 +3,83 @@
  [![Gem Version](https://badge.fury.io/rb/trifle-stats.svg)](https://rubygems.org/gems/trifle-stats)
  [![Ruby](https://github.com/trifle-io/trifle-stats/workflows/Ruby/badge.svg?branch=main)](https://github.com/trifle-io/trifle-stats)

- Simple analytics backed by Redis, Postgres, MongoDB, Google Analytics, Segment, or whatever. It gets you from having bunch of events occuring within few minutes to being able to say what happened on 25th January 2021.
+ Time-series metrics for Ruby. Track anything (signups, revenue, job durations) using the database you already have. No InfluxDB. No TimescaleDB. Just one call and your existing Postgres, Redis, MongoDB, MySQL, or SQLite.

- ## Documentation
+ Part of the [Trifle](https://trifle.io) ecosystem. Also available in [Elixir](https://github.com/trifle-io/trifle_stats) and [Go](https://github.com/trifle-io/trifle_stats_go).
+
+ ## Why Trifle::Stats?

- For comprehensive guides, API reference, and examples, visit [trifle.io/trifle-stats-rb](https://trifle.io/trifle-stats-rb)
+ - **No new infrastructure.** Uses your existing database. No dedicated time-series DB to deploy, maintain, or pay for.
+ - **One call, many dimensions.** Track nested breakdowns (revenue by country by channel) in a single `track` call. Automatic rollup across dynamic time granularities (`1m`, `6h`, `1d`, etc.).
+ - **Library-first.** Start with the gem. Add [Trifle App](https://trifle.io/product/app) dashboards, [Trifle CLI](https://github.com/trifle-io/trifle-cli) terminal access, or AI agent integration via MCP when you need them.

- ## Installation
+ ## Quick Start

- Add this line to your application's Gemfile:
+ ### 1. Install

  ```ruby
  gem 'trifle-stats'
  ```

- And then execute:
-
- ```bash
- $ bundle install
- ```
-
- Or install it yourself as:
-
- ```bash
- $ gem install trifle-stats
- ```
-
- ## Quick Start
-
- ### 1. Configure
+ ### 2. Configure

  ```ruby
- require 'trifle/stats'
-
  Trifle::Stats.configure do |config|
- config.driver = Trifle::Stats::Driver::Redis.new(Redis.new)
- config.granularities = ['1m', '1h', '1d', '1w', '1mo', '1q', '1y']
+ config.driver = Trifle::Stats::Driver::Postgres.new(ActiveRecord::Base.connection)
+ config.granularities = ['1h', '1d', '1w', '1mo']
  end
  ```

- ### 2. Track events
+ ### 3. Track

  ```ruby
- Trifle::Stats.track(key: 'event::logs', at: Time.now, values: { count: 1, duration: 2.11 })
+ Trifle::Stats.track(
+ key: 'orders',
+ at: Time.now,
+ values: {
+ count: 1,
+ revenue: 49_90,
+ revenue_by_country: { us: 49_90 },
+ revenue_by_channel: { organic: 49_90 }
+ }
+ )
  ```

- ### 3. Retrieve values
+ ### 4. Query

  ```ruby
- Trifle::Stats.values(key: 'event::logs', from: 1.month.ago, to: Time.now, granularity: :day)
- #=> {:at=>[Wed, 25 Jan 2023 00:00:00 +0000], :values=>[{"count"=>1, "duration"=>2.11}]}
+ Trifle::Stats.values(
+ key: 'orders',
+ from: 1.week.ago,
+ to: Time.now,
+ granularity: :day
+ )
+ #=> { at: [Mon, Tue, Wed, ...], values: [{ "count" => 12, "revenue" => 598_80, ... }, ...] }
  ```

  ## Drivers

- Trifle::Stats supports multiple backends:
-
- - **Redis** - Fast, in-memory storage
- - **Postgres** - SQL database with JSONB support
- - **SQLite** - SQL database in a file
- - **MongoDB** - Document database
- - **Process** - Thread-safe in-memory storage (development/testing)
- - **Dummy** - No-op driver for disabled analytics
+ | Driver | Backend | Best for |
+ |--------|---------|----------|
+ | **Postgres** | JSONB upsert | Most production apps |
+ | **Redis** | Hash increment | High-throughput counters |
+ | **MongoDB** | Document upsert | Document-oriented stacks |
+ | **MySQL** | JSON column | MySQL shops |
+ | **SQLite** | JSON1 extension | Single-server apps, dev/test |
+ | **Process** | In-memory | Testing |
+ | **Dummy** | No-op | Disabled analytics |

  ## Features

- - **Multiple time granularities** - Track data across different time periods
- - **Custom aggregators** - Sum, average, min, max with custom logic
- - **Series operations** - Advanced data manipulation and calculations
- - **Performance optimized** - Efficient storage and retrieval patterns
- - **Buffered writes** - Queue metrics locally before flushing to the driver
- - **Driver flexibility** - Switch between storage backends easily
+ - **Dynamic time granularities.** Use any interval like `1m`, `10m`, `1h`, `6h`, `1d`, `1w`, `1mo`, `1q`, `1y`.
+ - **Nested value hierarchies.** Track dimensional breakdowns in a single call.
+ - **Series operations.** Aggregators (sum, avg, min, max), transponders, formatters.
+ - **Buffered writes.** Queue metrics in-memory before flushing to reduce write load.
+ - **Driver flexibility.** Switch backends without changing application code.

  ## Buffered Persistence

- Every `track/assert/assort` call can be buffered before touching the driver. The buffer is enabled by
- default and flushes on an interval, when the queue reaches a configurable size, and again on shutdown
- (`SIGTERM`/`at_exit`).
-
- Available configuration options:
-
- - `buffer_enabled` (default: `true`) – Disable to write-through synchronously
- - `buffer_duration` (default: `1` second) – Maximum time between automatic flushes
- - `buffer_size` (default: `256`) – Maximum queued actions before forcing a flush
- - `buffer_aggregate` (default: `true`) – Combine repeated operations on the same key set
-
- Example:
+ Every `track`/`assert`/`assort` call is buffered by default. The buffer flushes on an interval, when the queue reaches a configurable size, and on shutdown (`SIGTERM`/`at_exit`).

  ```ruby
  Trifle::Stats.configure do |config|
@@ -99,34 +90,25 @@ Trifle::Stats.configure do |config|
  end
  ```

- If your application manages database connections manually (e.g. ActiveRecord with a pool size of 1),
- increase the pool size or disable buffering to avoid starving other threads.
-
- ## Testing
-
- Tests are run against all supported drivers. To run the test suite:
-
- ```bash
- $ bundle exec rspec
- ```
-
- Ensure Redis, Postgres, and MongoDB are running locally. The test suite will handle database setup automatically.
+ Set `buffer_enabled = false` for synchronous write-through.

- Tests are meant to be **simple and isolated**. Every test should be **independent** and able to run in any order. Tests should be **self-contained** and set up their own configuration. This makes it easier to debug and maintain the test suite.
-
- Use **single layer testing** to focus on testing a specific class or module in isolation. Use **appropriate stubbing** for driver methods when testing higher-level operations.
+ ## Documentation

- Driver tests use real database connections for accurate behavior validation. The `Process` driver is preferred for in-memory testing environments.
+ Full guides, API reference, and examples at **[docs.trifle.io/trifle-stats-rb](https://docs.trifle.io/trifle-stats-rb)**

- **Repeat yourself** in test setup for clarity rather than complex shared setups that can hide dependencies.
+ ## Trifle Ecosystem

- For performance testing:
+ Trifle::Stats is the tracking layer. The ecosystem grows with you:

- ```bash
- $ cd specs/performance
- $ bundle install
- $ ruby run.rb 100 '{"a":1}'
- ```
+ | Component | What it does |
+ |-----------|-------------|
+ | **[Trifle App](https://trifle.io/product/app)** | Dashboards, alerts, scheduled reports, AI-powered chat. Cloud or self-hosted. |
+ | **[Trifle CLI](https://github.com/trifle-io/trifle-cli)** | Query and push metrics from the terminal. MCP server mode for AI agents. |
+ | **[Trifle::Stats (Elixir)](https://github.com/trifle-io/trifle_stats)** | Elixir implementation with the same API and storage format. |
+ | **[Trifle Stats (Go)](https://github.com/trifle-io/trifle_stats_go)** | Go implementation with the same API and storage format. |
+ | **[Trifle::Traces](https://github.com/trifle-io/trifle-traces)** | Structured execution tracing for background jobs. |
+ | **[Trifle::Logs](https://github.com/trifle-io/trifle-logs)** | File-based log storage with ripgrep-powered search. |
+ | **[Trifle::Docs](https://github.com/trifle-io/trifle-docs)** | Map a folder of Markdown files to documentation URLs. |

  ## Contributing

@@ -105,8 +105,12 @@ module Trifle
  reset!
  end

- def store(operation, keys, values)
- aggregate? ? store_aggregate(operation, keys, values) : store_linear(operation, keys, values)
+ def store(operation, keys, values, tracking_key)
+ if aggregate?
+ store_aggregate(operation, keys, values, tracking_key)
+ else
+ store_linear(operation, keys, values, tracking_key)
+ end
  @operation_count += 1
  end

@@ -135,17 +139,17 @@ module Trifle
  @operation_count = 0
  end

- def store_linear(operation, keys, values)
- @actions << { operation: operation, keys: keys, values: duplicate(values), count: 1 }
+ def store_linear(operation, keys, values, tracking_key)
+ @actions << build_action(operation, keys, values, tracking_key)
  end

- def store_aggregate(operation, keys, values)
- signature = signature_for(operation, keys)
+ def store_aggregate(operation, keys, values, tracking_key)
+ signature = signature_for(operation, keys, tracking_key)
  if (entry = @actions[signature])
  entry[:values] = merge_values(operation, entry[:values], values)
  entry[:count] += 1
  else
- @actions[signature] = { operation: operation, keys: keys, values: duplicate(values), count: 1 }
+ @actions[signature] = build_action(operation, keys, values, tracking_key)
  end
  end

@@ -172,11 +176,22 @@ module Trifle
  current
  end

- def signature_for(operation, keys)
+ def signature_for(operation, keys, tracking_key)
+ tracking_marker = tracking_key || '__tracked__'
  identifiers = keys.map do |key|
  [key.prefix, key.key, key.granularity, key.at&.to_i].join(':')
  end
- "#{operation}-#{identifiers.join('|')}"
+ "#{operation}-#{tracking_marker}-#{identifiers.join('|')}"
+ end
+
+ def build_action(operation, keys, values, tracking_key)
+ {
+ operation: operation,
+ keys: keys,
+ values: duplicate(values),
+ count: 1,
+ tracking_key: tracking_key
+ }
  end

  def duplicate(value)
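The `signature_for` change above makes the tracking key part of the buffer's aggregation signature, so repeated operations only collapse together when they share it. A simplified standalone sketch of that idea (hypothetical code with plain string identifiers, not the gem's internals):

```ruby
# Buffered actions aggregate only when operation, key identifiers,
# and tracking key all match.
def signature_for(operation, identifiers, tracking_key)
  marker = tracking_key || '__tracked__'
  "#{operation}-#{marker}-#{identifiers.join('|')}"
end

def store(actions, operation, identifiers, values, tracking_key: nil)
  signature = signature_for(operation, identifiers, tracking_key)
  if (entry = actions[signature])
    # Same signature: sum the values and bump the repeat count.
    entry[:values].merge!(values) { |_k, a, b| a + b }
    entry[:count] += 1
  else
    actions[signature] = { operation: operation, values: values.dup, count: 1 }
  end
end

actions = {}
store(actions, :inc, ['stats::orders::1d'], { count: 1 }, tracking_key: 'orders')
store(actions, :inc, ['stats::orders::1d'], { count: 2 }, tracking_key: 'orders')
store(actions, :inc, ['stats::orders::1d'], { count: 1 }, tracking_key: 'checkout')
# The first two calls collapse into one buffered action; the third,
# with a different tracking key, stays separate.
```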
@@ -231,12 +246,12 @@ module Trifle
  self.class.register(self)
  end

- def inc(keys:, values:)
- enqueue(:inc, keys: keys, values: values)
+ def inc(keys:, values:, tracking_key: nil)
+ enqueue(:inc, keys: keys, values: values, tracking_key: tracking_key)
  end

- def set(keys:, values:)
- enqueue(:set, keys: keys, values: values)
+ def set(keys:, values:, tracking_key: nil)
+ enqueue(:set, keys: keys, values: values, tracking_key: tracking_key)
  end

  def flush!
@@ -263,10 +278,10 @@ module Trifle

  private

- def enqueue(operation, keys:, values:)
+ def enqueue(operation, keys:, values:, tracking_key:)
  should_flush = false
  @mutex.synchronize do
- @queue.store(operation, keys, values)
+ @queue.store(operation, keys, values, tracking_key)
  should_flush = @queue.size >= @size
  end

@@ -336,15 +351,29 @@ module Trifle
  end

  def process(actions)
- actions.each do |action|
- @driver.public_send(
- action[:operation], keys: action[:keys], values: action[:values], count: action[:count] || 1
- )
- end
+ actions.each { |action| dispatch_action(action) }
  ensure
  release_active_record_connection
  end

+ def dispatch_action(action)
+ payload = action_payload(action)
+
+ if action[:tracking_key]
+ @driver.public_send(action[:operation], **payload.merge(tracking_key: action[:tracking_key]))
+ else
+ @driver.public_send(action[:operation], **payload)
+ end
+ end
+
+ def action_payload(action)
+ {
+ keys: action[:keys],
+ values: action[:values],
+ count: action[:count] || 1
+ }
+ end
+
  def start_worker
  Thread.new do
  loop do
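The `dispatch_action` hunk above forwards `tracking_key` only when the action carries one, so the keyword never reaches a driver call that does not expect it. A minimal standalone sketch of that dispatch pattern (the `FakeDriver` class and `dispatch` helper are hypothetical, not the gem's API):

```ruby
# Stand-in driver that records what it was called with.
class FakeDriver
  attr_reader :calls

  def initialize
    @calls = []
  end

  def inc(keys:, values:, count: 1, tracking_key: nil)
    @calls << { keys: keys, values: values, count: count, tracking_key: tracking_key }
  end
end

# Forward a buffered action, merging tracking_key into the keyword
# payload only when the action provides one.
def dispatch(driver, action)
  payload = { keys: action[:keys], values: action[:values], count: action[:count] || 1 }
  if action[:tracking_key]
    driver.public_send(action[:operation], **payload.merge(tracking_key: action[:tracking_key]))
  else
    driver.public_send(action[:operation], **payload)
  end
end

driver = FakeDriver.new
dispatch(driver, { operation: :inc, keys: ['orders'], values: { count: 1 } })
dispatch(driver, { operation: :inc, keys: ['orders'], values: { count: 1 }, tracking_key: 'orders' })
```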
@@ -50,11 +50,12 @@ module Trifle
  identifier_for(key)
  end

- def system_data_for(key:, count: 1)
- self.class.pack(hash: { data: { count: count, keys: { key.key => count } } })
+ def system_data_for(key:, count: 1, tracking_key: nil)
+ tracking_key ||= key.key
+ self.class.pack(hash: { data: { count: count, keys: { tracking_key => count } } })
  end

- def inc(keys:, values:, count: 1) # rubocop:disable Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/MethodLength, Metrics/PerceivedComplexity
+ def inc(keys:, values:, count: 1, tracking_key: nil) # rubocop:disable Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/MethodLength, Metrics/PerceivedComplexity
  data = self.class.pack(hash: { data: values })

  if @bulk_write
@@ -63,7 +64,14 @@ module Trifle
  expire_at = @expire_after ? key.at + @expire_after : nil

  ops << upsert_operation('$inc', filter: filter, data: data, expire_at: expire_at)
- ops << upsert_operation('$inc', filter: system_identifier_for(key: key), data: system_data_for(key: key, count: count), expire_at: expire_at) if @system_tracking # rubocop:disable Layout/LineLength
+ next unless @system_tracking
+
+ ops << upsert_operation(
+ '$inc',
+ filter: system_identifier_for(key: key),
+ data: system_data_for(key: key, count: count, tracking_key: tracking_key),
+ expire_at: expire_at
+ )
  end

  collection.bulk_write(operations)
@@ -74,12 +82,19 @@ module Trifle
  update = build_update('$inc', data: data, expire_at: expire_at)

  collection.update_many(filter, update, upsert: true)
- collection.update_many(system_identifier_for(key: key), build_update('$inc', data: system_data_for(key: key, count: count), expire_at: expire_at), upsert: true) if @system_tracking # rubocop:disable Layout/LineLength
+ next unless @system_tracking
+
+ system_update = build_update(
+ '$inc',
+ data: system_data_for(key: key, count: count, tracking_key: tracking_key),
+ expire_at: expire_at
+ )
+ collection.update_many(system_identifier_for(key: key), system_update, upsert: true)
  end
  end
  end

- def set(keys:, values:, count: 1) # rubocop:disable Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/MethodLength, Metrics/PerceivedComplexity
+ def set(keys:, values:, count: 1, tracking_key: nil) # rubocop:disable Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/MethodLength, Metrics/PerceivedComplexity
  data = self.class.pack(hash: { data: values })

  if @bulk_write
@@ -88,7 +103,14 @@ module Trifle
  expire_at = @expire_after ? key.at + @expire_after : nil

  ops << upsert_operation('$set', filter: filter, data: data, expire_at: expire_at)
- ops << upsert_operation('$inc', filter: system_identifier_for(key: key), data: system_data_for(key: key, count: count), expire_at: expire_at) if @system_tracking # rubocop:disable Layout/LineLength
+ next unless @system_tracking
+
+ ops << upsert_operation(
+ '$inc',
+ filter: system_identifier_for(key: key),
+ data: system_data_for(key: key, count: count, tracking_key: tracking_key),
+ expire_at: expire_at
+ )
  end

  collection.bulk_write(operations)
@@ -99,7 +121,14 @@ module Trifle
  update = build_update('$set', data: data, expire_at: expire_at)

  collection.update_many(filter, update, upsert: true)
- collection.update_many(system_identifier_for(key: key), build_update('$inc', data: system_data_for(key: key, count: count), expire_at: expire_at), upsert: true) if @system_tracking # rubocop:disable Layout/LineLength
+ next unless @system_tracking
+
+ system_update = build_update(
+ '$inc',
+ data: system_data_for(key: key, count: count, tracking_key: tracking_key),
+ expire_at: expire_at
+ )
+ collection.update_many(system_identifier_for(key: key), system_update, upsert: true)
  end
  end
  end
@@ -0,0 +1,305 @@
+ # frozen_string_literal: true
+
+ require 'json'
+ require 'time'
+ require_relative '../mixins/packer'
+
+ module Trifle
+ module Stats
+ module Driver
+ class Mysql # rubocop:disable Metrics/ClassLength
+ include Mixins::Packer
+ attr_accessor :client, :table_name, :ping_table_name
+
+ def initialize(client, table_name: 'trifle_stats', joined_identifier: :full, ping_table_name: nil, system_tracking: true) # rubocop:disable Layout/LineLength
+ @client = client
+ @table_name = table_name
+ @ping_table_name = ping_table_name || "#{table_name}_ping"
+ @joined_identifier = self.class.normalize_joined_identifier(joined_identifier)
+ @system_tracking = system_tracking
+ @separator = '::'
+ end
+
+ def self.setup!(client, table_name: 'trifle_stats', joined_identifier: :full, ping_table_name: nil) # rubocop:disable Metrics/MethodLength
+ ping_table_name ||= "#{table_name}_ping"
+ identifier_mode = normalize_joined_identifier(joined_identifier)
+ quoted_table_name = quote_identifier(table_name)
+ quoted_ping_table_name = quote_identifier(ping_table_name)
+
+ case identifier_mode
+ when :full
+ client.query(<<~SQL)
+ CREATE TABLE IF NOT EXISTS #{quoted_table_name}
+ (`key` VARCHAR(255) PRIMARY KEY, `data` JSON NOT NULL)
+ SQL
+ when :partial
+ client.query(<<~SQL)
+ CREATE TABLE IF NOT EXISTS #{quoted_table_name}
+ (`key` VARCHAR(255) NOT NULL, `at` DATETIME(6) NOT NULL, `data` JSON NOT NULL, PRIMARY KEY (`key`, `at`))
+ SQL
+ else
+ client.query(<<~SQL)
+ CREATE TABLE IF NOT EXISTS #{quoted_table_name}
+ (`key` VARCHAR(255) NOT NULL, `granularity` VARCHAR(255) NOT NULL, `at` DATETIME(6) NOT NULL, `data` JSON NOT NULL, PRIMARY KEY (`key`, `granularity`, `at`))
+ SQL
+ client.query(<<~SQL)
+ CREATE TABLE IF NOT EXISTS #{quoted_ping_table_name}
+ (`key` VARCHAR(255) PRIMARY KEY, `at` DATETIME(6) NOT NULL, `data` JSON NOT NULL)
+ SQL
+ end
+ end
+
+ def description
+ mode = if @joined_identifier == :full
+ 'J'
+ else
+ @joined_identifier == :partial ? 'P' : 'S'
+ end
+ "#{self.class.name}(#{mode})"
+ end
+
+ attr_reader :separator
+
+ def system_identifier_for(key:)
+ key = Nocturnal::Key.new(key: '__system__key__', granularity: key.granularity, at: key.at)
+ identifier_for(key)
+ end
+
+ def system_data_for(key:, count: 1, tracking_key: nil)
+ tracking_key ||= key.key
+ self.class.pack(hash: { count: count, keys: { tracking_key => count } })
+ end
+
+ def inc(keys:, values:, count: 1, tracking_key: nil)
+ data = self.class.pack(hash: values)
+ with_transaction(client) do |connection|
+ keys.each do |key|
+ identifier = identifier_for(key)
+ query, args = inc_query(identifier: identifier, data: data)
+ execute_prepared(connection, query, args)
+ track_system_data(connection, key, count, tracking_key)
+ end
+ end
+ end
+
+ def set(keys:, values:, count: 1, tracking_key: nil)
+ data = self.class.pack(hash: values)
+ with_transaction(client) do |connection|
+ keys.each do |key|
+ identifier = identifier_for(key)
+ query, args = set_query(identifier: identifier, data: data)
+ execute_prepared(connection, query, args)
+ track_system_data(connection, key, count, tracking_key)
+ end
+ end
+ end
+
+ def get(keys:)
+ keys.map do |key|
+ identifier = identifier_for(key)
+ self.class.unpack(hash: fetch_packed_data(identifier))
+ end
+ end
+
+ def ping(key:, values:)
+ return [] if @joined_identifier
+
+ data = self.class.pack(hash: { data: values, at: key.at })
+ query, args = ping_query(key: key.key, at: key.at, data: data)
+
+ with_transaction(client) do |connection|
+ execute_prepared(connection, query, args)
+ end
+ end
+
+ # rubocop:disable Metrics/MethodLength
+ def scan(key:)
+ return [] if @joined_identifier
+
+ query = <<~SQL
+ SELECT `at`, CAST(`data` AS CHAR) AS data
+ FROM #{self.class.quote_identifier(ping_table_name)}
+ WHERE `key` = ?
+ ORDER BY `at` DESC
+ LIMIT 1
+ SQL
+ result = execute_prepared(client, query, [key.key]).first
+ return [] if result.nil?
+
+ [parse_time_value(result['at']), self.class.unpack(hash: JSON.parse(result['data']))]
+ rescue JSON::ParserError
+ []
+ end
+ # rubocop:enable Metrics/MethodLength
+
+ def self.normalize_joined_identifier(value)
+ case value
+ when nil, :full, 'full', :partial, 'partial'
+ value.nil? ? nil : value.to_sym
+ else
+ raise ArgumentError, 'joined_identifier must be nil, :full, "full", :partial, or "partial"'
+ end
+ end
+
+ def self.quote_identifier(identifier)
+ "`#{identifier.to_s.gsub('`', '``')}`"
+ end
+
+ private
+
+ def track_system_data(connection, key, count, tracking_key)
+ return unless @system_tracking
+
+ query, args = inc_query(
+ identifier: system_identifier_for(key: key),
+ data: system_data_for(key: key, count: count, tracking_key: tracking_key)
+ )
+ execute_prepared(connection, query, args)
+ end
+
+ # rubocop:disable Metrics/AbcSize, Metrics/MethodLength
+ def fetch_packed_data(identifier)
+ conditions = identifier.keys.map { |column| "#{self.class.quote_identifier(column)} = ?" }.join(' AND ')
+ query = <<~SQL
+ SELECT CAST(`data` AS CHAR) AS data
+ FROM #{self.class.quote_identifier(table_name)}
+ WHERE #{conditions}
+ LIMIT 1
+ SQL
+ packed_data = execute_prepared(client, query, query_values(identifier)).first&.fetch('data', nil)
+ return {} if packed_data.nil? || packed_data.empty?
+
+ JSON.parse(packed_data)
+ rescue JSON::ParserError
+ {}
+ end
+ # rubocop:enable Metrics/AbcSize, Metrics/MethodLength
+
+ def inc_query(identifier:, data:)
+ upsert_query(
+ identifier: identifier,
+ data: data,
+ conflict_data_sql: build_inc_json_set_expression(data),
+ conflict_values: increment_values(data)
+ )
+ end
+
+ def set_query(identifier:, data:)
+ upsert_query(
+ identifier: identifier,
+ data: data,
+ conflict_data_sql: build_set_json_set_expression(data),
+ conflict_values: serialized_set_values(data)
+ )
+ end
+
+ def upsert_query(identifier:, data:, conflict_data_sql:, conflict_values:)
+ columns = identifier.keys
+ columns_sql = columns.map { |column| self.class.quote_identifier(column) }.join(', ')
+ values_sql = (['?'] * columns.size + ['CAST(? AS JSON)']).join(', ')
+
+ query = <<~SQL
+ INSERT INTO #{self.class.quote_identifier(table_name)} (#{columns_sql}, `data`) VALUES (#{values_sql})
+ ON DUPLICATE KEY UPDATE `data` = #{conflict_data_sql}
+ SQL
+
+ [query, query_values(identifier) + [JSON.generate(data)] + conflict_values]
+ end
+
+ def ping_query(key:, at:, data:)
+ query = <<~SQL
+ INSERT INTO #{self.class.quote_identifier(ping_table_name)} (`key`, `at`, `data`) VALUES (?, ?, CAST(? AS JSON))
+ ON DUPLICATE KEY UPDATE `at` = VALUES(`at`), `data` = VALUES(`data`)
+ SQL
+ [query, [key.to_s, format_time_value(at), JSON.generate(data)]]
+ end
+
+ def build_inc_json_set_expression(data)
+ expression = +'JSON_SET(COALESCE(`data`, JSON_OBJECT())'
+ data.each_key do |path_key|
+ path = json_path_for(path_key)
+ expression << ", '#{path}', (COALESCE(CAST(JSON_UNQUOTE(JSON_EXTRACT(COALESCE(`data`, JSON_OBJECT()), '#{path}')) AS DECIMAL(65,10)), 0) + CAST(? AS DECIMAL(65,10)))" # rubocop:disable Layout/LineLength
+ end
+ expression << ')'
+ expression
+ end
+
+ def build_set_json_set_expression(data)
+ expression = +'JSON_SET(COALESCE(`data`, JSON_OBJECT())'
+ data.each_key do |path_key|
+ path = json_path_for(path_key)
+ expression << ", '#{path}', CAST(? AS JSON)"
+ end
+ expression << ')'
+ expression
+ end
+
+ def query_values(identifier)
+ identifier.values.map { |value| normalize_query_value(value) }
+ end
+
+ def increment_values(data)
+ data.map do |key, value|
+ next value if value.is_a?(Numeric)
+
+ raise ArgumentError, "increment requires numeric value for key #{key.inspect}"
+ end
+ end
+
+ def serialized_set_values(data)
+ data.values.map { |value| JSON.generate(value) }
+ end
+
+ def normalize_query_value(value)
+ return format_time_value(value) if value.is_a?(Time)
+
+ value
+ end
+
+ def format_time_value(value)
+ parse_time_value(value).utc.strftime('%Y-%m-%d %H:%M:%S.%6N')
+ end
+
+ def parse_time_value(value)
+ case value
+ when Time
+ value
+ when String
+ Time.parse(value)
+ when DateTime
+ value.to_time
+ else
+ raise ArgumentError, "unsupported time value: #{value.inspect}"
+ end
+ end
+
+ def json_path_for(key)
+ escaped = key.to_s.gsub('\\', '\\\\').gsub('"', '\"').gsub("'", "''")
+ "$.\"#{escaped}\""
+ end
+
+ def with_transaction(connection)
+ connection.query('START TRANSACTION')
+ result = yield(connection)
+ connection.query('COMMIT')
+ result
+ rescue StandardError
+ connection.query('ROLLBACK')
+ raise
+ end
+
+ def execute_prepared(connection, query, args = [])
+ statement = connection.prepare(query)
+ result = statement.execute(*args)
+ result.respond_to?(:to_a) ? result.to_a : result
+ ensure
+ statement&.close
+ end
+
+ def identifier_for(key)
+ key.identifier(separator, @joined_identifier)
+ end
+ end
+ end
+ end
+ end
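Two helpers in the new MySQL driver above are pure string functions and easy to sanity-check standalone: backtick-quoting identifiers (with embedded backticks doubled) and building double-quoted JSON member paths (with single quotes doubled for embedding in a SQL string literal). Copied here as bare functions for illustration:

```ruby
# MySQL identifier quoting: wrap in backticks, double any embedded
# backtick so the identifier cannot break out of the quoting.
def quote_identifier(identifier)
  "`#{identifier.to_s.gsub('`', '``')}`"
end

# JSON path for a member name: quote the member so keys containing
# dots or quotes stay a single path step.
def json_path_for(key)
  escaped = key.to_s.gsub('\\', '\\\\').gsub('"', '\"').gsub("'", "''")
  "$.\"#{escaped}\""
end

quote_identifier('weird`name')  # => "`weird``name`"
json_path_for('count')          # => '$."count"'
```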
@@ -51,17 +51,18 @@ module Trifle
  identifier_for(key)
  end
 
- def system_data_for(key:, count: 1)
- self.class.pack(hash: { count: count, keys: { key.key => count } })
+ def system_data_for(key:, count: 1, tracking_key: nil)
+ tracking_key ||= key.key
+ self.class.pack(hash: { count: count, keys: { tracking_key => count } })
  end
 
- def inc(keys:, values:, count: 1)
+ def inc(keys:, values:, count: 1, tracking_key: nil)
  data = self.class.pack(hash: values)
  client.transaction do |c|
  keys.map do |key|
  identifier = identifier_for(key)
  c.exec(inc_query(identifier: identifier, data: data))
- c.exec(inc_query(identifier: system_identifier_for(key: key), data: system_data_for(key: key, count: count))) if @system_tracking # rubocop:disable Layout/LineLength
+ track_system_data(c, key, count, tracking_key)
  end
  end
  end
@@ -78,17 +79,26 @@ module Trifle
  SQL
  end
 
- def set(keys:, values:, count: 1)
+ def set(keys:, values:, count: 1, tracking_key: nil)
  data = self.class.pack(hash: values)
  client.transaction do |c|
  keys.map do |key|
  identifier = identifier_for(key)
  c.exec(set_query(identifier: identifier, data: data))
- c.exec(inc_query(identifier: system_identifier_for(key: key), data: system_data_for(key: key, count: count))) if @system_tracking # rubocop:disable Layout/LineLength
+ track_system_data(c, key, count, tracking_key)
  end
  end
  end
 
+ def track_system_data(connection, key, count, tracking_key)
+ return unless @system_tracking
+
+ system_data = system_data_for(key: key, count: count, tracking_key: tracking_key)
+ connection.exec(
+ inc_query(identifier: system_identifier_for(key: key), data: system_data)
+ )
+ end
+
  def set_query(identifier:, data:)
  columns = identifier.keys.join(', ')
  values = identifier.values.map { |v| format_value(v) }.join(', ')
@@ -16,7 +16,7 @@ module Trifle
  "#{self.class.name}(J)"
  end
 
- def inc(keys:, values:, count: 1) # rubocop:disable Lint/UnusedMethodArgument
+ def inc(keys:, values:, count: 1, tracking_key: nil) # rubocop:disable Lint/UnusedMethodArgument
  keys.map do |key|
  self.class.pack(hash: values).each do |k, c|
  d = @data.fetch(key.join(@separator), {})
@@ -26,7 +26,7 @@ module Trifle
  end
  end
 
- def set(keys:, values:, count: 1) # rubocop:disable Lint/UnusedMethodArgument
+ def set(keys:, values:, count: 1, tracking_key: nil) # rubocop:disable Lint/UnusedMethodArgument
  keys.map do |key|
  self.class.pack(hash: values).each do |k, c|
  d = @data.fetch(key.join(@separator), {})
@@ -26,11 +26,12 @@ module Trifle
  key.join(separator)
  end
 
- def system_data_for(key:, count: 1)
- self.class.pack(hash: { count: count, keys: { key.key => count } })
+ def system_data_for(key:, count: 1, tracking_key: nil)
+ tracking_key ||= key.key
+ self.class.pack(hash: { count: count, keys: { tracking_key => count } })
  end
 
- def inc(keys:, values:, count: 1) # rubocop:disable Metrics/AbcSize, Metrics/MethodLength
+ def inc(keys:, values:, count: 1, tracking_key: nil) # rubocop:disable Metrics/AbcSize, Metrics/MethodLength
  keys.map do |key|
  key.prefix = prefix
  pkey = key.join(separator)
@@ -41,13 +42,13 @@ module Trifle
  next unless @system_tracking
 
  skey = system_join_for(key: key)
- system_data_for(key: key, count: count).each do |k, c|
+ system_data_for(key: key, count: count, tracking_key: tracking_key).each do |k, c|
  client.hincrby(skey, k, c)
  end
  end
  end
 
- def set(keys:, values:, count: 1)
+ def set(keys:, values:, count: 1, tracking_key: nil)
  keys.map do |key|
  key.prefix = prefix
  pkey = key.join(separator)
@@ -56,7 +57,7 @@ module Trifle
  next unless @system_tracking
 
  skey = system_join_for(key: key)
- system_data_for(key: key, count: count).each do |k, c|
+ system_data_for(key: key, count: count, tracking_key: tracking_key).each do |k, c|
  client.hincrby(skey, k, c)
  end
  end
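Across the drivers, `system_data_for` now accepts a `tracking_key` that replaces the key's own name in the per-key system counters. A minimal sketch of that fallback, using a plain string key and omitting the Packer mixin's `pack` step that the real drivers apply:

```ruby
# Sketch of the new system_data_for fallback: when tracking_key is nil,
# counts are attributed to the key itself; otherwise to the override
# (e.g. '__untracked__'). The real drivers additionally pack this hash.
def system_data_for(key:, count: 1, tracking_key: nil)
  tracking_key ||= key
  { count: count, keys: { tracking_key => count } }
end

system_data_for(key: 'event::logs')
# => { count: 1, keys: { 'event::logs' => 1 } }
system_data_for(key: 'event::logs', count: 3, tracking_key: '__untracked__')
# => { count: 3, keys: { '__untracked__' => 3 } }
```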
@@ -1,6 +1,7 @@
  # frozen_string_literal: true
 
  require 'json'
+ require 'time'
  require_relative '../mixins/packer'
 
  module Trifle
@@ -53,18 +54,18 @@ module Trifle
  identifier_for(key)
  end
 
- def system_data_for(key:, count: 1)
- self.class.pack(hash: { count: count, keys: { key.key => count } })
+ def system_data_for(key:, count: 1, tracking_key: nil)
+ tracking_key ||= key.key
+ self.class.pack(hash: { count: count, keys: { tracking_key => count } })
  end
 
- def inc(keys:, values:, count: 1)
+ def inc(keys:, values:, count: 1, tracking_key: nil)
  data = self.class.pack(hash: values)
  client.transaction do |c|
  keys.each do |key|
  identifier = identifier_for(key)
- # Batch data operations to avoid SQLite parser stack overflow
  batch_data_operations(identifier: identifier, data: data, connection: c, operation: :inc)
- batch_data_operations(identifier: system_identifier_for(key: key), data: system_data_for(key: key, count: count), connection: c, operation: :inc) if @system_tracking # rubocop:disable Layout/LineLength
+ track_system_data(c, key, count, tracking_key)
  end
  end
  end
@@ -81,18 +82,28 @@ module Trifle
  SQL
  end
 
- def set(keys:, values:, count: 1)
+ def set(keys:, values:, count: 1, tracking_key: nil)
  data = self.class.pack(hash: values)
  client.transaction do |c|
  keys.each do |key|
  identifier = identifier_for(key)
- # Batch data operations to avoid SQLite parser stack overflow
  batch_data_operations(identifier: identifier, data: data, connection: c, operation: :set)
- batch_data_operations(identifier: system_identifier_for(key: key), data: system_data_for(key: key, count: count), connection: c, operation: :inc) if @system_tracking # rubocop:disable Layout/LineLength
+ track_system_data(c, key, count, tracking_key)
  end
  end
  end
 
+ def track_system_data(connection, key, count, tracking_key)
+ return unless @system_tracking
+
+ batch_data_operations(
+ identifier: system_identifier_for(key: key),
+ data: system_data_for(key: key, count: count, tracking_key: tracking_key),
+ connection: connection,
+ operation: :inc
+ )
+ end
+
  def set_query(identifier:, data:)
  columns = identifier.keys.join(', ')
  values = identifier.values.map { |v| format_value(v) }.join(', ')
@@ -116,7 +127,7 @@ module Trifle
  sample = identifiers.first
 
  results.each_with_object(Hash.new({})) do |r, o|
- identifier = sample.each_with_index.to_h { |(k, _), i| [k, k == :at ? Time.parse(r[i]) : r[i]] }
+ identifier = sample.each_with_index.to_h { |(k, _), i| [k, k == :at ? Time.iso8601(r[i]) : r[i]] }
 
  o[identifier] = JSON.parse(r.last)
  rescue JSON::ParserError
@@ -145,9 +156,11 @@ module Trifle
  end
 
  def ping_query(key:, at:, data:)
+ at_formatted = format_time_value(at)
+
  <<-SQL
- INSERT INTO #{ping_table_name} (key, at, data) VALUES ('#{key}', '#{at.strftime('%Y-%m-%d %H:%M:%S')}', json('#{data.to_json}'))
- ON CONFLICT (key) DO UPDATE SET at = '#{at.strftime('%Y-%m-%d %H:%M:%S')}', data = json('#{data.to_json}');
+ INSERT INTO #{ping_table_name} (key, at, data) VALUES ('#{key}', '#{at_formatted}', json('#{data.to_json}'))
+ ON CONFLICT (key) DO UPDATE SET at = '#{at_formatted}', data = json('#{data.to_json}');
  SQL
  end
 
@@ -158,7 +171,7 @@ module Trifle
  return [] if result.nil?
 
  # SQLite returns columns in order: key, at, data
- [Time.parse(result[1]), self.class.unpack(hash: JSON.parse(result[2]))]
+ [Time.iso8601(result[1]), self.class.unpack(hash: JSON.parse(result[2]))]
  rescue JSON::ParserError
  []
  end
@@ -193,15 +206,22 @@ module Trifle
  end
 
  def format_value(value)
+ return "'#{format_time_value(value)}'" if value.is_a?(Time) || value.is_a?(DateTime)
+ return value.to_s if value.is_a?(Integer) || value.is_a?(Float)
+
+ "'#{value}'"
+ end
+
+ def format_time_value(value)
  case value
- when String
- "'#{value}'"
  when Time
- "'#{value.strftime('%Y-%m-%d %H:%M:%S')}'"
- when Integer, Float
- value.to_s
+ value.getutc.iso8601
+ when DateTime
+ value.to_time.getutc.iso8601
+ when Integer
+ Time.at(value).getutc.iso8601
  else
- "'#{value}'"
+ Time.iso8601(value.to_s).getutc.iso8601
  end
  end
 
@@ -214,11 +234,9 @@ module Trifle
 
  def build_identifier_key(identifier)
  return identifier[:key] if @joined_identifier == :full
- if @joined_identifier == :partial
- return "#{identifier[:key]}::#{identifier[:at].strftime('%Y-%m-%d %H:%M:%S')}"
- end
+ return "#{identifier[:key]}::#{format_time_value(identifier[:at])}" if @joined_identifier == :partial
 
- "#{identifier[:key]}::#{identifier[:granularity]}::#{identifier[:at].strftime('%Y-%m-%d %H:%M:%S')}"
+ "#{identifier[:key]}::#{identifier[:granularity]}::#{format_time_value(identifier[:at])}"
  end
 
  def identifier_for(key)
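The SQLite driver now normalizes every timestamp to an ISO 8601 UTC string before writing, which is why the read paths switched from `Time.parse` to the stricter `Time.iso8601`. A standalone sketch of that normalization (mirroring the driver's `format_time_value`; the epoch-seconds reading of Integer input is the driver's convention):

```ruby
require 'time'
require 'date'

# Normalize supported inputs to an ISO 8601 UTC string, so values
# written by ping_query round-trip through Time.iso8601 on read.
def format_time_value(value)
  case value
  when Time     then value.getutc.iso8601
  when DateTime then value.to_time.getutc.iso8601
  when Integer  then Time.at(value).getutc.iso8601 # epoch seconds
  else Time.iso8601(value.to_s).getutc.iso8601
  end
end

format_time_value(Time.utc(2024, 6, 1, 12, 30, 0))
# => "2024-06-01T12:30:00Z"
format_time_value(0)
# => "1970-01-01T00:00:00Z"
```

Storing a single canonical format also makes the `::`-joined identifier keys stable across timezones, since every `at` component is rendered in UTC.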
@@ -12,6 +12,7 @@ module Trifle
  @at = keywords.fetch(:at)
  @values = keywords.fetch(:values)
  @config = keywords[:config]
+ @untracked = keywords.fetch(:untracked, false)
  end
 
  def config
@@ -25,10 +26,20 @@ module Trifle
  end
 
  def perform
- config.storage.inc(
+ payload = {
  keys: config.granularities.map { |granularity| key_for(granularity: granularity) },
  values: values
- )
+ }
+
+ if tracking_key
+ config.storage.inc(**payload.merge(tracking_key: tracking_key))
+ else
+ config.storage.inc(**payload)
+ end
+ end
+
+ def tracking_key
+ @untracked ? '__untracked__' : nil
  end
  end
  end
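The conditional branch in `perform` is what keeps this change backward compatible: `tracking_key` is only merged into the keyword splat when it is set, so storage drivers that never learned the new keyword keep working. A standalone sketch with a hypothetical legacy driver:

```ruby
# Hypothetical driver that predates the tracking_key keyword.
class LegacyStorage
  def inc(keys:, values:)
    { keys: keys, values: values }
  end
end

# Mirrors Increment#perform: merge tracking_key into the keyword
# splat only when present, so older drivers never see the new keyword.
def perform_inc(storage, keys:, values:, tracking_key: nil)
  payload = { keys: keys, values: values }
  if tracking_key
    storage.inc(**payload.merge(tracking_key: tracking_key))
  else
    storage.inc(**payload)
  end
end

perform_inc(LegacyStorage.new, keys: ['a::b'], values: { count: 1 })
# => { keys: ['a::b'], values: { count: 1 } }
```

Passing `tracking_key:` unconditionally would raise `ArgumentError: unknown keyword` against such a driver, which is exactly what the branch avoids.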
@@ -12,6 +12,7 @@ module Trifle
  @at = keywords.fetch(:at)
  @values = keywords.fetch(:values)
  @config = keywords[:config]
+ @untracked = keywords.fetch(:untracked, false)
  end
 
  def config
@@ -25,10 +26,20 @@ module Trifle
  end
 
  def perform
- config.storage.set(
+ payload = {
  keys: config.granularities.map { |granularity| key_for(granularity: granularity) },
  values: values
- )
+ }
+
+ if tracking_key
+ config.storage.set(**payload.merge(tracking_key: tracking_key))
+ else
+ config.storage.set(**payload)
+ end
+ end
+
+ def tracking_key
+ @untracked ? '__untracked__' : nil
  end
  end
  end
@@ -2,6 +2,6 @@
 
  module Trifle
  module Stats
- VERSION = '2.3.0'
+ VERSION = '2.4.0'
  end
  end
data/lib/trifle/stats.rb CHANGED
@@ -14,6 +14,7 @@ require 'trifle/stats/designator/custom'
  require 'trifle/stats/designator/geometric'
  require 'trifle/stats/designator/linear'
  require 'trifle/stats/driver/mongo'
+ require 'trifle/stats/driver/mysql'
  require 'trifle/stats/driver/postgres'
  require 'trifle/stats/driver/process'
  require 'trifle/stats/driver/redis'
@@ -54,21 +55,23 @@ module Trifle
  default
  end
 
- def self.track(key:, at:, values:, config: nil)
+ def self.track(key:, at:, values:, config: nil, untracked: false)
  Trifle::Stats::Operations::Timeseries::Increment.new(
  key: key,
  at: at,
  values: values,
- config: config
+ config: config,
+ untracked: untracked
  ).perform
  end
 
- def self.assert(key:, at:, values:, config: nil)
+ def self.assert(key:, at:, values:, config: nil, untracked: false)
  Trifle::Stats::Operations::Timeseries::Set.new(
  key: key,
  at: at,
  values: values,
- config: config
+ config: config,
+ untracked: untracked
  ).perform
  end
 
data/trifle-stats.gemspec CHANGED
@@ -6,19 +6,18 @@ Gem::Specification.new do |spec|
  spec.authors = ['Jozef Vaclavik']
  spec.email = ['jozef@hey.com']
 
- spec.summary = 'Simple analytics backed by Redis, Postgres, MongoDB, '\
- 'Google Analytics, Segment, or whatever.'
- spec.description = 'Trifle::Stats is a way too simple timeline analytics '\
- 'that helps you track custom metrics. Automatically '\
- 'increments counters for each enabled range. '\
- 'It supports timezones and different week beginning.'
+ spec.summary = 'Time-series metrics for Ruby backed by Postgres, Redis, MongoDB, MySQL, or SQLite.'
+ spec.description = 'Track custom business metrics using your existing database. '\
+ 'One call to record nested, multi-dimensional counters with '\
+ 'automatic rollup across configurable time granularities.'
  spec.homepage = 'https://trifle.io'
  spec.licenses = ['MIT']
  spec.required_ruby_version = Gem::Requirement.new('>= 2.6')
 
  spec.metadata['homepage_uri'] = spec.homepage
  spec.metadata['source_code_uri'] = 'https://github.com/trifle-io/trifle-stats'
- spec.metadata['changelog_uri'] = 'https://trifle.io/trifle-stats/changelog'
+ spec.metadata['changelog_uri'] = 'https://docs.trifle.io/trifle-stats-rb/changelog'
+ spec.metadata['documentation_uri'] = 'https://docs.trifle.io/trifle-stats-rb'
 
  # Specify which files should be added to the gem when it is released.
  # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
@@ -33,6 +32,7 @@ Gem::Specification.new do |spec|
  spec.add_development_dependency('byebug', '>= 0')
  spec.add_development_dependency('dotenv')
  spec.add_development_dependency('mongo', '>= 2.14.0')
+ spec.add_development_dependency('mysql2', '>= 0.5.5')
  spec.add_development_dependency('sqlite3', '>= 1.4.4')
  spec.add_development_dependency('pg', '>= 1.2')
  spec.add_development_dependency('rake', '~> 13.0')
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: trifle-stats
  version: !ruby/object:Gem::Version
- version: 2.3.0
+ version: 2.4.0
  platform: ruby
  authors:
  - Jozef Vaclavik
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2026-01-23 00:00:00.000000000 Z
+ date: 2026-02-25 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: bundler
@@ -66,6 +66,20 @@ dependencies:
  - - ">="
  - !ruby/object:Gem::Version
  version: 2.14.0
+ - !ruby/object:Gem::Dependency
+ name: mysql2
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 0.5.5
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 0.5.5
  - !ruby/object:Gem::Dependency
  name: sqlite3
  requirement: !ruby/object:Gem::Requirement
@@ -164,9 +178,9 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '2.0'
- description: Trifle::Stats is a way too simple timeline analytics that helps you track
- custom metrics. Automatically increments counters for each enabled range. It supports
- timezones and different week beginning.
+ description: Track custom business metrics using your existing database. One call
+ to record nested, multi-dimensional counters with automatic rollup across configurable
+ time granularities.
  email:
  - jozef@hey.com
  executables: []
@@ -212,6 +226,7 @@ files:
  - lib/trifle/stats/designator/linear.rb
  - lib/trifle/stats/driver/README.md
  - lib/trifle/stats/driver/mongo.rb
+ - lib/trifle/stats/driver/mysql.rb
  - lib/trifle/stats/driver/postgres.rb
  - lib/trifle/stats/driver/process.rb
  - lib/trifle/stats/driver/redis.rb
@@ -248,7 +263,8 @@ licenses:
  metadata:
  homepage_uri: https://trifle.io
  source_code_uri: https://github.com/trifle-io/trifle-stats
- changelog_uri: https://trifle.io/trifle-stats/changelog
+ changelog_uri: https://docs.trifle.io/trifle-stats-rb/changelog
+ documentation_uri: https://docs.trifle.io/trifle-stats-rb
  post_install_message:
  rdoc_options: []
  require_paths:
@@ -267,6 +283,6 @@ requirements: []
  rubygems_version: 3.3.3
  signing_key:
  specification_version: 4
- summary: Simple analytics backed by Redis, Postgres, MongoDB, Google Analytics, Segment,
- or whatever.
+ summary: Time-series metrics for Ruby backed by Postgres, Redis, MongoDB, MySQL, or
+ SQLite.
  test_files: []