event_meter 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/LICENSE.txt +21 -0
- data/README.md +1081 -0
- data/exe/event_meter +5 -0
- data/lib/event_meter/auto_cleanup.rb +93 -0
- data/lib/event_meter/cli.rb +124 -0
- data/lib/event_meter/configuration.rb +244 -0
- data/lib/event_meter/errors.rb +9 -0
- data/lib/event_meter/event.rb +180 -0
- data/lib/event_meter/event_payload.rb +103 -0
- data/lib/event_meter/hash_input.rb +20 -0
- data/lib/event_meter/index_key.rb +19 -0
- data/lib/event_meter/keys.rb +63 -0
- data/lib/event_meter/path_name.rb +37 -0
- data/lib/event_meter/processor.rb +305 -0
- data/lib/event_meter/rails.rb +79 -0
- data/lib/event_meter/report_definition.rb +184 -0
- data/lib/event_meter/reports.rb +143 -0
- data/lib/event_meter/rollup.rb +148 -0
- data/lib/event_meter/stores/cleanup_helpers.rb +76 -0
- data/lib/event_meter/stores/file_helpers.rb +47 -0
- data/lib/event_meter/stores/lock_refresher.rb +75 -0
- data/lib/event_meter/stores/namespace.rb +14 -0
- data/lib/event_meter/stores/redis_lock.rb +77 -0
- data/lib/event_meter/stores/rollup/active_record_postgres.rb +135 -0
- data/lib/event_meter/stores/rollup/file.rb +736 -0
- data/lib/event_meter/stores/rollup/postgres.rb +813 -0
- data/lib/event_meter/stores/rollup/redis.rb +349 -0
- data/lib/event_meter/stores/stream/file.rb +98 -0
- data/lib/event_meter/stores/stream/redis.rb +79 -0
- data/lib/event_meter/time_buckets.rb +56 -0
- data/lib/event_meter/version.rb +3 -0
- data/lib/event_meter/write_result.rb +26 -0
- data/lib/event_meter.rb +150 -0
- data/lib/generators/event_meter/install_generator.rb +57 -0
- data/lib/generators/event_meter/templates/create_event_meter_tables.rb.erb +12 -0
- data/lib/generators/event_meter/templates/event_meter.rb.erb +12 -0
- metadata +156 -0
data/README.md
ADDED
@@ -0,0 +1,1081 @@
# EventMeter

EventMeter gives Ruby applications a small event-based runtime metrics layer.

It records application events, processes them into storage-backed rollups, and
answers operational questions:

- How many times did this run?
- How fast is it running?
- How long did each run take?
- How often does the same thing run again?
- What changed between two time windows?

EventMeter does not send data to an external service. Your app chooses the
storage, and the metrics stay inside your infrastructure.

## Install

```ruby
gem "event_meter"
```

```ruby
require "event_meter"
```

EventMeter does not force Redis or PostgreSQL into every install. Add the store
client your app chooses:

```ruby
gem "redis" # for Redis stream or rollup storage
gem "pg"    # for PostgreSQL rollup storage
```

### Rails With File Stream And PostgreSQL Rollups

For Rails apps, the lowest-friction production setup is usually:

- file stream storage for fast local event writes
- PostgreSQL rollup storage for durable, shared reports

Add the gem:

```ruby
gem "event_meter"
```

Generate the migration and initializer:

```sh
bin/rails generate event_meter:install \
  --namespace billing_app:event_meter:v1 \
  --table-prefix event_meter

bin/rails db:migrate
```

The generator creates three PostgreSQL tables:

- `event_meter_rollups`
- `event_meter_strings`
- `event_meter_processed_entries`

It also creates `config/initializers/event_meter.rb`:

```ruby
require "event_meter/rails"

EventMeter::Rails.configure(
  namespace: "billing_app:event_meter:v1",
  stream_storage: :file,
  stream_path: Rails.root.join("tmp/event_meter/billing_app-event_meter-v1").to_s,
  rollup_storage: :postgres,
  table_prefix: "event_meter",
  stream_sync: :flush,
  auto_cleanup_history: true,
  logger: Rails.logger
)
```

If you prefer to keep settings in `config/application.rb` or environment files:

```ruby
# config/application.rb
config.event_meter = ActiveSupport::OrderedOptions.new
config.event_meter.namespace = "billing_app:event_meter:v1"
config.event_meter.stream_storage = :file
config.event_meter.stream_path = Rails.root.join("tmp/event_meter/billing_app-event_meter-v1").to_s
config.event_meter.rollup_storage = :postgres
config.event_meter.table_prefix = "event_meter"
config.event_meter.stream_sync = :flush
config.event_meter.auto_cleanup_history = true
config.event_meter.cleanup_history_retention = 31.days.to_i
config.event_meter.cleanup_history_interval = 1.hour.to_i
config.event_meter.summary_key_limit = 10_000
```

```ruby
# config/initializers/event_meter.rb
require "event_meter/rails"

settings = Rails.application.config.event_meter

EventMeter::Rails.configure(
  namespace: settings.namespace,
  stream_storage: settings.stream_storage,
  stream_path: settings.stream_path,
  rollup_storage: settings.rollup_storage,
  table_prefix: settings.table_prefix,
  stream_sync: settings.stream_sync,
  auto_cleanup_history: settings.auto_cleanup_history,
  cleanup_history_retention: settings.cleanup_history_retention,
  cleanup_history_interval: settings.cleanup_history_interval,
  summary_key_limit: settings.summary_key_limit,
  logger: Rails.logger
)
```

After boot, every Rails process shares the same EventMeter configuration.
Instrumentation can call `EventMeter.start(...)`, scheduled jobs can call
`EventMeter.process_pending(...)`, and consoles can call `EventMeter.summary(...)`
directly.

## Quick Start

Configure storage:

```ruby
EventMeter.configure do |config|
  config.namespace = "billing_app:event_meter:v1"
  config.redis = -> { Redis.new }
  config.auto_cleanup_history = true
end
```

Record one unit of work:

```ruby
event = EventMeter.start("invoice_delivery",
  customer_id: 42,
  provider: "postmark",
  queue: "mailers",
  worker_id: "mail-1"
)

deliver_invoice
result = event.success(message_id: "msg_123")

warn "EventMeter write failed: #{result.error.message}" if result.error?
```

Process pending events into a versioned report:

```ruby
EventMeter.process_pending("invoice_delivery", version: 1) do |report|
  report.index_by(:customer_id)
  report.index_by(:provider)
  report.index_by(:provider, :queue)
end
```

When `auto_cleanup_history` is enabled, `process_pending` also cleans old
rollups, interval state, and leftover processed-entry markers on a
storage-backed interval. Reads such as `summary` and `series` stay read-only.

Read the report:

```ruby
EventMeter.summary("invoice_delivery",
  version: 1,
  from: Time.now.utc - 3600,
  to: Time.now.utc,
  by: { provider: "postmark" }
)
```

Example output:

```ruby
{
  count: 1260,
  success_count: 1254,
  failure_count: 6,
  skipped_count: 0,
  started_at_min: "2026-05-06T10:00:02.000000Z",
  started_at_max: "2026-05-06T10:59:58.000000Z",
  rate_window_seconds: 3600.0,
  per_second: 0.35,
  per_minute: 21.0,
  duration_ms_count: 1260,
  duration_ms_sum: 970_200,
  duration_ms_avg: 770.0,
  duration_ms_min: 42,
  duration_ms_max: 8_910,
  interval_ms_count: 0,
  interval_ms_sum: 0
}
```
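
The rate fields in this output follow from plain arithmetic over the window, which can be checked directly (ordinary Ruby, not EventMeter code):

```ruby
# 1260 events over a 3600-second rate window, as in the output above.
count = 1260
rate_window_seconds = 3600.0

per_second = count / rate_window_seconds # => 0.35
per_minute = per_second * 60             # => 21.0
```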

## The Model

EventMeter has two phases.

Recording writes raw events:

```ruby
EventMeter.start("invoice_delivery", provider: "postmark").success
```

Processing defines the report shape and moves pending events into rollups:

```ruby
EventMeter.process_pending("invoice_delivery", version: 1) do |report|
  report.index_by(:provider)
end
```

Reports read processed rollups only:

```ruby
EventMeter.summary("invoice_delivery", version: 1, by: { provider: "postmark" })
```

The event name passed to `EventMeter.start` and `EventMeter.process_pending`
must match. If you record `"invoice_delivery"`, process `"invoice_delivery"`.

### Params

Everything your app knows about the event is a param:

```ruby
EventMeter.start("feed_refresh",
  feed_id: 77,
  provider: "shopify",
  queue: "feeds"
)
```

EventMeter does not know what a feed, customer, queue, or worker is. They are
just params that can be indexed when you need reports by them.

### Report Versions

The report version is part of the processed data shape:

```ruby
EventMeter.process_pending("feed_refresh", version: 1) do |report|
  report.index_by(:provider)
end
```

EventMeter stores the report definition metadata when processing starts. If you
change the indexes or interval rules for the same event and version, processing
raises `EventMeter::DefinitionChangedError`. Bump the version when you change
the report shape:

```ruby
EventMeter.process_pending("feed_refresh", version: 2) do |report|
  report.index_by(:provider)
  report.index_by(:provider, :queue)
end
```

Reports also take the version:

```ruby
EventMeter.summary("feed_refresh", version: 2, by: { provider: "shopify" })
```

### Indexes

An index says: "I want to query reports by this exact shape."

```ruby
EventMeter.process_pending("invoice_delivery", version: 1) do |report|
  report.index_by(:provider)
  report.index_by(:provider, :queue)
end
```

This supports:

```ruby
EventMeter.summary("invoice_delivery", version: 1)
EventMeter.summary("invoice_delivery", version: 1, by: { provider: "postmark" })
EventMeter.summary("invoice_delivery", version: 1, by: { provider: "postmark", queue: "mailers" })
```

This is not supported until you add `report.index_by(:queue)`:

```ruby
EventMeter.summary("invoice_delivery", version: 1, by: { queue: "mailers" })
```

Unsupported filters raise `EventMeter::UnsupportedQueryError`. EventMeter does
not scan raw events to answer new filters later. If a question matters, index it
before you need that report.

Every report automatically has the all-events index, so `by: {}` works without
custom indexes.

Indexed `by:` values cannot be `nil`. Events with a `nil` value simply do not
write that specific indexed rollup.

### Intervals

Intervals answer: "How long since this same thing ran before?"

Use `measure_interval_by` with the param that identifies the repeated thing:

```ruby
EventMeter.process_pending("feed_refresh", version: 1) do |report|
  report.index_by(:provider)

  report.measure_interval_by(:feed_id)
  report.measure_interval_by(:feed_id, group_by: :provider)
end
```

Read this as:

```text
Use feed_id to recognize the same feed running again.
Store one interval report for all feeds.
Store another interval report grouped by provider.
```

If feed `77` starts at `10:00` and then starts again at `10:05`, EventMeter
records one interval sample:

```text
10:05 - 10:00 = 300_000 ms
```
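
The same subtraction in plain Ruby (an illustration of the arithmetic only; EventMeter derives this internally from `started_at` values):

```ruby
previous_start = Time.utc(2026, 5, 6, 10, 0) # feed 77 first start
current_start  = Time.utc(2026, 5, 6, 10, 5) # feed 77 next start

# Time subtraction yields seconds; the interval sample is stored in ms.
interval_ms = ((current_start - previous_start) * 1000).to_i # => 300_000
```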

The grouped report includes interval fields:

```ruby
EventMeter.summary("feed_refresh", version: 1, by: { provider: "shopify" })
```

```ruby
{
  count: 2,
  interval_ms_count: 1,
  interval_ms_sum: 300_000,
  interval_ms_avg: 300_000.0,
  interval_ms_min: 300_000,
  interval_ms_max: 300_000
}
```

The identity param, `feed_id` here, does not have to be indexed. It only has to
exist in event params so EventMeter can remember the previous start time for
that value.

EventMeter uses `started_at` for interval math. Late events still count in their
original time buckets. For interval state, EventMeter only moves forward; older
timestamps do not rewrite previous interval samples.

`measure_interval_by(..., group_by: ...)` automatically creates the matching
group index. You can still write `index_by` explicitly when it makes the report
definition easier to read.

## Lifecycle API

Start an event:

```ruby
event = EventMeter.start("invoice_delivery",
  customer_id: 42,
  provider: "postmark"
)
```

Finish it once:

```ruby
event.success(message_id: "msg_123")
event.skip("disabled")
event.failure(error)
```

The lifecycle API sets EventMeter-owned fields for you:

```ruby
{
  "name" => "invoice_delivery",
  "status" => "success",
  "started_at" => "2026-05-06T10:15:20.000000Z",
  "duration_ms" => 734,
  "params" => {
    "customer_id" => 42,
    "provider" => "postmark",
    "message_id" => "msg_123"
  }
}
```

`name`, `status`, `started_at`, and `duration_ms` belong to EventMeter.
Application data goes under `params`, so your app can still pass params named
`status` or `duration_ms` without colliding with EventMeter fields.

`success`, `skip`, and `failure` return `EventMeter::WriteResult`:

```ruby
result.recorded? # true when the event reached stream storage
result.error?    # true when EventMeter could not write the event
result.payload   # event payload EventMeter tried to write
result.error     # config, validation, or storage error
```

The write path is designed for production instrumentation. `start`, `success`,
`skip`, and `failure` do not raise because storage is unavailable,
misconfigured, or temporarily broken.

```ruby
event = EventMeter.start("invoice_delivery", Object.new)
warn event.error.message if event.error?

result = event.success
warn result.error.message if result.error?
```

An event object records at most once. A second finish call returns a failed
`WriteResult` with `EventMeter::AlreadyRecordedError`.

## Processing

Recording only appends to stream storage. Reports do not process pending events
for you.

Run processing from a scheduler, worker, cron, console task, or admin action:

```ruby
result = EventMeter.process_pending("invoice_delivery", version: 1) do |report|
  report.index_by(:customer_id)
  report.index_by(:provider)
  report.measure_interval_by(:customer_id, group_by: :provider)
end

result.to_h
```

Example output:

```ruby
{
  event_name: "invoice_delivery",
  version: 1,
  processed: 842,
  skipped_already_processed: 0,
  malformed: 0,
  complete: true,
  locked: false
}
```

If another processor is holding the interval lock, `locked` is `true` and the
stream rows are left in place for a later pass.

If rollups were written but stream deletion did not finish, `complete` is
`false`. The next processing pass can read those rows again, but processed-entry
guards prevent double counting. When stream deletion later succeeds, those
guards are removed too.

## Reports

All report methods require the same event name and version that you used while
processing.

### Summary

Use `summary` for headline numbers:

```ruby
EventMeter.summary("invoice_delivery",
  version: 1,
  from: Time.utc(2026, 5, 6, 10, 0),
  to: Time.utc(2026, 5, 6, 11, 0),
  by: { provider: "postmark" }
)
```

Pass both `from:` and `to:`, or neither. With a time window, EventMeter reads
minute buckets and rates use the included bucket span.

Without a time window, EventMeter reads retained hour rollups and uses the
observed event span for rate fields:

```ruby
EventMeter.summary("invoice_delivery", version: 1, by: { provider: "postmark" })
```

No-window summaries are capped by `config.summary_key_limit`, which defaults to
10,000 retained hour buckets per query. Pass `from:` and `to:` for large report
windows, or set `summary_key_limit = nil` if your storage can safely support
unbounded retained-summary reads.

### Series

Use `series` for charts:

```ruby
EventMeter.series("invoice_delivery",
  version: 1,
  from: Time.utc(2026, 5, 6, 10, 0),
  to: Time.utc(2026, 5, 6, 10, 3),
  every: :minute,
  by: { provider: "postmark" }
)
```

Example output:

```ruby
[
  {
    bucket: "2026-05-06T10:00:00Z",
    count: 21,
    success_count: 21,
    failure_count: 0,
    skipped_count: 0,
    duration_ms_avg: 720.0,
    per_minute: 21.0
  }
]
```

By default, `series` returns the last 60 minute buckets ending at the current
minute bucket:

```ruby
EventMeter.series("invoice_delivery", version: 1)
```

### Compare

Use `compare` to read the same summary for two windows:

```ruby
EventMeter.compare("invoice_delivery",
  version: 1,
  before: Time.utc(2026, 5, 6, 9, 0)...Time.utc(2026, 5, 6, 10, 0),
  after: Time.utc(2026, 5, 6, 10, 0)...Time.utc(2026, 5, 6, 11, 0),
  by: { provider: "postmark" }
)
```

### Definition Metadata

Inspect the stored report definition:

```ruby
EventMeter.report_definition("invoice_delivery", version: 1)
```

This is useful when you need to confirm which indexes and interval rules a
processed report is using.

## Storage

EventMeter separates raw pending events from processed reports:

```text
stream_storage -> pending input buffer
rollup_storage -> processed report data
```

You can use one backend for both or mix backends.

For busy applications, a good default shape is:

```text
file stream on fast local disk -> Redis or PostgreSQL rollups
```

That keeps event recording cheap and lets report storage be chosen for how you
want to query and retain data.

### Redis For Both

```ruby
EventMeter.configure do |config|
  config.namespace = "billing_app:event_meter:v1"
  config.redis = -> { Redis.new }
end
```

When `config.redis` is a factory, EventMeter creates separate Redis clients for
data commands and refreshed process locks. That keeps long processing locks from
sharing one connection with report writes.

### File Stream And Redis Rollups

```ruby
EventMeter.configure do |config|
  config.namespace = "billing_app:event_meter:v1"

  config.stream_storage = EventMeter::Stores::Stream::File.new(
    path: "/tmp/event_meter/billing-app",
    sync: :flush
  )

  config.rollup_storage = EventMeter::Stores::Rollup::Redis.new(
    redis: Redis.new,
    namespace: config.namespace
  )
end
```

### File Stream And File Rollups

```ruby
EventMeter.configure do |config|
  config.namespace = "billing_app:event_meter:v1"

  config.stream_storage = EventMeter::Stores::Stream::File.new(
    path: "/tmp/event_meter/billing-app",
    sync: :flush
  )

  config.rollup_storage = EventMeter::Stores::Rollup::File.new(
    path: "/tmp/event_meter/billing-app"
  )
end
```

File stream storage writes one folder per event name:

```text
/tmp/event_meter/billing-app/
  streams/
    invoice_delivery-f2a49ce1aca9419c/
      logs/
      processing/
      quarantine/
      claim_locks/
```

File rollup storage writes one folder per namespace, event name, and version:

```text
/tmp/event_meter/billing-app/
  rollups/
    billing_app-event_meter-v1-ff44ee2453002022/
      invoice_delivery-f2a49ce1aca9419c/
        v1/
          definition.json
          process.lock
          hashes/
            minute/
              202605061015.json
            hour/
              2026050610.json
          strings/
            shards/
              cd.json
          processed/
            202605061015-host-1234-abcd.jsonl.processed.json
```
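
The bucket file names above look like UTC `strftime` patterns, `%Y%m%d%H%M` for minute buckets and `%Y%m%d%H` for hour buckets. This is an inference from the example filenames, not a documented API:

```ruby
t = Time.utc(2026, 5, 6, 10, 15)

minute_bucket = t.strftime("%Y%m%d%H%M") # => "202605061015"
hour_bucket   = t.strftime("%Y%m%d%H")   # => "2026050610"
```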

The readable folder names include stable hashes, so unusual namespaces and
event names stay safe on filesystems while still being recognizable.

File stream storage uses `time_bucket_stream` internally. It writes JSONL files
by UTC minute and only processes inactive past-minute files. That keeps
processors away from the active append file.

Sync modes:

```ruby
sync: :none  # fastest; OS buffers decide when bytes hit disk
sync: :flush # default; flush after each event
sync: :fsync # strongest; fsync after each event
```

### Redis Stream Storage

Redis stream storage writes one Redis stream per event name:

```text
billing_app:event_meter:v1:stream:invoice_delivery
```

By default, Redis stream storage reads all available rows. If you need a cap for
a very large backlog:

```ruby
config.redis_read_limit = 10_000
```

If you instantiate Redis stores yourself, you can pass `lock_redis:` when you
want process locks to use their own connection:

```ruby
EventMeter::Stores::Stream::Redis.new(
  redis: Redis.new,
  lock_redis: Redis.new,
  namespace: "billing_app:event_meter:v1"
)

EventMeter::Stores::Rollup::Redis.new(
  redis: Redis.new,
  lock_redis: Redis.new,
  namespace: "billing_app:event_meter:v1"
)
```

### Rollup Keys

Rollup keys include namespace, event name, report version, bucket, and index.

For `invoice_delivery` version 1:

```text
billing_app:event_meter:v1:rollup:invoice_delivery:v1:minute:202605061015:all
billing_app:event_meter:v1:rollup:invoice_delivery:v1:hour:2026050610:provider=postmark
billing_app:event_meter:v1:state:invoice_delivery:v1:interval:customer_id:42
billing_app:event_meter:v1:definition:invoice_delivery:v1
```
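
The segments compose as colon-joined parts. A hypothetical helper (the method name and signature here are illustrative, not part of EventMeter's API) reproduces the first key above:

```ruby
# Illustrative only: EventMeter builds these keys internally.
def rollup_key(namespace:, event:, version:, granularity:, bucket:, index:)
  [namespace, "rollup", event, "v#{version}", granularity, bucket, index].join(":")
end

rollup_key(
  namespace: "billing_app:event_meter:v1",
  event: "invoice_delivery",
  version: 1,
  granularity: "minute",
  bucket: "202605061015",
  index: "all"
)
# => "billing_app:event_meter:v1:rollup:invoice_delivery:v1:minute:202605061015:all"
```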

When multiple buckets are read, EventMeter combines them by summing counts and
sums, and by taking the min/max of min/max fields. It never averages averages.
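
A minimal sketch of that combine rule (not EventMeter's internal code): sum the counts and sums, take the min of mins and max of maxes, and derive averages from the combined totals:

```ruby
def combine(buckets)
  merged = {
    count: buckets.sum { |b| b[:count] },
    duration_ms_sum: buckets.sum { |b| b[:duration_ms_sum] },
    duration_ms_min: buckets.map { |b| b[:duration_ms_min] }.min,
    duration_ms_max: buckets.map { |b| b[:duration_ms_max] }.max
  }
  # Averages are always recomputed from combined totals, never averaged.
  merged[:duration_ms_avg] = merged[:duration_ms_sum].to_f / merged[:count]
  merged
end

combine([
  { count: 2, duration_ms_sum: 100, duration_ms_min: 40, duration_ms_max: 60 },
  { count: 1, duration_ms_sum: 80,  duration_ms_min: 80, duration_ms_max: 80 }
])
# => {count: 3, duration_ms_sum: 180, duration_ms_min: 40, duration_ms_max: 80, duration_ms_avg: 60.0}
```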
|
|
724
|
+
|
|
725
|
+
### PostgreSQL Rollups
|
|
726
|
+
|
|
727
|
+
Rails apps should normally use `EventMeter::Rails.configure`, shown in the
|
|
728
|
+
install section. It wraps ActiveRecord's connection pool so request and job
|
|
729
|
+
processes do not share one long-lived raw `PG::Connection`.
|
|
730
|
+
|
|
731
|
+
Install the tables:
|
|
732
|
+
|
|
733
|
+
```ruby
|
|
734
|
+
EventMeter::Stores::Rollup::Postgres.install!(connection: connection)
|
|
735
|
+
```
|
|
736
|
+
|
|
737
|
+
Configure:
|
|
738
|
+
|
|
739
|
+
```ruby
|
|
740
|
+
EventMeter.configure do |config|
|
|
741
|
+
config.namespace = "billing_app:event_meter:v1"
|
|
742
|
+
config.rollup_storage = EventMeter::Stores::Rollup::Postgres.new(
|
|
743
|
+
connection: connection,
|
|
744
|
+
lock_connection: lock_connection,
|
|
745
|
+
namespace: config.namespace
|
|
746
|
+
)
|
|
747
|
+
end
|
|
748
|
+
```
|
|
749
|
+
|
|
750
|
+
`connection` and `lock_connection` should respond to `exec(sql)` and
|
|
751
|
+
`exec_params(sql, params)`, such as `PG::Connection` objects. Use a separate
|
|
752
|
+
`lock_connection` so long rollup transactions do not block lock lease refreshes.
|
|
753
|
+
The Rails ActiveRecord adapter creates that separate lock connection for you.
|
|
754
|
+
|
|
755
|
+
Run `process_pending` outside caller-managed database transactions. Raw
|
|
756
|
+
PostgreSQL storage opens its own `BEGIN`/`COMMIT` around each rollup apply, and
|
|
757
|
+
the ActiveRecord adapter checks out one connection for that transaction so the
|
|
758
|
+
same connection is used until commit or rollback.
|
|
759
|
+
|
|
760
|
+
PostgreSQL storage keeps all report data in three tables:
|
|
761
|
+
|
|
762
|
+
| Table | Purpose |
|
|
763
|
+
| --- | --- |
|
|
764
|
+
| `event_meter_rollups` | Minute and hour buckets. |
|
|
765
|
+
| `event_meter_strings` | Report definitions, cleanup watermark, and interval state. |
|
|
766
|
+
| `event_meter_processed_entries` | Temporary retry guards for stream rows that were applied but not deleted yet. |
|
|
767
|
+
|
|
768
|
+
`install!` also creates prefix indexes for cleanup and report-key scans, so old
|
|
769
|
+
history can be removed by SQL predicates instead of loading all keys into Ruby.
|
|
770
|
+
|
|
771
|
+
CLI helpers:
|
|
772
|
+
|
|
773
|
+
```sh
|
|
774
|
+
event_meter postgres schema --table-prefix event_meter
|
|
775
|
+
event_meter postgres install --url "$DATABASE_URL" --table-prefix event_meter
|
|
776
|
+
```
|
|
777
|
+
|
|
778
|
+
## Idempotency And Concurrency
|
|
779
|
+
|
|
780
|
+
Processing reads stream entries, updates rollup storage, marks entry ids as
|
|
781
|
+
processed, then deletes the processed stream rows.
|
|
782
|
+
|
|
783
|
+
```text
|
|
784
|
+
record event
|
|
785
|
+
-> stream storage
|
|
786
|
+
|
|
787
|
+
process pending
|
|
788
|
+
-> minute rollups
|
|
789
|
+
-> hour rollups
|
|
790
|
+
-> interval state
|
|
791
|
+
-> processed-entry guards
|
|
792
|
+
-> delete processed stream rows
|
|
793
|
+
-> delete processed-entry guards for those rows
|
|
794
|
+
```
|
|

If rollup writes succeed but stream deletion fails, the next processing pass may
read the same stream rows again. Processed-entry markers make that safe: those
ids are skipped and do not double count. Once the stream rows are deleted,
EventMeter removes the matching processed-entry markers immediately.

File rollup storage keeps those processed-entry markers beside the stream-file
lifecycle instead of inside one large state file:

```text
rollups/billing_app-event_meter-v1-ff44ee2453002022/invoice_delivery-f2a49ce1aca9419c/v1/processed/
  202605061015-host-1234-abcd.jsonl.processed.json
```

After the stream file is deleted, EventMeter deletes the matching processed
sidecar too.

Redis and PostgreSQL rollup storage keep processed-entry markers scoped by
namespace, event name, report version, and stream entry id. That means the same
stream id can appear in different event streams or report versions without
colliding. Redis stores those markers as keys; PostgreSQL stores them in
`event_meter_processed_entries`.
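
That scoping can be pictured as a composite key. A minimal sketch, assuming an
illustrative key layout rather than EventMeter's exact storage format:

```ruby
# Build an illustrative processed-entry marker key scoped by namespace,
# event name, report version, and stream entry id, so identical stream
# ids in different streams or report versions never collide.
def marker_key(namespace:, event:, version:, entry_id:)
  [namespace, event, "v#{version}", entry_id].join(":")
end

a = marker_key(namespace: "billing_app", event: "invoice_delivery",
               version: 1, entry_id: "42")
b = marker_key(namespace: "billing_app", event: "payment_capture",
               version: 1, entry_id: "42")

a      # => "billing_app:invoice_delivery:v1:42"
a == b # => false: same entry id, different events, distinct markers
```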

Split file rollups also protect partial writes. Bucket and shard files keep a
temporary `_applied` map while a stream file is still retryable:

```json
{
  "_applied": {
    "6e559ad9...": "2026-05-06T10:15:03.000000Z"
  },
  "provider=postmark": {
    "count": "12",
    "duration_ms_sum": "8400"
  }
}
```

If processing dies after writing some bucket files but before marking every
write as finished, retrying the same stream entries skips bucket files that
already have the transaction id and applies only the missing files. Once the
stream file is deleted, EventMeter removes the sidecar and those `_applied`
markers.

Plain count and duration rollup writes are merge-safe and can run concurrently
when stream storage gives each processor different rows. Interval metrics need a
processing lock because they advance "previous start time" state. Redis and
PostgreSQL use refreshed locks with TTL-backed recovery, and file rollups use a
process lock file.
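
The lock behavior can be sketched with an in-memory stand-in: a live holder
keeps refreshing an expiry while working, and a stale lock can be recovered once
its TTL lapses. This is an illustration of the pattern, not EventMeter's actual
Redis, PostgreSQL, or lock-file implementation:

```ruby
# In-memory sketch of a TTL-backed processing lock. A live holder keeps
# refreshing the expiry; if the holder dies, the TTL lapses and another
# processor can take the lock over.
class TtlLock
  def initialize(ttl:)
    @ttl = ttl
    @holder = nil
    @expires_at = nil
  end

  def acquire(owner, now: Time.now)
    # Grant when the lock is free, expired, or re-entrant.
    if @holder.nil? || now >= @expires_at || @holder == owner
      @holder = owner
      @expires_at = now + @ttl
      true
    else
      false
    end
  end

  def refresh(owner, now: Time.now)
    return false unless @holder == owner
    @expires_at = now + @ttl
    true
  end
end

lock = TtlLock.new(ttl: 30)
t0 = Time.now
r1 = lock.acquire("worker-1", now: t0)      # => true
r2 = lock.acquire("worker-2", now: t0 + 5)  # => false, still held
r3 = lock.refresh("worker-1", now: t0 + 20) # => true, expiry pushed out
r4 = lock.acquire("worker-2", now: t0 + 49) # => false, refresh kept it alive
r5 = lock.acquire("worker-2", now: t0 + 51) # => true, TTL lapsed
```

The refresh step is what lets a healthy processor hold the lock for longer than
one TTL while a crashed one still gets recovered automatically.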

Redis stream storage also uses a refreshed processing lock so two processors do
not read the same pending stream rows at the same time. File stream storage uses
`time_bucket_stream` claims instead, so processors can safely take different
inactive files in parallel.

Rollup writes are safe for multiple processors updating the same bucket:

- counts and sums are additive
- min and max fields merge as min and max
- interval state only moves forward
- late processing writes to the event's original `started_at` bucket
- split file rollups use per-batch `_applied` guards so retries do not double count partial writes
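
Those merge rules can be sketched as a pure function over two snapshots of the
same bucket. The field names below are illustrative:

```ruby
# Merge two rollup snapshots for the same bucket: counts and sums add,
# min fields take the min, max fields take the max. Because the merge is
# commutative, concurrent processors can apply their batches in any order.
def merge_buckets(a, b)
  {
    count: a[:count] + b[:count],
    duration_ms_sum: a[:duration_ms_sum] + b[:duration_ms_sum],
    duration_ms_min: [a[:duration_ms_min], b[:duration_ms_min]].min,
    duration_ms_max: [a[:duration_ms_max], b[:duration_ms_max]].max
  }
end

left  = { count: 12, duration_ms_sum: 8400, duration_ms_min: 120, duration_ms_max: 950 }
right = { count: 3,  duration_ms_sum: 1500, duration_ms_min: 90,  duration_ms_max: 600 }

merge_buckets(left, right)
# => { count: 15, duration_ms_sum: 9900, duration_ms_min: 90, duration_ms_max: 950 }
```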

Malformed stream rows are marked processed and deleted instead of blocking the
queue. They do not contribute to rollups.

## Cleanup

Enable automatic cleanup if this app processes events continuously:

```ruby
EventMeter.configure do |config|
  config.auto_cleanup_history = true
  config.cleanup_history_retention = 31 * 24 * 60 * 60
  config.cleanup_history_interval = 60 * 60
end
```

Automatic cleanup is off by default. With this setting, `process_pending`
periodically removes old processed data:

- minute and hour rollup buckets older than `cleanup_history_retention`
- interval state older than `cleanup_history_retention`
- leftover processed-entry markers from interrupted cleanup or failed stream deletion

The cleanup pass is guarded by the rollup storage lock and a storage-backed
watermark, so busy workers do not all clean on every call. Raw unprocessed
stream events are not deleted by age; they are removed only after they are
successfully processed. File stream quarantine retention is handled by
`time_bucket_stream` when that stream is read.
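
The watermark guard amounts to an interval check against a stored last-run
timestamp. A minimal sketch with hypothetical names, not EventMeter's internals:

```ruby
# Cleanup runs only when the stored watermark (last run time) is older
# than the configured interval, so many busy workers calling
# process_pending do not all trigger cleanup on every call.
def cleanup_due?(last_run_at, interval, now: Time.now)
  last_run_at.nil? || now - last_run_at >= interval
end

interval = 60 * 60 # cleanup_history_interval: 1 hour
t0 = Time.now

cleanup_due?(nil, interval, now: t0)       # => true, never ran
cleanup_due?(t0, interval, now: t0 + 600)  # => false, ran 10 minutes ago
cleanup_due?(t0, interval, now: t0 + 3600) # => true, the interval elapsed
```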

Cleanup settings:

| Setting | Default | Meaning |
| --- | --- | --- |
| `auto_cleanup_history` | `false` | When true, `process_pending` occasionally runs history cleanup. |
| `cleanup_history_retention` | 31 days | Processed report data older than this can be removed. |
| `cleanup_history_interval` | 1 hour | Minimum time between automatic cleanup passes. |
| `auto_cleanup_error_handler` | `warn` | Callable invoked when automatic cleanup fails without interrupting processing. |

Use a longer retention if you need longer report windows:

```ruby
config.cleanup_history_retention = 90 * 24 * 60 * 60
```

Use a custom cleanup error handler when cleanup failures should go to your app's
logger or error tracker:

```ruby
config.auto_cleanup_error_handler = ->(error) { Rails.logger.warn(error.message) }
```

Manual cleanup is also available:

```ruby
EventMeter.cleanup_history(before: Time.now.utc - 30 * 24 * 60 * 60)
```

Example output:

```ruby
{
  rollup_keys_deleted: 120,
  interval_state_keys_deleted: 7,
  processed_entries_deleted: 842
}
```

Clean only selected event names:

```ruby
EventMeter.cleanup_history(
  before: Time.now.utc - 30 * 24 * 60 * 60,
  events: ["invoice_delivery"]
)
```

File rollup processed sidecars live under one event/version folder, so file
cleanup can remove orphan sidecars for that report. Redis and PostgreSQL
processed-entry markers are also event-scoped, so `events:` can clean old
markers for only the selected event names. Redis markers also expire with
`rollup_ttl`.

In normal successful processing, processed-entry markers are deleted right after
their stream rows are deleted; cleanup is mainly a fallback for interrupted
cleanup or old retained data.

## Development

Redis and PostgreSQL clients are development dependencies, so the full test
suite can exercise real storage backends when services are available.

```sh
bundle install
bundle exec rake test
```

Redis integration tests use `EVENT_METER_REDIS_URL` when it is set; otherwise
they try the default local Redis connection:

```sh
EVENT_METER_REDIS_URL=redis://127.0.0.1:6379/0 bundle exec ruby -Itest test/storage_test.rb
```

PostgreSQL integration tests use `EVENT_METER_POSTGRES_URL` or `DATABASE_URL`
when either is set. Without those variables, they try a local
`postgres:///event_meter_test` database. If local PostgreSQL is reachable and
that database does not exist, the tests create it automatically.

Each test still creates isolated tables with a random prefix and drops those
tables afterward:

```sh
EVENT_METER_POSTGRES_URL=postgres://localhost/event_meter_test bundle exec ruby -Itest test/storage_test.rb
```

With a normal local PostgreSQL install, this also works:

```sh
bundle exec ruby -Itest test/storage_test.rb
```

The storage stress test runs every stream and rollup pairing. It writes from
multiple threads and forked processes, kills a processor after rollups are
written but before stream rows are deleted, then starts competing retry
processors and verifies the final report counts:

```sh
EVENT_METER_POSTGRES_URL=postgres://localhost/event_meter_test bundle exec ruby -Itest test/stress_test.rb
```

Increase the load when you want a longer local run:

```sh
EVENT_METER_STRESS_COUNT=5000 EVENT_METER_POSTGRES_URL=postgres://localhost/event_meter_test bundle exec ruby -Itest test/stress_test.rb
```

The lightweight performance checks are part of the default test task and can be
run directly when changing storage, cleanup, scan, or batch-processing code:

```sh
bundle exec rake performance
```

They use broad wall-clock budgets to catch accidental nonlinear behavior without
turning normal development into benchmarking theater. Print JSON timing samples
with:

```sh
EVENT_METER_PERFORMANCE_REPORT=1 bundle exec rake performance
```

Useful performance options:

| Option | Default | Meaning |
| --- | --- | --- |
| `EVENT_METER_PERFORMANCE_BATCH_EVENTS` | `4000` | Events in the large in-memory processor batch. |
| `EVENT_METER_PERFORMANCE_BATCH_SECONDS` | `4.0` | Time budget for processing that batch. |
| `EVENT_METER_PERFORMANCE_FILE_HISTORY_MINUTES` | `720` | File-store history buckets created before cleanup. |
| `EVENT_METER_PERFORMANCE_FILE_CLEANUP_SECONDS` | `6.0` | Time budget for file rollup cleanup. |
| `EVENT_METER_PERFORMANCE_REDIS_SCAN_KEYS` | `4000` | Old rollup, state, and processed-entry keys used by the Redis scan cleanup check. |
| `EVENT_METER_PERFORMANCE_REDIS_SCAN_SECONDS` | `3.0` | Time budget for Redis-style scan cleanup. |

Run the soak test when you want to watch memory, file handles, stream files,
rollup rows, and cleanup behavior over time. It is opt-in and is not part of
the default test task:

```sh
EVENT_METER_SOAK_SECONDS=30 bundle exec rake soak
```

Use PostgreSQL rollups:

```sh
EVENT_METER_SOAK_SECONDS=120 \
EVENT_METER_SOAK_STREAM=file \
EVENT_METER_SOAK_ROLLUP=postgres \
EVENT_METER_POSTGRES_URL=postgres://localhost/event_meter_test \
bundle exec rake soak
```

Useful soak options:

| Option | Default | Meaning |
| --- | --- | --- |
| `EVENT_METER_SOAK_SECONDS` | `10` | How long to keep appending and processing. |
| `EVENT_METER_SOAK_BATCH_SIZE` | `100` | Events written each loop. |
| `EVENT_METER_SOAK_SLEEP_SECONDS` | `0.05` | Pause between loops. |
| `EVENT_METER_SOAK_REPORT_SECONDS` | `5` | How often to print resource samples. |
| `EVENT_METER_SOAK_STREAM` | `file` | `file` or `redis`. |
| `EVENT_METER_SOAK_ROLLUP` | `postgres` when a PostgreSQL URL is present, otherwise `file` | `file`, `redis`, or `postgres`. |
| `EVENT_METER_SOAK_CUSTOMERS` | `100` | Distinct customer IDs used for interval metrics. |
| `EVENT_METER_SOAK_DELETE_FAIL_EVERY` | `0` | Simulate a stream-delete miss every N process runs. |
| `EVENT_METER_SOAK_CLEANUP_SECONDS` | `0` | Run `cleanup_history` every N seconds. |

The soak runner prints JSON lines for `soak_start`, `soak_sample`, and
`soak_finish`. The final report includes before/after resource snapshots and
deltas. It also fails if final summaries do not match the number of written
events.

## Public API

```ruby
EventMeter.configure
EventMeter.start
EventMeter.process_pending
EventMeter.summary
EventMeter.series
EventMeter.compare
EventMeter.report_definition
EventMeter.cleanup_history
```

Lifecycle event methods:

```ruby
event.success
event.skip
event.failure
```

Report definition methods:

```ruby
report.index_by
report.measure_interval_by
```