whodunit-chronicles 0.1.0.pre → 0.2.0

data/README.md CHANGED
@@ -8,20 +8,167 @@
 
  > **The complete historical record of your _Whodunit Dun Wat?_ data**
 
+ > **💡 Origin Story:** Chronicles is inspired by the challenge of streaming database changes for real-time analytics without impacting application performance. The concept proved so effective in a previous project that it became the foundation for this Ruby implementation.
+
  While [Whodunit](https://github.com/kanutocd/whodunit) tracks _who_ made changes, **Chronicles** captures _what_ changed by streaming database events into comprehensive audit trails with **zero Rails application overhead**.
 
  ## ✨ Features
 
- - **🚄 Zero-Latency Streaming**: PostgreSQL logical replication
+ - **🚄 Zero-Latency Streaming**: PostgreSQL logical replication + MySQL/MariaDB binary log streaming
  - **🔄 Zero Application Overhead**: No Rails callbacks or Active Record hooks required
- - **🏗️ Database Agnostic**: Abstract adapter pattern supports PostgreSQL (TODO: MySQL/MariaDB support)
+ - **🏗️ Database Agnostic**: Abstract adapter pattern supports PostgreSQL and MySQL/MariaDB
  - **⚡ Thread-Safe**: Concurrent processing with configurable thread pools
  - **🛡️ Resilient**: Built-in error handling, retry logic, and monitoring
  - **📊 Complete Audit Trail**: Captures INSERT, UPDATE, DELETE with full before/after data
- - **🧪 VERY Soon to be Production Ready**: 94%+ test coverage with comprehensive error scenarios
+ - **🧪 Code Coverage**: 94%+ test coverage with comprehensive error scenarios
 
  ## 🚀 Quick Start
 
+ ### 🎯 Usage Scenarios
+
+ Chronicles excels at transforming database changes into business intelligence. Here are two common patterns:
+
+ #### 1. Basic Audit Trail Integration
+
+ Perfect for applications that need comprehensive change tracking alongside Whodunit's user attribution:
+
+ ```ruby
+ # Basic setup for user activity tracking
+ class BasicProcessor < Whodunit::Chronicles::Processor
+   def build_chronicles_record(change_event)
+     super.tap do |record|
+       # Add basic business context
+       record[:change_category] = categorize_change(change_event)
+       record[:business_impact] = assess_impact(change_event)
+     end
+   end
+
+   private
+
+   def categorize_change(change_event)
+     case change_event.table_name
+     when 'users' then 'user_management'
+     when 'posts' then 'content'
+     when 'comments' then 'engagement'
+     else 'system'
+     end
+   end
+ end
+ ```
+
+ **Use Case**: Blog platform tracking user posts and comments for community management and content moderation.
+
+ #### 2. Advanced Recruitment Analytics
+
+ Sophisticated business intelligence for talent acquisition platforms:
+
+ ```ruby
+ # Advanced processor for recruitment metrics
+ class RecruitmentAnalyticsProcessor < Whodunit::Chronicles::Processor
+   def build_chronicles_record(change_event)
+     super.tap do |record|
+       # Add recruitment-specific business metrics
+       record[:recruitment_stage] = determine_stage(change_event)
+       record[:funnel_position] = calculate_funnel_position(change_event)
+       record[:time_to_hire_impact] = assess_time_impact(change_event)
+       record[:cost_per_hire_impact] = calculate_cost_impact(change_event)
+
+       # Campaign attribution
+       record[:utm_source] = extract_utm_source(change_event)
+       record[:campaign_id] = extract_campaign_id(change_event)
+
+       # Quality metrics
+       record[:candidate_quality_score] = assess_candidate_quality(change_event)
+     end
+   end
+
+   def process(change_event)
+     record = build_chronicles_record(change_event)
+     store_audit_record(record)
+
+     # Stream to analytics platforms
+     stream_to_prometheus(record) if track_metrics?
+     update_grafana_dashboard(record)
+     trigger_real_time_alerts(record) if alert_worthy?(record)
+   end
+
+   private
+
+   def determine_stage(change_event)
+     return 'unknown' unless change_event.table_name == 'applications'
+
+     case change_event.new_data&.dig('status')
+     when 'submitted' then 'application'
+     when 'screening', 'in_review' then 'screening'
+     when 'interview_scheduled', 'interviewed' then 'interview'
+     when 'offer_extended', 'offer_accepted' then 'offer'
+     when 'hired' then 'hire'
+     else 'unknown'
+     end
+   end
+
+   def stream_to_prometheus(record)
+     # Track key recruitment metrics
+     RECRUITMENT_APPLICATIONS_TOTAL.increment(
+       source: record[:utm_source],
+       department: record.dig(:new_data, 'department')
+     )
+
+     if record[:action] == 'UPDATE' && status_changed_to_hired?(record)
+       RECRUITMENT_HIRES_TOTAL.increment(
+         source: record[:utm_source],
+         time_to_hire: record[:time_to_hire_impact]
+       )
+     end
+   end
+
+   def update_grafana_dashboard(record)
+     # Send time-series data for Grafana visualization
+     InfluxDB.write_point('recruitment_events', {
+       timestamp: record[:occurred_at],
+       table: record[:table_name],
+       action: record[:action],
+       stage: record[:recruitment_stage],
+       source: record[:utm_source],
+       cost_impact: record[:cost_per_hire_impact],
+       quality_score: record[:candidate_quality_score]
+     })
+   end
+ end
+ ```
+
+ **Use Case**: A hypothetical "Spherical Cow Talent" acquisition platform tracking the candidate journey from application through hire, with real-time dashboards showing conversion rates, time-to-hire, cost-per-hire, and source effectiveness.
+
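+ Several of the private helpers above (`status_changed_to_hired?`, `extract_utm_source`, `calculate_cost_impact`, and friends) are left to the reader. A minimal sketch of two of them, assuming the record carries the `old_data`/`new_data` hashes used elsewhere in this README (hypothetical implementations, adjust to your schema):
+
+ ```ruby
+ # Hypothetical helper implementations for RecruitmentAnalyticsProcessor.
+ def status_changed_to_hired?(record)
+   record.dig(:old_data, 'status') != 'hired' &&
+     record.dig(:new_data, 'status') == 'hired'
+ end
+
+ def extract_utm_source(change_event)
+   # Assumes applications persist their originating campaign parameters.
+   change_event.new_data&.dig('utm_source') || 'direct'
+ end
+ ```
+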
+ #### 📊 Visual Analytics Dashboard
+
+ The recruitment analytics processor creates comprehensive Grafana dashboards for executive reporting and operational insights:
+
+ <div align="center">
+
+ **Campaign Performance Analytics**
+ <a href="examples/images/campaign-performance-analytics.png" title="Click to view full size image">
+   <img src="examples/images/campaign-performance-analytics.png" width="300" />
+ </a>
+ _Track campaign ROI, cost-per-hire by channel, and conversion rates across marketing sources_
+
+ **Candidate Journey Analytics**
+ <a href="examples/images/candidate-journey-analytics.png" title="Click to view full size image">
+   <img src="examples/images/candidate-journey-analytics.png" width="300" />
+ </a>
+ _Monitor candidate engagement, funnel conversion rates, and application completion patterns_
+
+ **Recruitment Funnel Analytics**
+ <a href="examples/images/recruitment-funnel-analytics.png" title="Click to view full size image">
+   <img src="examples/images/recruitment-funnel-analytics.png" width="300" />
+ </a>
+ _Analyze hiring pipeline progression, department performance, and time-series trends_
+
+ </div>
+
+ These dashboards are automatically populated by Chronicles as candidates move through your hiring funnel, providing real-time visibility into recruitment performance without any manual data entry.
+
  ### Installation
 
  Add to your Gemfile:
@@ -41,7 +188,9 @@ gem install whodunit-chronicles
  ```ruby
  require 'whodunit/chronicles'
 
- # PostgreSQL Configuration
+ # Database Configuration
+
+ ## PostgreSQL Configuration
  Whodunit::Chronicles.configure do |config|
    config.adapter = :postgresql
    config.database_url = 'postgresql://localhost/myapp_production'
@@ -50,6 +199,14 @@ Whodunit::Chronicles.configure do |config|
    config.replication_slot_name = 'myapp_chronicles_slot'
  end
 
+ ## MySQL/MariaDB Configuration
+ Whodunit::Chronicles.configure do |config|
+   config.adapter = :mysql
+   config.database_url = 'mysql://user:password@localhost/myapp_production'
+   config.audit_database_url = 'mysql://user:password@localhost/myapp_audit'
+   config.mysql_server_id = 1001 # Unique server ID for replication
+ end
+
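+ ## Note: binary log streaming assumes the MySQL/MariaDB server runs with
+ ## binary logging enabled in row format (log_bin=ON, binlog_format=ROW) and
+ ## a unique server_id. A quick sanity check using the mysql2 gem
+ ## (hypothetical connection details):
+ #
+ #   client = Mysql2::Client.new(host: 'localhost', username: 'user', password: 'password')
+ #   client.query("SHOW VARIABLES LIKE 'binlog_format'").first
+ #   # => { "Variable_name" => "binlog_format", "Value" => "ROW" }
+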
  # Create and start the service
  service = Whodunit::Chronicles.service
  service.setup! # Create publication/replication setup
@@ -65,7 +222,7 @@ service.teardown! # Clean up database objects
 
  ## 🏗️ Architecture
 
- Chronicles uses **PostgreSQL logical replication** (TODO: **MySQL/MariaDB binary log streaming**) to capture database changes without impacting your application:
+ Chronicles uses **PostgreSQL logical replication** and **MySQL/MariaDB binary log streaming** to capture database changes without impacting your application:
 
  ```
  ┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
@@ -86,9 +243,9 @@ Chronicles uses **PostgreSQL logical replication** (TODO: **MySQL/MariaDB binary
 
  ### Core Components
 
- - **StreamAdapter**: Database-specific change streaming (PostgreSQL, MySQL/MariaDB)
+ - **StreamAdapter**: Database-specific change streaming (PostgreSQL logical replication, MySQL/MariaDB binary log streaming)
  - **ChangeEvent**: Unified change representation across adapters
- - **AuditProcessor**: Transforms changes into searchable audit records
+ - **Processor**: Transforms changes into searchable audit records
  - **Service**: Orchestrates streaming with error handling and retry logic
 
  ## ⚙️ Configuration
@@ -150,30 +307,162 @@ Chronicles creates structured audit records for each database change:
 
  ## 🔧 Advanced Usage
 
- ### Custom Audit Processing
+ ### Custom Processors for Analytics & Monitoring
+
+ **The real power of Chronicles** comes from creating custom processors tailored for your specific analytics needs. While Whodunit captures basic "who changed what," Chronicles lets you build sophisticated data pipelines for tools like **Grafana**, **DataDog**, or **Elasticsearch**.
+
+ Transform database changes into actionable business intelligence with features like:
+
+ - **25+ Custom Metrics**: Track business KPIs like conversion rates, time-to-hire, and cost-per-acquisition
+ - **Real-time Dashboards**: Stream data to Grafana for executive reporting and operational monitoring
+ - **Smart Alerting**: Trigger notifications based on business rules and thresholds
+ - **Multi-destination Streaming**: Send data simultaneously to multiple analytics platforms
+
+ #### Analytics-Focused Processor
 
  ```ruby
- class MyCustomProcessor < Whodunit::Chronicles::AuditProcessor
+ class AnalyticsProcessor < Whodunit::Chronicles::Processor
    def build_chronicles_record(change_event)
      super.tap do |record|
-       record[:custom_field] = extract_custom_data(change_event)
-       record[:environment] = Rails.env
+       # Add business metrics
+       record[:business_impact] = calculate_business_impact(change_event)
+       record[:user_segment] = determine_user_segment(change_event)
+       record[:feature_flag] = current_feature_flags
+
+       # Add performance metrics
+       record[:change_size] = calculate_change_size(change_event)
+       record[:peak_hours] = during_peak_hours?
+       record[:geographic_region] = user_region(change_event)
+
+       # Add time-series friendly fields for Grafana
+       record[:hour_of_day] = Time.current.hour
+       record[:day_of_week] = Time.current.wday
+       record[:is_weekend] = weekend?
+
+       # Custom tagging for filtering
+       record[:tags] = generate_tags(change_event)
+     end
+   end
+
+   private
+
+   def calculate_business_impact(change_event)
+     case change_event.table_name
+     when 'orders' then 'revenue_critical'
+     when 'users' then 'customer_critical'
+     when 'products' then 'inventory_critical'
+     else 'standard'
      end
    end
 
+   def determine_user_segment(change_event)
+     return 'anonymous' unless change_event.user_id
+
+     # Look up user tier from your business logic
+     User.find_by(id: change_event.user_id)&.tier || 'standard'
+   end
+
+   def generate_tags(change_event)
+     tags = [change_event.action.downcase]
+     tags << 'bulk_operation' if bulk_operation?(change_event)
+     tags << 'api_driven' if api_request?
+     tags << 'admin_action' if admin_user?(change_event.user_id)
+     tags
+   end
+ end
+ ```
+
+ #### Grafana Dashboard Ready
+
+ ```ruby
+ class GrafanaProcessor < Whodunit::Chronicles::Processor
+   def build_chronicles_record(change_event)
+     {
+       # Core metrics for Grafana time series
+       timestamp: change_event.occurred_at,
+       table_name: change_event.table_name,
+       action: change_event.action,
+
+       # Numerical metrics for graphs
+       records_affected: calculate_records_affected(change_event),
+       change_magnitude: calculate_change_magnitude(change_event),
+       user_session_duration: calculate_session_duration(change_event),
+
+       # Categorical dimensions for filtering
+       environment: Rails.env,
+       application_version: app_version,
+       database_instance: database_identifier,
+
+       # Business KPIs
+       revenue_impact: calculate_revenue_impact(change_event),
+       customer_satisfaction_risk: assess_satisfaction_risk(change_event),
+
+       # Performance indicators
+       query_duration_ms: extract_query_duration(change_event),
+       concurrent_users: current_concurrent_users,
+       system_load: current_system_load
+     }
+   end
+ end
+ ```
+
+ #### Real-Time Alerts Processor
+
+ ```ruby
+ class AlertingProcessor < Whodunit::Chronicles::Processor
+   def process(change_event)
+     record = build_chronicles_record(change_event)
+
+     # Store the audit record
+     store_audit_record(record)
+
+     # Real-time alerting logic
+     send_alert(record) if alert_worthy?(record)
+
+     # Stream to monitoring systems
+     stream_to_datadog(record) if production?
+     stream_to_grafana(record)
+   end
+
    private
 
-   def extract_custom_data(change_event)
-     # Your custom logic here
+   def alert_worthy?(record)
+     # Define your alerting criteria
+     record[:business_impact] == 'revenue_critical' ||
+       record[:records_affected] > 1000 ||
+       (record[:action] == 'DELETE' && record[:table_name] == 'orders')
+   end
+
+   def stream_to_grafana(record)
+     # Send metrics to Grafana via InfluxDB/Prometheus
+     InfluxDB.write_point("chronicles_events", record)
    end
  end
+ ```
+
+ #### Multiple Processor Pipeline
 
- # Use custom processor
+ ```ruby
+ # Chain multiple processors for different purposes
  service = Whodunit::Chronicles::Service.new(
-   processor: MyCustomProcessor.new
+   adapter: Adapters::PostgreSQL.new,
+   processor: CompositeProcessor.new([
+     AnalyticsProcessor.new,   # For business intelligence
+     AlertingProcessor.new,    # For real-time monitoring
+     ComplianceProcessor.new,  # For regulatory requirements
+     ArchivalProcessor.new     # For long-term storage
+   ])
  )
  ```
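+
+ `CompositeProcessor` is referenced above but not defined in this README; a minimal fan-out sketch, assuming each processor responds to `process(change_event)`:
+
+ ```ruby
+ # Hypothetical composite that forwards each change event to every
+ # wrapped processor, in order.
+ class CompositeProcessor
+   def initialize(processors)
+     @processors = processors
+   end
+
+   def process(change_event)
+     @processors.each { |processor| processor.process(change_event) }
+   end
+ end
+ ```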
 
+ **Use Cases:**
+
+ - **📊 Business Intelligence**: Track user behavior patterns, feature adoption, revenue impact
+ - **🚨 Real-Time Monitoring**: Alert on suspicious activities, bulk operations, data anomalies
+ - **📈 Performance Analytics**: Database performance metrics, query optimization insights
+ - **🔍 Compliance Auditing**: Regulatory compliance, data governance, access patterns
+ - **💡 Product Analytics**: Feature usage, A/B testing data, user journey tracking
+
  ### Service Monitoring
 
  ```ruby
@@ -198,6 +487,80 @@ end
 
  ## 🧪 Testing
 
+ ### Integration Testing
+
+ Test Chronicles with your Rails application using these patterns:
+
+ #### Basic Testing Pattern
+
+ ```ruby
+ # Test basic Chronicles functionality
+ class ChroniclesIntegrationTest < ActiveSupport::TestCase
+   def setup
+     @service = Whodunit::Chronicles.service
+     @service.setup!
+     @service.start
+   end
+
+   def teardown
+     @service.stop
+     @service.teardown!
+   end
+
+   def test_audit_record_creation
+     # Create a user (triggers Whodunit)
+     user = User.create!(name: "John", email: "john@example.com")
+
+     # Wait for Chronicles to process
+     sleep 1
+
+     # Check Chronicles audit record
+     audit_record = AuditRecord.find_by(
+       table_name: 'users',
+       action: 'INSERT',
+       record_id: { 'id' => user.id }
+     )
+
+     assert audit_record
+     assert_equal 'INSERT', audit_record.action
+     assert_equal user.name, audit_record.new_data['name']
+   end
+ end
+ ```
+
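+ The fixed `sleep 1` above can be flaky on slower machines; a small polling helper (hypothetical, not part of Chronicles) makes the wait deterministic:
+
+ ```ruby
+ # Hypothetical test helper: poll for the audit record instead of sleeping
+ # for a fixed interval.
+ def wait_for_audit_record(timeout: 5, interval: 0.1, **query)
+   deadline = Time.now + timeout
+   until Time.now > deadline
+     record = AuditRecord.find_by(**query)
+     return record if record
+     sleep interval
+   end
+   nil
+ end
+ ```
+
+ With this helper, `test_audit_record_creation` can replace the `sleep`/`find_by` pair with a single `wait_for_audit_record(table_name: 'users', action: 'INSERT')` call.
+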
+ #### Advanced Analytics Testing
+
+ ```ruby
+ # Test custom processor functionality
+ class RecruitmentAnalyticsTest < ActiveSupport::TestCase
+   def setup
+     @processor = RecruitmentAnalyticsProcessor.new
+   end
+
+   def test_recruitment_stage_determination
+     change_event = create_change_event(
+       table_name: 'applications',
+       action: 'UPDATE',
+       new_data: { 'status' => 'hired' }
+     )
+
+     record = @processor.build_chronicles_record(change_event)
+
+     assert_equal 'hire', record[:recruitment_stage]
+     assert record[:cost_per_hire_impact]
+   end
+
+   def test_metrics_streaming
+     # Mock Prometheus and Grafana integrations
+     assert_difference 'RECRUITMENT_HIRES_TOTAL.get' do
+       @processor.stream_to_prometheus(hired_record)
+     end
+   end
+ end
+ ```
+
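+ The `create_change_event` helper above is assumed; a minimal stand-in using a `Struct` (the gem's real `Whodunit::Chronicles::ChangeEvent` constructor may differ):
+
+ ```ruby
+ # Hypothetical factory for tests. Swap in the gem's ChangeEvent class if
+ # its constructor accepts these attributes.
+ TestChangeEvent = Struct.new(:table_name, :action, :old_data, :new_data, keyword_init: true)
+
+ def create_change_event(table_name:, action:, old_data: nil, new_data: nil)
+   TestChangeEvent.new(table_name: table_name, action: action, old_data: old_data, new_data: new_data)
+ end
+ ```
+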
+ ### Unit Testing
+
  Chronicles includes comprehensive test coverage:
 
  ```bash
@@ -230,28 +593,50 @@ bundle exec brakeman
 
  ## 🤝 Contributing
 
+ We welcome contributions! Chronicles is designed to be extensible and work across different business domains.
+
  1. Fork the repository
- 2. Create your feature branch (`git checkout -b feature/amazing-feature`)
- 3. Make your changes with tests
- 4. Ensure tests pass (`bundle exec rake test`)
- 5. Ensure RuboCop passes (`bundle exec rubocop`)
+ 2. Set up your development environment:
+    ```bash
+    bundle install
+    bundle exec rake test # Ensure tests pass
+    ```
+ 3. Create your feature branch (`git checkout -b feature/amazing-feature`)
+ 4. Make your changes with comprehensive tests
+ 5. Test your changes:
+    - Unit tests: `bundle exec rake test`
+    - Code style: `bundle exec rubocop`
+    - Security: `bundle exec bundler-audit check`
  6. Commit your changes (`git commit -m 'Add amazing feature'`)
  7. Push to the branch (`git push origin feature/amazing-feature`)
- 8. Open a Pull Request
+ 8. Open a Pull Request with a detailed description
+
+ ### Contributing Custom Processors
+
+ We especially welcome custom processors for different business domains. Consider contributing processors for:
+
+ - E-commerce analytics (order tracking, inventory management)
+ - Financial services (transaction monitoring, compliance reporting)
+ - Healthcare (patient data tracking, regulatory compliance)
+ - Education (student progress, course analytics)
+ - SaaS metrics (user engagement, feature adoption)
 
  ## 📋 Requirements
 
  - **Ruby**: 3.1.0 or higher
  - **PostgreSQL**: 10.0 or higher (with logical replication enabled)
+ - **MySQL/MariaDB**: 5.6 or higher (with binary logging enabled)
 
  ## 🗺️ Roadmap
 
- - [ ] **MySQL/MariaDB Support**: MySQL/MariaDB databases binlog streaming adapter
+ - [ ] **Prometheus Metrics**: Production monitoring integration (complete codebase included in examples/)
+ - [ ] **Advanced Example Apps**: Real-world use cases with complete monitoring stack (complete codebase included in examples/)
+ - [ ] **Custom Analytics Processors**: Business intelligence and real-time monitoring (complete codebase included in examples/)
+ - [x] **MySQL/MariaDB Support**: Binary log streaming adapter for MySQL/MariaDB databases
  - [ ] **Redis Streams**: Alternative lightweight streaming backend
  - [ ] **Compression**: Optional audit record compression
  - [ ] **Retention Policies**: Automated audit record cleanup
  - [ ] **Web UI**: Management interface for monitoring and configuration
- - [ ] **Prometheus Metrics**: Production monitoring integration
 
  ## 📚 Documentation
 
@@ -259,6 +644,7 @@ bundle exec brakeman
  - **[Configuration Guide](docs/configuration-todo.md)**
  - **[Architecture Deep Dive](docs/architecture-todo.md)**
  - **[PostgreSQL Setup](docs/postgresql-setup-todo.md)**
+ - **[MySQL/MariaDB Setup](docs/mysql-setup.md)**
  - **[Production Deployment](docs/production-todo.md)**
 
  ## 📄 License