eventstore_ruby 0.1.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/CHANGELOG.md +31 -0
- data/LICENSE +21 -0
- data/README.md +125 -0
- data/lib/eventstore_ruby/event.rb +24 -0
- data/lib/eventstore_ruby/event_filter.rb +19 -0
- data/lib/eventstore_ruby/event_query.rb +20 -0
- data/lib/eventstore_ruby/event_stream.rb +30 -0
- data/lib/eventstore_ruby/event_subscription.rb +16 -0
- data/lib/eventstore_ruby/filter.rb +20 -0
- data/lib/eventstore_ruby/memory_event_store.rb +59 -0
- data/lib/eventstore_ruby/memory_event_stream_notifier.rb +55 -0
- data/lib/eventstore_ruby/postgres/insert.rb +30 -0
- data/lib/eventstore_ruby/postgres/query.rb +63 -0
- data/lib/eventstore_ruby/postgres/schema.rb +51 -0
- data/lib/eventstore_ruby/postgres/transform.rb +67 -0
- data/lib/eventstore_ruby/postgres_event_store.rb +111 -0
- data/lib/eventstore_ruby/query_processor.rb +66 -0
- data/lib/eventstore_ruby/query_result.rb +13 -0
- data/lib/eventstore_ruby/read_write_lock_fifo.rb +112 -0
- data/lib/eventstore_ruby/version.rb +3 -0
- data/lib/eventstore_ruby.rb +21 -0
- metadata +110 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
+---
+SHA256:
+  metadata.gz: fac7abdb7fc8945f65b67176313c544971082fc5abca57a48fffe8f4b7b574e3
+  data.tar.gz: 773ecb26de6d7279e66c6963586b0c8f0f38e51c58d32e1f1ec2dbb47914a1f2
+SHA512:
+  metadata.gz: 61960c4a2a20672f1783777bbccc5032642de43cc507010b62d1e4578ddc0e4b15163c3f6c4d5a9d0270fd20f9676dbce4ac66b2cc7f3e7392377d8778d0afc2
+  data.tar.gz: aec2ebe2ba8c175202fd07be3e8fd255f3d3669ceebc2bc216f3427b434b2885292a5775089d3c56ab443b51e80ba8d6d3d0e915b8715d4c4de5fe6e6db5582e
data/CHANGELOG.md
ADDED
@@ -0,0 +1,31 @@
+# Changelog
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [0.1.0] – 2025-08-01
+### Added
+* First public release of **eventstore_ruby** – a 100 % feature-parity port of the `@eventstore-typescript` library.
+* Core value objects: `Event`, `EventRecord`, `EventFilter`, `QueryResult`.
+* In-memory infrastructure:
+  * `EventStream` (append-only log)
+  * `ReadWriteLockFIFO` (fair RW lock)
+  * `MemoryEventStore` with optimistic locking.
+  * `MemoryEventStreamNotifier` for real-time subscriptions.
+* PostgreSQL store:
+  * DDL helpers (`schema.rb`)
+  * SQL builders (`query.rb`, `insert.rb`)
+  * Data mappers (`transform.rb`)
+  * `PostgresEventStore` (optimistic locking, auto-notification, database initialisation helper).
+* Query tooling: recursive predicate matching in `QueryProcessor`.
+* Example Banking CLI app with fully-featured domain (open / list / deposit / withdraw / transfer / analytics) and rebuild commands.
+* Extensive Minitest suite mirroring original Jest tests.
+* Documentation (README) with quick-start instructions.
+
+### Changed
+* None – first release.
+
+### Fixed
+* N/A
data/LICENSE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2024 Manuel Ortega
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
data/README.md
ADDED
@@ -0,0 +1,125 @@
+# EventStoreRuby
+
+A faithful Ruby port of the [@ricofritzsche/eventstore](https://www.npmjs.com/package/@ricofritzsche/eventstore) TypeScript library.
+
+It provides the same event-sourcing primitives, including:
+
+* **EventStore implementations**
+  * `EventStoreRuby::MemoryEventStore` – fully in-memory (great in tests)
+  * `EventStoreRuby::PostgresEventStore` – persistent, PostgreSQL-backed
+* **Notifier** – `MemoryEventStreamNotifier` for real-time fan-out of appended events
+* **Query engine** – powerful payload-predicate matching identical to the original TS implementation
+* **Optimistic locking** – conditional `append` with `expected_max_sequence_number`
+
+## Installation
+
+Add to your Gemfile:
+
+```ruby
+gem 'eventstore_ruby', git: 'https://github.com/gazpachoteam/eventstore-ruby.git'
+# ⬆ until the gem is published to RubyGems
+```
+
+or (once released):
+
+```bash
+$ gem install eventstore_ruby
+```
+
+## Quick start (Memory)
+
+```ruby
+require 'eventstore_ruby'
+
+store = EventStoreRuby::MemoryEventStore.new
+
+# Subscribe **before** writing so you get real-time notifications
+subscription = store.subscribe do |events|
+  puts "got #{events.size} event(s): #{events.map(&:event_type).join(', ')}"
+end
+
+# Append events
+store.append([
+  EventStoreRuby::Event.new(event_type: 'UserRegistered', payload: {user_id: 1}),
+  EventStoreRuby::Event.new(event_type: 'EmailVerified', payload: {user_id: 1})
+])
+
+# Historical query
+filter = EventStoreRuby.create_filter(%w[UserRegistered EmailVerified])
+result = store.query(filter)
+puts "history has #{result.events.size} events, max seq ##{result.max_sequence_number}"
+
+subscription.unsubscribe
+```
+
+## Quick start (PostgreSQL)
+
+```bash
+# launch Postgres
+$ docker run --name eventstore-pg -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:15
+$ export DATABASE_URL="postgres://postgres:postgres@localhost:5432/postgres"
+```
+
+```ruby
+require 'eventstore_ruby'
+
+store = EventStoreRuby::PostgresEventStore.new
+store.initialize_database # creates table & indexes on first run
+
+# same API as memory store …
+```
+
+## API overview
+
+```ruby
+# Building filters
+filter = EventStoreRuby.create_filter(%w[UserRegistered], [{user_id: 1}])
+
+# Querying
+qr = store.query(filter) # => QueryResult (events + max_sequence_number)
+```
+### Combining multiple filters with OR – EventQuery
+
+Often you need to fetch events that match **any** of several filter criteria.
+Instead of doing multiple round-trips you can combine `EventFilter`s into a single
+`EventQuery` (logical OR):
+
+```ruby
+# Build individual filters
+f_users = EventStoreRuby.create_filter(['UserRegistered'])
+f_orders = EventStoreRuby.create_filter(['OrderPlaced'], [{country: 'DE'}])
+
+# Combine them
+query = EventStoreRuby.create_query([f_users, f_orders])
+
+# Works with both query and append
+result = store.query(query)
+
+# Conditional write based on the same logical context
+store.append(new_events, query, expected_max_sequence_number: result.max_sequence_number)
+```
+
+`EventQuery` is supported by **both** the in-memory and PostgreSQL stores. Under
+the hood it is translated into `OR` conditions in SQL for Postgres, and on the
+in-memory store it simply checks each contained filter one after another.
+```ruby
+# Conditional append (optimistic locking)
+store.append(events, filter, expected_max_sequence_number: qr.max_sequence_number)
+
+# Subscriptions
+sub = store.subscribe do |events| … end
+sub.unsubscribe
+```
+
+For detailed objects & methods see the YARD docs or the source under `lib/eventstore_ruby/`.
+
+## Development / Tests
+
+```bash
+$ bundle install
+$ rake test # runs Minitest suite (ports of all original Jest specs)
+```
+
+## License
+
+MIT © 2024 – originally by Ralf Westphal & Rico Fritzsche. Ruby port by Gazpacho Dev team (Manuel Ortega).
data/lib/eventstore_ruby/event.rb
ADDED
@@ -0,0 +1,24 @@
+module EventStoreRuby
+  # Base application-level event.
+  # Immutable: attributes are frozen after initialization.
+  class Event
+    attr_reader :event_type, :payload
+
+    def initialize(event_type:, payload: {})
+      @event_type = event_type.to_s.freeze
+      @payload = payload.freeze
+      freeze
+    end
+  end
+
+  # Persisted event enriched with sequence number and timestamp.
+  class EventRecord < Event
+    attr_reader :sequence_number, :timestamp
+
+    def initialize(sequence_number:, timestamp: Time.now, event_type:, payload: {})
+      @sequence_number = Integer(sequence_number)
+      @timestamp = timestamp
+      super(event_type: event_type, payload: payload)
+    end
+  end
+end
data/lib/eventstore_ruby/event_filter.rb
ADDED
@@ -0,0 +1,19 @@
+module EventStoreRuby
+  # Describes the subset of events to select when querying the store.
+  # Immutable and thread-safe.
+  class EventFilter
+    attr_reader :event_types, :payload_predicates
+
+    # event_types: [String] (required, can be empty for "all types")
+    # payload_predicates: Array<Hash> (optional) – each hash is a partial payload that must match.
+    def initialize(event_types:, payload_predicates: nil)
+      unless event_types.is_a?(Array) && event_types.all? { |t| t.is_a?(String) }
+        raise ArgumentError, 'event_types must be an array of strings'
+      end
+
+      @event_types = event_types.freeze
+      @payload_predicates = payload_predicates&.map(&:freeze)&.freeze
+      freeze
+    end
+  end
+end
data/lib/eventstore_ruby/event_query.rb
ADDED
@@ -0,0 +1,20 @@
+# frozen_string_literal: true
+
+module EventStoreRuby
+  # EventQuery represents a logical OR composition of multiple EventFilter
+  # instances. An event matches the query if it matches at least one of the
+  # contained filters.
+  class EventQuery
+    attr_reader :filters
+
+    def initialize(filters:)
+      unless filters.is_a?(Array) && filters.all? { |f| f.is_a?(EventFilter) }
+        raise ArgumentError, 'filters must be an Array<EventFilter>'
+      end
+      raise ArgumentError, 'filters array cannot be empty' if filters.empty?
+
+      @filters = filters.freeze
+      freeze
+    end
+  end
+end
data/lib/eventstore_ruby/event_stream.rb
ADDED
@@ -0,0 +1,30 @@
+module EventStoreRuby
+  # In-memory append-only event sequence.
+  class EventStream
+    attr_reader :event_records
+
+    def initialize
+      @event_records = []
+      @last_sequence_number = 0
+      @mutex = Mutex.new
+    end
+
+    # Appends Event objects, returns newly created EventRecord objects.
+    def append(events)
+      raise ArgumentError, 'events must be an Array' unless events.is_a?(Array)
+
+      @mutex.synchronize do
+        records = events.map do |event|
+          EventRecord.new(
+            sequence_number: (@last_sequence_number += 1),
+            timestamp: Time.now,
+            event_type: event.event_type,
+            payload: event.payload
+          )
+        end
+        @event_records.concat(records)
+        records
+      end
+    end
+  end
+end
data/lib/eventstore_ruby/event_subscription.rb
ADDED
@@ -0,0 +1,16 @@
+module EventStoreRuby
+  # Represents an active subscription to an EventStore.
+  class EventSubscription
+    attr_reader :id
+
+    def initialize(id, unsubscribe_proc)
+      @id = id
+      @unsubscribe_proc = unsubscribe_proc
+      freeze
+    end
+
+    def unsubscribe
+      @unsubscribe_proc.call
+    end
+  end
+end
data/lib/eventstore_ruby/filter.rb
ADDED
@@ -0,0 +1,20 @@
+module EventStoreRuby
+  # Factory/helper for building EventFilter objects conveniently.
+  # Usage: EventStoreRuby.create_filter(%w[UserCreated], [{ 'user_id' => 42 }])
+  def create_filter(event_types, payload_predicates = nil)
+    EventFilter.new(event_types: event_types, payload_predicates: payload_predicates)
+  end
+
+  module_function :create_filter
+
+  # Build an EventQuery combining multiple EventFilter objects with OR
+  # semantics. Example:
+  #   f1 = create_filter(['UserCreated'])
+  #   f2 = create_filter(['OrderPlaced'])
+  #   query = create_query([f1, f2])
+  def create_query(filters)
+    EventQuery.new(filters: filters)
+  end
+
+  module_function :create_query
+end
data/lib/eventstore_ruby/memory_event_store.rb
ADDED
@@ -0,0 +1,59 @@
+module EventStoreRuby
+  # Fully in-memory EventStore implementation.
+  class MemoryEventStore
+    def initialize(notifier: MemoryEventStreamNotifier.new)
+      @event_stream = EventStream.new
+      @notifier = notifier
+      @lock = ReadWriteLockFIFO.new
+    end
+
+    # Query events with optional filter.
+    # Returns QueryResult.
+    def query(filter = nil)
+      query_with_lock(filter, @lock)
+    end
+
+    # Convenience: return all events in the store
+    def query_all
+      query(nil)
+    end
+
+    def append(events, filter = nil, expected_max_sequence_number: nil)
+      raise ArgumentError, 'events must be an Array' unless events.is_a?(Array)
+
+      @lock.acquire_write
+      begin
+        if expected_max_sequence_number
+          current_query_result = query_with_lock(filter, nil) # Already under write lock, no read lock needed
+          if current_query_result.max_sequence_number != expected_max_sequence_number
+            raise 'eventstore-stores-memory-err05: Context changed: events were modified between query() and append()'
+          end
+        end
+
+        event_records = @event_stream.append(events)
+        @notifier.notify(event_records)
+      ensure
+        @lock.release_write
+      end
+    end
+
+    def subscribe(&handle)
+      @notifier.subscribe(handle)
+    end
+
+    private
+
+    def query_with_lock(filter, lock)
+      lock&.acquire_read
+      begin
+        matching = QueryProcessor.process(@event_stream.event_records, filter)
+        max_seq = matching.empty? ? 0 : matching.last.sequence_number
+        # With deterministic ReadWriteLockFIFO the additional sleep is no
+        # longer needed to satisfy ordering expectations in tests.
+        QueryResult.new(events: matching, max_sequence_number: max_seq)
+      ensure
+        lock&.release_read
+      end
+    end
+  end
+end
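`MemoryEventStore#append` only commits when the context selected by the filter still has the sequence number observed at query time. Below is a minimal sketch of that flow, not taken from the gem's own files; it assumes the gem and its runtime dependencies (`concurrent-ruby`, `pg`) are installed, and the `AccountOpened` event type is purely illustrative.

```ruby
require 'eventstore_ruby'

store  = EventStoreRuby::MemoryEventStore.new
filter = EventStoreRuby.create_filter(%w[AccountOpened])

# Read the current context and remember its version.
before = store.query(filter)

# Commits: nothing matching the filter has been written since the query.
store.append(
  [EventStoreRuby::Event.new(event_type: 'AccountOpened', payload: { account_id: 1 })],
  filter,
  expected_max_sequence_number: before.max_sequence_number
)

# Reusing the stale version raises the err05 "context changed" error.
begin
  store.append(
    [EventStoreRuby::Event.new(event_type: 'AccountOpened', payload: { account_id: 2 })],
    filter,
    expected_max_sequence_number: before.max_sequence_number
  )
rescue RuntimeError => e
  puts e.message
end
```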
data/lib/eventstore_ruby/memory_event_stream_notifier.rb
ADDED
@@ -0,0 +1,55 @@
+require 'concurrent'
+
+module EventStoreRuby
+  # In-memory notifier broadcasting EventRecord arrays to subscribers.
+  class MemoryEventStreamNotifier
+    def initialize
+      @subscriptions = {}
+      @counter = 0
+      @mutex = Mutex.new
+    end
+
+    # Subscribe with a handler Proc accepting events Array<EventRecord>.
+    # Returns EventSubscription.
+    def subscribe(handle)
+      raise ArgumentError, 'handle must respond_to :call' unless handle.respond_to?(:call)
+
+      id = nil
+      unsubscribe_proc = nil
+
+      @mutex.synchronize do
+        id = "notifier-sub-#{@counter += 1}"
+        @subscriptions[id] = handle
+        unsubscribe_proc = -> {
+          @mutex.synchronize { @subscriptions.delete(id) }
+        }
+      end
+
+      EventSubscription.new(id, unsubscribe_proc)
+    end
+
+    # Notify subscribers concurrently; errors are rescued and printed.
+    def notify(events)
+      return if events.empty?
+      subs = nil
+      @mutex.synchronize { subs = @subscriptions.values.dup }
+
+      # run each handler in a future so notification order is preserved per subscription but not blocked by others
+      futures = subs.map do |handle|
+        Concurrent::Future.execute do
+          begin
+            handle.call(events)
+          rescue StandardError => e
+            Kernel.warn "notifiers-memory-err01: Error notifying subscriber: #{e.message}\n#{e.backtrace.join("\n")}"
+          end
+        end
+      end
+
+      futures.each(&:wait) # Equivalent to Promise.allSettled
+    end
+
+    def close
+      @mutex.synchronize { @subscriptions.clear }
+    end
+  end
+end
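The notifier can also be exercised on its own: `subscribe` accepts anything that responds to `call`, and `notify` blocks until every handler future has finished. A small sketch under the same assumptions as the previous one (the event names are illustrative only):

```ruby
require 'eventstore_ruby'

notifier = EventStoreRuby::MemoryEventStreamNotifier.new

# Any callable works as a handler.
sub = notifier.subscribe(->(events) { puts events.map(&:event_type).join(', ') })

record = EventStoreRuby::EventRecord.new(
  sequence_number: 1,
  event_type: 'UserRegistered',
  payload: { user_id: 1 }
)

notifier.notify([record]) # waits for all handler futures (Promise.allSettled-style)
sub.unsubscribe           # later notify calls no longer reach this handler
notifier.close
```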
data/lib/eventstore_ruby/postgres/insert.rb
ADDED
@@ -0,0 +1,30 @@
+module EventStoreRuby
+  module Postgres
+    module InsertBuilder
+      module_function
+
+      # Returns [sql, params]
+      def build_cte_insert_query(filter, expected_max_seq)
+        version_sql, version_params = QueryBuilder.build_context_version_query(filter)
+        context_param_count = version_params.length
+        event_types_param = context_param_count + 1
+        payloads_param = context_param_count + 2
+
+        sql = <<~SQL
+          WITH context AS (
+            SELECT MAX(sequence_number) AS max_seq
+            FROM events
+            WHERE #{version_sql}
+          )
+          INSERT INTO events (event_type, payload)
+          SELECT unnest($#{event_types_param}::text[]), unnest($#{payloads_param}::jsonb[])
+          FROM context
+          WHERE COALESCE(max_seq, 0) = #{expected_max_seq}
+          RETURNING *
+        SQL
+
+        [sql, version_params]
+      end
+    end
+  end
+end
data/lib/eventstore_ruby/postgres/query.rb
ADDED
@@ -0,0 +1,63 @@
+module EventStoreRuby
+  module Postgres
+    module QueryBuilder
+      module_function
+
+      def pg_array_literal(strings)
+        '{' + strings.map { |s| s.gsub('"', '""') }.join(',') + '}'
+      end
+
+      # Returns [sql, params] for a SELECT query returning all matching events,
+      # ordered by ascending sequence number. Accepts an EventFilter or an
+      # EventQuery (logical OR of filters).
+      def build_context_query(filter_or_query)
+        where_sql, params = compile_conditions(filter_or_query)
+        ["SELECT * FROM events WHERE #{where_sql} ORDER BY sequence_number ASC", params]
+      end
+
+      # Returns [where_sql, params] for usage inside CTEs (no SELECT/ORDER BY).
+      def build_context_version_query(filter_or_query)
+        compile_conditions(filter_or_query)
+      end
+
+      # ------------------------------------------------------------------
+      # Internal helpers
+      # ------------------------------------------------------------------
+
+      def compile_conditions(filter_or_query)
+        sql_fragments = []
+        params = []
+
+        if filter_or_query.is_a?(EventStoreRuby::EventQuery)
+          filter_or_query.filters.each do |flt|
+            sql_fragments << single_filter_condition(flt, params)
+          end
+        else
+          sql_fragments << single_filter_condition(filter_or_query, params)
+        end
+
+        [sql_fragments.join(' OR '), params]
+      end
+
+      # Builds the SQL condition for a single EventFilter and pushes the
+      # corresponding parameters onto +params+ Array. Returns the condition
+      # string wrapped in parentheses.
+      def single_filter_condition(filter, params)
+        param_index = params.length + 1
+        params << pg_array_literal(filter.event_types)
+        condition = "event_type = ANY($#{param_index}::text[])"
+
+        if filter.payload_predicates&.any?
+          or_conditions = filter.payload_predicates.map do |predicate|
+            param_index = params.length + 1
+            params << predicate.to_json
+            "payload @> $#{param_index}"
+          end
+          condition += " AND (#{or_conditions.join(' OR ')})"
+        end
+
+        "(#{condition})"
+      end
+    end
+  end
+end
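`QueryBuilder` runs entirely in memory, so the SQL it emits can be inspected without a database connection. A sketch (not from the gem's test suite) of what a single filter with one payload predicate should compile to:

```ruby
require 'eventstore_ruby'

filter = EventStoreRuby.create_filter(%w[UserRegistered], [{ user_id: 1 }])
sql, params = EventStoreRuby::Postgres::QueryBuilder.build_context_query(filter)

puts sql
# => SELECT * FROM events WHERE (event_type = ANY($1::text[]) AND (payload @> $2)) ORDER BY sequence_number ASC
p params
# => ["{UserRegistered}", "{\"user_id\":1}"]
```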
data/lib/eventstore_ruby/postgres/schema.rb
ADDED
@@ -0,0 +1,51 @@
+require 'pg'
+require 'uri'
+
+module EventStoreRuby
+  module Postgres
+    CREATE_EVENTS_TABLE = <<~SQL.freeze
+      CREATE TABLE IF NOT EXISTS events (
+        sequence_number BIGSERIAL PRIMARY KEY,
+        occurred_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+        event_type TEXT NOT NULL,
+        payload JSONB NOT NULL
+      )
+    SQL
+
+    CREATE_EVENT_TYPE_INDEX = <<~SQL.freeze
+      CREATE INDEX IF NOT EXISTS idx_events_type ON events(event_type)
+    SQL
+
+    CREATE_OCCURRED_AT_INDEX = <<~SQL.freeze
+      CREATE INDEX IF NOT EXISTS idx_events_occurred_at ON events(occurred_at)
+    SQL
+
+    CREATE_PAYLOAD_GIN_INDEX = <<~SQL.freeze
+      CREATE INDEX IF NOT EXISTS idx_events_payload_gin ON events USING gin(payload)
+    SQL
+
+    module_function
+
+    def create_database_query(db_name)
+      # Safe quote identifier without needing PG C-extension helpers
+      quoted = '"' + db_name.gsub('"', '""') + '"'
+      "CREATE DATABASE #{quoted}"
+    end
+
+    # Replace DB name in a postgres connection url. naive but effective.
+    def change_database_in_connection_string(conn_str, new_db_name)
+      uri = URI.parse(conn_str)
+      uri.path = "/#{new_db_name}"
+      uri.to_s
+    end
+
+    def get_database_name_from_connection_string(conn_str)
+      uri = URI.parse(conn_str)
+      dbname = uri.path.start_with?('/') ? uri.path[1..] : uri.path
+      dbname.empty? ? nil : dbname
+    rescue URI::InvalidURIError
+      warn 'eventstore-stores-postgres-err01: Invalid connection string'
+      nil
+    end
+  end
+end
data/lib/eventstore_ruby/postgres/transform.rb
ADDED
@@ -0,0 +1,67 @@
+require 'json'
+
+module EventStoreRuby
+  module Postgres
+    module Transform
+      module_function
+
+      def deserialize_event(row)
+        raw_payload = row['payload'].is_a?(String) ? JSON.parse(row['payload']) : row['payload']
+        normalized = deep_snake_symbolize(raw_payload)
+        EventRecord.new(
+          sequence_number: row['sequence_number'].to_i,
+          timestamp: row['occurred_at'],
+          event_type: row['event_type'],
+          payload: normalized
+        )
+      end
+
+      # Recursively convert hash keys from camelCase / PascalCase strings to snake_case symbols
+      def deep_snake_symbolize(obj)
+        case obj
+        when Array
+          obj.map { |v| deep_snake_symbolize(v) }
+        when Hash
+          obj.each_with_object({}) do |(k, v), h|
+            snake = k.to_s.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase.to_sym
+            h[snake] = deep_snake_symbolize(v)
+          end
+        else
+          obj
+        end
+      end
+
+      def map_records_to_events(result)
+        result.map { |row| deserialize_event(row) }
+      end
+
+      def extract_max_sequence_number(result)
+        return 0 if result.nil?
+
+        if result.respond_to?(:ntuples)
+          return 0 if result.ntuples.zero?
+          last_row = result[result.ntuples - 1]
+          return last_row['sequence_number'].to_i
+        elsif result.is_a?(Array)
+          return 0 if result.empty?
+          return result.last['sequence_number'].to_i
+        else
+          0
+        end
+      end
+
+      def prepare_insert_params(events, context_params)
+        event_types_arr = []
+        payload_strings = []
+        events.each do |event|
+          event_types_arr << event.event_type
+          payload_strings << event.payload.to_json
+        end
+
+        evtypes_literal = '{' + event_types_arr.map { |s| s.gsub('"', '""') }.join(',') + '}'
+        payload_literal = '{' + payload_strings.map { |j| '"' + j.gsub('"','\"').gsub('\\','\\\\') + '"' }.join(',') + '}'
+        context_params + [evtypes_literal, payload_literal]
+      end
+    end
+  end
+end
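`Transform.deserialize_event` accepts anything shaped like a `pg` result row (a hash with string keys), which makes the key normalisation easy to see in isolation. A sketch with a hand-built row; the column values are made up:

```ruby
require 'eventstore_ruby'

row = {
  'sequence_number' => '7',
  'occurred_at'     => '2025-08-01 12:00:00+00',
  'event_type'      => 'OrderPlaced',
  'payload'         => '{"orderId": 42, "customerName": "Ada"}'
}

record = EventStoreRuby::Postgres::Transform.deserialize_event(row)
record.sequence_number # => 7
record.payload         # => {:order_id=>42, :customer_name=>"Ada"} – camelCase keys become snake_case symbols
```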
data/lib/eventstore_ruby/postgres_event_store.rb
ADDED
@@ -0,0 +1,111 @@
+require 'pg'
+require 'uri'
+require 'json'
+
+module EventStoreRuby
+  # Postgres-backed EventStore implementation.
+  class PostgresEventStore
+    NON_EXISTENT_EVENT_TYPE = "__NON_EXISTENT__#{rand(36**8).to_s(36)}".freeze
+
+    Options = Struct.new(:connection_string, :notifier, keyword_init: true)
+
+    def initialize(options = {})
+      opts = Options.new(options)
+      conn_str = opts.connection_string || ENV['DATABASE_URL']
+      raise 'eventstore-stores-postgres-err02: Connection string missing. DATABASE_URL environment variable not set.' unless conn_str
+
+      @database_name = Postgres.get_database_name_from_connection_string(conn_str)
+      raise "eventstore-stores-postgres-err03: Database name not found. Invalid connection string: #{conn_str}" unless @database_name
+
+      @conn_str = conn_str
+      @notifier = opts.notifier || MemoryEventStreamNotifier.new
+      @conn = PG.connect(conn_str)
+    end
+
+    # Returns QueryResult
+    def query(filter)
+      sql, params = Postgres::QueryBuilder.build_context_query(filter)
+      result = @conn.exec_params(sql, params)
+      QueryResult.new(
+        events: Postgres::Transform.map_records_to_events(result),
+        max_sequence_number: Postgres::Transform.extract_max_sequence_number(result)
+      )
+    end
+
+    def subscribe(&handle)
+      @notifier.subscribe(handle)
+    end
+
+    def append(events, filter = nil, expected_max_sequence_number: nil)
+      return if events.empty?
+
+      # Normalize the +filter+ to always be an EventQuery so the SQL helpers
+      # can work consistently. The TypeScript original expects an EventQuery
+      # (logical OR combination of EventFilter objects).
+
+      if filter.nil?
+        # No filter supplied – treat this as an "append to empty stream" use
+        # case. Create a dummy filter that can never match so the CTE INSERT
+        # only succeeds when the table is still empty (max_seq = 0).
+        dummy_filter = EventStoreRuby.create_filter([NON_EXISTENT_EVENT_TYPE])
+        filter = EventStoreRuby.create_query([dummy_filter])
+        expected_max_sequence_number = 0
+      elsif filter.is_a?(EventFilter)
+        # Wrap single EventFilter in an EventQuery for uniform handling
+        filter = EventStoreRuby.create_query([filter])
+      end
+
+      # At this point +filter+ is guaranteed to be an EventQuery with at least
+      # one inner EventFilter, so no further guard needed.
+
+      raise 'eventstore-stores-postgres-err04: Expected max sequence number is required when a filter is provided!' if expected_max_sequence_number.nil?
+
+      cte_sql, context_params = Postgres::InsertBuilder.build_cte_insert_query(filter, expected_max_sequence_number)
+      params = Postgres::Transform.prepare_insert_params(events, context_params)
+
+      result = @conn.exec_params(cte_sql, params)
+      if result.ntuples.zero?
+        raise 'eventstore-stores-postgres-err05: Context changed: events were modified between query() and append()'
+      end
+
+      inserted_events = Postgres::Transform.map_records_to_events(result)
+      @notifier.notify(inserted_events)
+    end
+
+    # Creates the database (if possible) and tables/indexes
+    def initialize_database
+      create_database
+      create_table_and_indexes
+    end
+
+    def close
+      @notifier.close
+      @conn.close
+    end
+
+    private
+
+    def create_database
+      # Connect to postgres default db to create target database
+      admin_conn_str = Postgres.change_database_in_connection_string(@conn_str, 'postgres')
+      admin_conn = PG.connect(admin_conn_str)
+      begin
+        admin_conn.exec(Postgres.create_database_query(@database_name))
+        puts "Database created: #{@database_name}"
+      rescue PG::DuplicateDatabase
+        puts "eventstore-stores-postgres-err06: Database already exists: #{@database_name}"
+      ensure
+        admin_conn.close
+      end
+    end
+
+    def create_table_and_indexes
+      [Postgres::CREATE_EVENTS_TABLE,
+       Postgres::CREATE_EVENT_TYPE_INDEX,
+       Postgres::CREATE_OCCURRED_AT_INDEX,
+       Postgres::CREATE_PAYLOAD_GIN_INDEX].each do |sql|
+        @conn.exec(sql)
+      end
+    end
+  end
+end
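The PostgreSQL store follows the same query-then-conditionally-append pattern as the memory store, with the version check pushed into the CTE `INSERT` built above. A sketch (not from the gem's files), assuming `DATABASE_URL` points at a reachable server, for example the Docker container from the README:

```ruby
require 'eventstore_ruby'

store = EventStoreRuby::PostgresEventStore.new
store.initialize_database # CREATE DATABASE (if needed) plus table and indexes

filter = EventStoreRuby.create_filter(%w[UserRegistered])
qr = store.query(filter)

# Inserted only if no UserRegistered event was written since the query;
# otherwise the err05 "context changed" error is raised.
store.append(
  [EventStoreRuby::Event.new(event_type: 'UserRegistered', payload: { user_id: 1 })],
  filter,
  expected_max_sequence_number: qr.max_sequence_number
)

store.close
```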
data/lib/eventstore_ruby/query_processor.rb
ADDED
@@ -0,0 +1,66 @@
+module EventStoreRuby
+  # In-memory filtering helpers. Mirrors the behaviour of the TypeScript queryprocessor.
+  module QueryProcessor
+    module_function
+
+    # Public: returns an array of EventRecord that satisfy the given filter.
+    # events - Array<EventRecord>
+    # filter - EventFilter or nil
+    def process(events, filter = nil)
+      return events.dup if filter.nil? # return a shallow copy to avoid mutating original array
+
+      events.select { |event| check_event(event, filter) }
+    end
+
+    # Internal -----------------------------------------------------------------
+
+    def check_event(event, filter)
+      if filter.is_a?(EventQuery)
+        # Match if any inner filter matches
+        filter.filters.any? { |inner| check_event(event, inner) }
+      else
+        check_event_types(event.event_type, filter.event_types) &&
+          check_predicates(event.payload, filter.payload_predicates)
+      end
+    end
+
+    def check_event_types(event_type, event_types)
+      return true if event_types.nil? || event_types.empty?
+      event_types.include?(event_type)
+    end
+
+    def check_predicates(payload, predicates)
+      return true if predicates.nil? || predicates.empty?
+      predicates.any? { |predicate| subset?(payload, predicate) }
+    end
+
+    # Recursive inclusion check: is predicate a subset of payload?
+    def subset?(payload, predicate)
+      return true if predicate.nil?
+      return false if payload.nil?
+
+      # Primitive values
+      if !payload.is_a?(Array) && !payload.is_a?(Hash)
+        return payload == predicate
+      end
+
+      if predicate.is_a?(Array)
+        return false unless payload.is_a?(Array)
+        # Every element of predicate must match at least one element of payload.
+        return predicate.all? do |pred_elem|
+          payload.any? { |payload_elem| subset?(payload_elem, pred_elem) }
+        end
+      elsif predicate.is_a?(Hash)
+        return false unless payload.is_a?(Hash)
+        predicate.each do |key, pred_val|
+          return false unless payload.key?(key)
+          return false unless subset?(payload[key], pred_val)
+        end
+        true
+      else
+        # Non-Array, non-Hash but payload might be Array/Hash when predicate is primitive.
+        payload == predicate
+      end
+    end
+  end
+end
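The recursive `subset?` check means a payload predicate only has to mention the keys it cares about, at any nesting depth. A small in-memory sketch with made-up order events:

```ruby
require 'eventstore_ruby'

records = [
  EventStoreRuby::EventRecord.new(
    sequence_number: 1, event_type: 'OrderPlaced',
    payload: { customer: { country: 'DE' }, items: [{ sku: 'a' }, { sku: 'b' }] }
  ),
  EventStoreRuby::EventRecord.new(
    sequence_number: 2, event_type: 'OrderPlaced',
    payload: { customer: { country: 'US' }, items: [{ sku: 'c' }] }
  )
]

# The predicate is a partial, nested description of the payload.
filter  = EventStoreRuby.create_filter(%w[OrderPlaced], [{ customer: { country: 'DE' } }])
matches = EventStoreRuby::QueryProcessor.process(records, filter)

matches.map(&:sequence_number) # => [1]
```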
data/lib/eventstore_ruby/query_result.rb
ADDED
@@ -0,0 +1,13 @@
+module EventStoreRuby
+  # Result object returned from EventStore#query.
+  # Contains the matching events and the max sequence_number observed.
+  class QueryResult
+    attr_reader :events, :max_sequence_number
+
+    def initialize(events:, max_sequence_number: 0)
+      @events = events.freeze
+      @max_sequence_number = Integer(max_sequence_number)
+      freeze
+    end
+  end
+end
data/lib/eventstore_ruby/read_write_lock_fifo.rb
ADDED
@@ -0,0 +1,112 @@
+# frozen_string_literal: true
+
+module EventStoreRuby
+  # A deterministic FIFO read-write lock mirroring the original TypeScript logic
+  # but using explicit ticket numbers so Ruby thread scheduling cannot reorder
+  # requests. Readers queued before a writer are granted together; writers get
+  # exclusive access in the exact order they arrived.
+  class ReadWriteLockFIFO
+    Entry = Struct.new(:type, :ticket, :cond, :granted)
+
+    def initialize
+      @read_count = 0
+      @write_locked = false
+      @queue = []
+      @next_ticket = 0
+      @mutex = Mutex.new
+    end
+
+    # Acquire a read lock (blocks until granted)
+    def acquire_read
+      wait_for_grant(create_entry(:read))
+    end
+
+    # Acquire a write lock (blocks until granted)
+    def acquire_write
+      wait_for_grant(create_entry(:write))
+    end
+
+    def release_read
+      @mutex.synchronize do
+        raise 'No read locks to release' if @read_count.zero?
+        @read_count -= 1
+        process_queue if @read_count.zero?
+      end
+    end
+
+    def release_write
+      @mutex.synchronize do
+        raise 'No write lock to release' unless @write_locked
+        @write_locked = false
+        process_queue
+      end
+    end
+
+    private
+
+    def create_entry(type)
+      @mutex.synchronize do
+        ticket = (@next_ticket += 1)
+        entry = Entry.new(type, ticket, ConditionVariable.new, false)
+        @queue << entry
+        process_queue
+        entry
+      end
+    end
+
+    def wait_for_grant(entry)
+      @mutex.synchronize do
+        entry.cond.wait(@mutex) until entry.granted
+      end
+      nil
+    end
+
+    # Grant locks according to arrival order
+    def process_queue
+      # called with mutex held
+      loop do
+        break if @queue.empty?
+
+        head = @queue.first
+
+        if head.type == :write
+          # writers need exclusive access
+          break if @read_count.positive? || @write_locked
+
+          grant_write(head)
+          @queue.shift
+          break # single writer granted
+        else
+          # grant all contiguous readers at front
+          break if @write_locked
+
+          readers = take_contiguous_front_readers
+          break if readers.empty?
+
+          readers.each { |e| grant_read(e) }
+          break
+        end
+      end
+    end
+
+    def take_contiguous_front_readers
+      readers = []
+      while (entry = @queue.first) && entry.type == :read
+        readers << @queue.shift
+      end
+      readers
+    end
+
+    def grant_read(entry)
+      entry.granted = true
+      @read_count += 1
+      entry.cond.signal
+    end
+
+    def grant_write(entry)
+      entry.granted = true
+      @write_locked = true
+      entry.cond.signal
+    end
+  end
+end
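The lock itself is independent of the event store and can be exercised directly: readers share access, a queued writer waits for all of them, and grants follow ticket order. A minimal sketch, not taken from the gem's tests:

```ruby
require 'eventstore_ruby'

lock = EventStoreRuby::ReadWriteLockFIFO.new

# Two readers hold the lock at the same time.
lock.acquire_read
lock.acquire_read

# A writer queued behind them blocks until both readers release.
writer = Thread.new do
  lock.acquire_write
  puts 'writer granted'
  lock.release_write
end

sleep 0.05        # give the writer time to queue up behind the readers
lock.release_read
lock.release_read # only now is the writer granted
writer.join
```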
data/lib/eventstore_ruby.rb
ADDED
@@ -0,0 +1,21 @@
+module EventStoreRuby
+  require_relative 'eventstore_ruby/version'
+  require_relative 'eventstore_ruby/event'
+  require_relative 'eventstore_ruby/event_query'
+  require_relative 'eventstore_ruby/event_filter'
+  require_relative 'eventstore_ruby/query_result'
+  require_relative 'eventstore_ruby/filter'
+  require_relative 'eventstore_ruby/query_processor'
+  require_relative 'eventstore_ruby/read_write_lock_fifo'
+  require_relative 'eventstore_ruby/event_stream'
+  require_relative 'eventstore_ruby/event_subscription'
+  require_relative 'eventstore_ruby/memory_event_stream_notifier'
+  require_relative 'eventstore_ruby/memory_event_store'
+  require_relative 'eventstore_ruby/postgres/schema'
+  require_relative 'eventstore_ruby/postgres/query'
+  require_relative 'eventstore_ruby/postgres/insert'
+  require_relative 'eventstore_ruby/postgres/transform'
+  require_relative 'eventstore_ruby/postgres_event_store'
+
+  # Additional requires will be added here as we port more classes.
+end
metadata
ADDED
@@ -0,0 +1,110 @@
+--- !ruby/object:Gem::Specification
+name: eventstore_ruby
+version: !ruby/object:Gem::Version
+  version: 0.1.1
+platform: ruby
+authors:
+- Manuel Ortega
+autorequire:
+bindir: bin
+cert_chain: []
+date: 2025-08-03 00:00:00.000000000 Z
+dependencies:
+- !ruby/object:Gem::Dependency
+  name: concurrent-ruby
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1.2'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1.2'
+- !ruby/object:Gem::Dependency
+  name: pg
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1.4'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1.4'
+- !ruby/object:Gem::Dependency
+  name: minitest
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '5.18'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '5.18'
+description: A Ruby gem providing the same event-sourcing primitives and in-memory/Postgres
+  stores as the @eventstore-typescript package.
+email:
+- maoc84@gmail.com
+executables: []
+extensions: []
+extra_rdoc_files: []
+files:
+- CHANGELOG.md
+- LICENSE
+- README.md
+- lib/eventstore_ruby.rb
+- lib/eventstore_ruby/event.rb
+- lib/eventstore_ruby/event_filter.rb
+- lib/eventstore_ruby/event_query.rb
+- lib/eventstore_ruby/event_stream.rb
+- lib/eventstore_ruby/event_subscription.rb
+- lib/eventstore_ruby/filter.rb
+- lib/eventstore_ruby/memory_event_store.rb
+- lib/eventstore_ruby/memory_event_stream_notifier.rb
+- lib/eventstore_ruby/postgres/insert.rb
+- lib/eventstore_ruby/postgres/query.rb
+- lib/eventstore_ruby/postgres/schema.rb
+- lib/eventstore_ruby/postgres/transform.rb
+- lib/eventstore_ruby/postgres_event_store.rb
+- lib/eventstore_ruby/query_processor.rb
+- lib/eventstore_ruby/query_result.rb
+- lib/eventstore_ruby/read_write_lock_fifo.rb
+- lib/eventstore_ruby/version.rb
+homepage: https://github.com/gazpachoteam/eventstore-ruby
+licenses:
+- MIT
+metadata:
+  source_code_uri: https://github.com/gazpachoteam/eventstore-ruby
+  changelog_uri: https://github.com/gazpachoteam/eventstore-ruby/blob/main/CHANGELOG.md
+  rubygems_mfa_required: 'true'
+post_install_message:
+rdoc_options: []
+require_paths:
+- lib
+required_ruby_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: '3.0'
+required_rubygems_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: '0'
+requirements: []
+rubygems_version: 3.5.23
+signing_key:
+specification_version: 4
+summary: Ruby port of the @eventstore-typescript library.
+test_files: []