pg_eventstore 0.6.0 → 0.7.0

Files changed (29)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +5 -0
  3. data/README.md +4 -15
  4. data/db/migrations/1_create_streams.sql +13 -0
  5. data/db/migrations/2_create_event_types.sql +10 -0
  6. data/db/migrations/3_create_events.sql +27 -0
  7. data/lib/pg_eventstore/config.rb +3 -1
  8. data/lib/pg_eventstore/rspec/test_helpers.rb +19 -0
  9. data/lib/pg_eventstore/tasks/setup.rake +26 -21
  10. data/lib/pg_eventstore/version.rb +1 -1
  11. metadata +10 -20
  12. data/db/initial/indexes.sql +0 -13
  13. data/db/initial/primary_and_foreign_keys.sql +0 -11
  14. data/db/initial/tables.sql +0 -21
  15. data/db/migrations/0_improve_all_stream_indexes.sql +0 -3
  16. data/db/migrations/12_improve_events_indexes.sql +0 -1
  17. data/db/migrations/13_remove_duplicated_index.sql +0 -1
  18. data/db/migrations/1_improve_specific_stream_indexes.sql +0 -3
  19. data/db/migrations/2_adjust_global_position_index.sql +0 -4
  20. data/db/migrations/3_extract_type_into_separate_table.sql +0 -17
  21. data/db/migrations/4_populate_event_types.rb +0 -11
  22. data/db/migrations/5_adjust_indexes.sql +0 -6
  23. data/db/migrations/6_change_events_event_type_id_null_constraint.sql +0 -1
  24. data/db/migrations/7_change_events_type_constraint.sql +0 -1
  25. data/db/migrations/8_drop_events_type.sql +0 -1
  26. data/db/{initial/extensions.sql → migrations/0_create_extensions.sql} +0 -0
  27. data/db/migrations/{9_create_subscriptions.sql → 4_create_subscriptions.sql} +0 -0
  28. data/db/migrations/{10_create_subscription_commands.sql → 5_create_subscription_commands.sql} +0 -0
  29. data/db/migrations/{11_create_subscriptions_set_commands.sql → 6_create_subscriptions_set_commands.sql} +0 -0
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 9f21137e2e5210ac3482ac7dc91d6e7bcb7e34e51e24cbbfa558f891d071fdf4
- data.tar.gz: 37e42584895549d55d99efd2bc1667916e5ded9c8a0b9539c2081cd2d0a1f323
+ metadata.gz: 28165ec2868d1585a23873a0aaeacf93cf5f088773f92b6d16255bc522e9d35f
+ data.tar.gz: 1c50c8958c7fd0ff4eb6b98aae2b81907ade4987078ab4d6711d7234cfd11850
  SHA512:
- metadata.gz: e2190a042255a6e9b59c6822d586dbc9b142a2189ffc2c3e192fd4bc96862f1c38ceb5307fcf8f4f16d5ab2a0b57852cb46ef3553855f129a08ce289a2293266
- data.tar.gz: 7b9e343f6fd66e620e8ae19c5d0bf480d393b69763c312786d983a7be90e13a2987ea8b6714268860dcd7d4e851674c755e88dc07c1da18f7bee82ae13480837
+ metadata.gz: 03f42aa5ee06cc45270e199a901fae2d24bbd2ebeeb68e7c6e05a5b81b28f78cd65b5bb98c348c8a31fa7446ba838181615234a1b9e81bfb2cdac72cfd98106a
+ data.tar.gz: c5b25f33166df1ad47616a1e0f3b24813992bdb68a8775b346565d9631f3f7cfc9c5e523ab68696c3dcd53cd4f4720b10d58df393361394d3a4b9a3a6aa294af
data/CHANGELOG.md CHANGED
@@ -1,5 +1,10 @@
  ## [Unreleased]

+ ## [0.7.0] - 2024-02-09
+
+ - Refactor `pg_eventstore:create` and `pg_eventstore:drop` rake tasks. They now actually create/drop the database. You will have to execute the `delete from migrations where number > 6` query before deploying this version.
+ - Drop legacy migrations
+
  ## [0.6.0] - 2024-02-08

  - Add stream info into `PgEventstore::WrongExpectedRevisionError` error details
data/README.md CHANGED
@@ -20,19 +20,7 @@ If bundler is not being used to manage dependencies, install the gem by executing

  ## Usage

- Before you start, make sure you created a database where events will be stored. A PostgreSQL user must be a superuser to be able to create tables, indexes, primary/foreign keys, etc. Please don't use an existing database/user for this purpose. Example of creating such database and user:
-
- ```bash
- sudo -u postgres createuser pg_eventstore --superuser
- sudo -u postgres psql --command="CREATE DATABASE eventstore OWNER pg_eventstore"
- sudo -u postgres psql --command="CREATE DATABASE eventstore OWNER pg_eventstore"
- ```
-
- If necessary - adjust your `pg_hba.conf` to allow `pg_eventstore` user to connect to your PostgreSQL server.
-
- Next step will be configuring a db connection. Please check the **Configuration** chapter bellow to find out how to do it.
-
- After the db connection is configured, it is time to create necessary database objects. Please include this line into your `Rakefile`:
+ Before you start using the gem, you have to create the database. Please include this line in your `Rakefile`:

  ```ruby
  load "pg_eventstore/tasks/setup.rake"
@@ -40,12 +28,13 @@

  This will include necessary rake tasks. You can now run
  ```bash
- export PG_EVENTSTORE_URI="postgresql://postgres:postgres@localhost:5532/postgres" # Replace this with your real connection url
+ # Replace this with your real connection url
+ export PG_EVENTSTORE_URI="postgresql://postgres:postgres@localhost:5532/eventstore"
  bundle exec rake pg_eventstore:create
  bundle exec rake pg_eventstore:migrate
  ```

- to create necessary database objects and migrate them to the actual version. After this step your `pg_eventstore` is ready to use.
+ to create the database and the necessary database objects, and to migrate them to the latest version. After this step your `pg_eventstore` is ready to use. There is also a `rake pg_eventstore:drop` task which drops the database.

  Documentation chapters:

data/db/migrations/1_create_streams.sql ADDED
@@ -0,0 +1,13 @@
+ CREATE TABLE public.streams
+ (
+     id bigserial NOT NULL,
+     context character varying NOT NULL,
+     stream_name character varying NOT NULL,
+     stream_id character varying NOT NULL,
+     stream_revision integer DEFAULT '-1'::integer NOT NULL
+ );
+
+ ALTER TABLE ONLY public.streams
+     ADD CONSTRAINT streams_pkey PRIMARY KEY (id);
+
+ CREATE UNIQUE INDEX idx_streams_context_and_stream_name_and_stream_id ON public.streams USING btree (context, stream_name, stream_id);
data/db/migrations/2_create_event_types.sql ADDED
@@ -0,0 +1,10 @@
+ CREATE TABLE public.event_types
+ (
+     id bigserial NOT NULL,
+     type character varying NOT NULL
+ );
+
+ ALTER TABLE ONLY public.event_types
+     ADD CONSTRAINT event_types_pkey PRIMARY KEY (id);
+
+ CREATE UNIQUE INDEX idx_event_types_type ON public.event_types USING btree (type);
data/db/migrations/3_create_events.sql ADDED
@@ -0,0 +1,27 @@
+ CREATE TABLE public.events
+ (
+     id uuid DEFAULT public.gen_random_uuid() NOT NULL,
+     stream_id bigint NOT NULL,
+     global_position bigserial NOT NULL,
+     stream_revision integer NOT NULL,
+     data jsonb DEFAULT '{}'::jsonb NOT NULL,
+     metadata jsonb DEFAULT '{}'::jsonb NOT NULL,
+     link_id uuid,
+     created_at timestamp without time zone DEFAULT now() NOT NULL,
+     event_type_id bigint NOT NULL
+ );
+
+ ALTER TABLE ONLY public.events
+     ADD CONSTRAINT events_pkey PRIMARY KEY (id);
+
+ CREATE INDEX idx_events_event_type_id_and_global_position ON public.events USING btree (event_type_id, global_position);
+ CREATE INDEX idx_events_global_position ON public.events USING btree (global_position);
+ CREATE INDEX idx_events_link_id ON public.events USING btree (link_id);
+ CREATE INDEX idx_events_stream_id_and_revision ON public.events USING btree (stream_id, stream_revision);
+
+ ALTER TABLE ONLY public.events
+     ADD CONSTRAINT events_stream_fk FOREIGN KEY (stream_id) REFERENCES public.streams (id) ON DELETE CASCADE;
+ ALTER TABLE ONLY public.events
+     ADD CONSTRAINT events_event_type_fk FOREIGN KEY (event_type_id) REFERENCES public.event_types (id);
+ ALTER TABLE ONLY public.events
+     ADD CONSTRAINT events_link_fk FOREIGN KEY (link_id) REFERENCES public.events (id) ON DELETE CASCADE;
data/lib/pg_eventstore/config.rb CHANGED
@@ -9,7 +9,9 @@ module PgEventstore
    # @!attribute pg_uri
    #   @return [String] PostgreSQL connection URI docs
    #   https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS
-   option(:pg_uri) { 'postgresql://postgres:postgres@localhost:5432/eventstore' }
+   option(:pg_uri) do
+     ENV.fetch('PG_EVENTSTORE_URI') { 'postgresql://postgres:postgres@localhost:5432/eventstore' }
+   end
    # @!attribute max_count
    #   @return [Integer] Number of events to return in one response when reading from a stream
    option(:max_count) { 1000 }
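With this change the default `pg_uri` honors `PG_EVENTSTORE_URI` out of the box. A minimal sketch of the lookup semantics (`resolve_pg_uri` is a hypothetical name, not part of the gem): the block form of `fetch` is lazy, so the fallback URI is only built when the variable is unset.

```ruby
# Hypothetical helper mirroring the new default: prefer PG_EVENTSTORE_URI,
# fall back to the local development URI. The block passed to #fetch is
# only evaluated when the key is missing.
def resolve_pg_uri(env = ENV)
  env.fetch('PG_EVENTSTORE_URI') { 'postgresql://postgres:postgres@localhost:5432/eventstore' }
end
```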
data/lib/pg_eventstore/rspec/test_helpers.rb ADDED
@@ -0,0 +1,19 @@
+ # frozen_string_literal: true
+
+ module PgEventstore
+   module TestHelpers
+     class << self
+       def clean_up_db
+         tables_to_purge = PgEventstore.connection.with do |conn|
+           conn.exec(<<~SQL)
+             SELECT tablename
+             FROM pg_catalog.pg_tables WHERE schemaname NOT IN ('pg_catalog', 'information_schema') AND tablename != 'migrations'
+           SQL
+         end.map { |attrs| attrs['tablename'] }
+         tables_to_purge.each do |table_name|
+           PgEventstore.connection.with { |c| c.exec("DELETE FROM #{table_name}") }
+         end
+       end
+     end
+   end
+ end
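The new helper purges every user-schema table except the `migrations` bookkeeping table. A rough, self-contained illustration of that behavior, using a hypothetical `StubConn` in place of a real `PgEventstore.connection` (its `user_tables` method stands in for the `pg_catalog.pg_tables` query):

```ruby
# Stub connection for illustration only; records the SQL it is asked to run.
class StubConn
  attr_reader :deletes

  def initialize(tables)
    @tables = tables
    @deletes = []
  end

  # The real helper gets this list from pg_catalog.pg_tables.
  def user_tables
    @tables
  end

  def exec(sql)
    @deletes << sql
  end
end

# Mirrors clean_up_db's logic: skip the migrations table, purge everything else.
def purge_all_but_migrations(conn)
  conn.user_tables.reject { |t| t == 'migrations' }.each do |table|
    conn.exec("DELETE FROM #{table}")
  end
end
```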
data/lib/pg_eventstore/tasks/setup.rake CHANGED
@@ -1,20 +1,36 @@
  # frozen_string_literal: true

+ helpers = Class.new do
+   class << self
+     def postgres_uri
+       @postgres_uri ||=
+         begin
+           uri = URI.parse(ENV.fetch('PG_EVENTSTORE_URI'))
+           uri.path = '/postgres'
+           uri.to_s
+         end
+     end
+
+     def db_name
+       @db_name ||= URI.parse(ENV.fetch('PG_EVENTSTORE_URI')).path&.delete("/")
+     end
+   end
+ end
+
  namespace :pg_eventstore do
    desc "Creates events table, indexes, etc."
    task :create do
      PgEventstore.configure do |config|
-       config.pg_uri = ENV['PG_EVENTSTORE_URI']
+       config.pg_uri = helpers.postgres_uri
      end

-     db_files_root = "#{Gem::Specification.find_by_name("pg_eventstore").gem_dir}/db/initial"
-
      PgEventstore.connection.with do |conn|
-       conn.transaction do
-         conn.exec(File.read("#{db_files_root}/extensions.sql"))
-         conn.exec(File.read("#{db_files_root}/tables.sql"))
-         conn.exec(File.read("#{db_files_root}/primary_and_foreign_keys.sql"))
-         conn.exec(File.read("#{db_files_root}/indexes.sql"))
+       exists =
+         conn.exec_params("SELECT 1 as exists FROM pg_database where datname = $1", [helpers.db_name]).first&.dig('exists')
+       if exists
+         puts "#{helpers.db_name} already exists. Skipping."
+       else
+         conn.exec("CREATE DATABASE #{conn.escape_string(helpers.db_name)} WITH OWNER #{conn.escape_string(conn.user)}")
        end
      end
    end
@@ -50,22 +66,11 @@ namespace :pg_eventstore do
    desc "Drops events table and related pg_eventstore objects."
    task :drop do
      PgEventstore.configure do |config|
-       config.pg_uri = ENV['PG_EVENTSTORE_URI']
+       config.pg_uri = helpers.postgres_uri
      end

      PgEventstore.connection.with do |conn|
-       conn.exec <<~SQL
-         DROP TABLE IF EXISTS public.events;
-         DROP TABLE IF EXISTS public.streams;
-         DROP TABLE IF EXISTS public.event_types;
-         DROP TABLE IF EXISTS public.migrations;
-         DROP TABLE IF EXISTS public.subscription_commands;
-         DROP TABLE IF EXISTS public.subscriptions_set_commands;
-         DROP TABLE IF EXISTS public.subscriptions_set;
-         DROP TABLE IF EXISTS public.subscriptions;
-         DROP EXTENSION IF EXISTS "uuid-ossp";
-         DROP EXTENSION IF EXISTS pgcrypto;
-       SQL
+       conn.exec("DROP DATABASE IF EXISTS #{conn.escape_string(helpers.db_name)}")
      end
    end
  end
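Both tasks now connect to the built-in `postgres` maintenance database, since `CREATE DATABASE` and `DROP DATABASE` cannot run against the database they target. A sketch of the two derivations the `helpers` class performs (`maintenance_uri` and `target_db_name` are illustrative names):

```ruby
require 'uri'

# Rewrites PG_EVENTSTORE_URI to point at the built-in "postgres" database,
# which the rake tasks connect to when creating/dropping the target db.
def maintenance_uri(pg_uri)
  uri = URI.parse(pg_uri)
  uri.path = '/postgres'
  uri.to_s
end

# Extracts the target database name from the URI path.
def target_db_name(pg_uri)
  URI.parse(pg_uri).path&.delete('/')
end
```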
data/lib/pg_eventstore/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module PgEventstore
-   VERSION = "0.6.0"
+   VERSION = "0.7.0"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: pg_eventstore
  version: !ruby/object:Gem::Version
- version: 0.6.0
+ version: 0.7.0
  platform: ruby
  authors:
  - Ivan Dzyzenko
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-02-08 00:00:00.000000000 Z
+ date: 2024-02-09 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: pg
@@ -49,24 +49,13 @@ files:
  - CODE_OF_CONDUCT.md
  - LICENSE.txt
  - README.md
- - db/initial/extensions.sql
- - db/initial/indexes.sql
- - db/initial/primary_and_foreign_keys.sql
- - db/initial/tables.sql
- - db/migrations/0_improve_all_stream_indexes.sql
- - db/migrations/10_create_subscription_commands.sql
- - db/migrations/11_create_subscriptions_set_commands.sql
- - db/migrations/12_improve_events_indexes.sql
- - db/migrations/13_remove_duplicated_index.sql
- - db/migrations/1_improve_specific_stream_indexes.sql
- - db/migrations/2_adjust_global_position_index.sql
- - db/migrations/3_extract_type_into_separate_table.sql
- - db/migrations/4_populate_event_types.rb
- - db/migrations/5_adjust_indexes.sql
- - db/migrations/6_change_events_event_type_id_null_constraint.sql
- - db/migrations/7_change_events_type_constraint.sql
- - db/migrations/8_drop_events_type.sql
- - db/migrations/9_create_subscriptions.sql
+ - db/migrations/0_create_extensions.sql
+ - db/migrations/1_create_streams.sql
+ - db/migrations/2_create_event_types.sql
+ - db/migrations/3_create_events.sql
+ - db/migrations/4_create_subscriptions.sql
+ - db/migrations/5_create_subscription_commands.sql
+ - db/migrations/6_create_subscriptions_set_commands.sql
  - docs/appending_events.md
  - docs/configuration.md
  - docs/events_and_streams.md
@@ -112,6 +101,7 @@ files:
  - lib/pg_eventstore/queries/transaction_queries.rb
  - lib/pg_eventstore/query_builders/events_filtering_query.rb
  - lib/pg_eventstore/rspec/has_option_matcher.rb
+ - lib/pg_eventstore/rspec/test_helpers.rb
  - lib/pg_eventstore/sql_builder.rb
  - lib/pg_eventstore/stream.rb
  - lib/pg_eventstore/subscriptions/basic_runner.rb
data/db/initial/indexes.sql DELETED
@@ -1,13 +0,0 @@
- CREATE UNIQUE INDEX idx_streams_context_and_stream_name_and_stream_id ON public.streams USING btree (context, stream_name, stream_id);
- -- This index is used when searching by the specific stream and event's types
- CREATE INDEX idx_events_stream_id_and_revision_and_type ON public.events USING btree (stream_id, stream_revision, type);
-
- -- This index is used when searching by "all" stream using stream's attributes(context, stream_name, stream_id) and
- -- event's types. PG's query planner picks this index when none of the given event's type exist
- CREATE INDEX idx_events_type_and_stream_id_and_position ON public.events USING btree (type, stream_id, global_position);
-
- -- This index is used when searching by "all" stream using stream's attributes(context, stream_name, stream_id) and
- -- event's types. PG's query planner picks this index when some of the given event's types exist
- CREATE INDEX idx_events_position_and_type ON public.events USING btree (global_position, type);
-
- CREATE INDEX idx_events_link_id ON public.events USING btree (link_id);
data/db/initial/primary_and_foreign_keys.sql DELETED
@@ -1,11 +0,0 @@
- ALTER TABLE ONLY public.streams
-     ADD CONSTRAINT streams_pkey PRIMARY KEY (id);
- ALTER TABLE ONLY public.events
-     ADD CONSTRAINT events_pkey PRIMARY KEY (id);
-
- ALTER TABLE ONLY public.events
-     ADD CONSTRAINT events_stream_fk FOREIGN KEY (stream_id)
-         REFERENCES public.streams (id) ON DELETE CASCADE;
- ALTER TABLE ONLY public.events
-     ADD CONSTRAINT events_link_fk FOREIGN KEY (link_id)
-         REFERENCES public.events (id) ON DELETE CASCADE;
data/db/initial/tables.sql DELETED
@@ -1,21 +0,0 @@
- CREATE TABLE public.streams
- (
-     id bigserial NOT NULL,
-     context character varying NOT NULL,
-     stream_name character varying NOT NULL,
-     stream_id character varying NOT NULL,
-     stream_revision int DEFAULT -1 NOT NULL
- );
-
- CREATE TABLE public.events
- (
-     id uuid NOT NULL DEFAULT public.gen_random_uuid(),
-     stream_id bigint NOT NULL,
-     type character varying NOT NULL,
-     global_position bigserial NOT NULL,
-     stream_revision int NOT NULL,
-     data jsonb NOT NULL DEFAULT '{}'::jsonb,
-     metadata jsonb NOT NULL DEFAULT '{}'::jsonb,
-     link_id uuid,
-     created_at timestamp without time zone NOT NULL DEFAULT now()
- );
data/db/migrations/0_improve_all_stream_indexes.sql DELETED
@@ -1,3 +0,0 @@
- CREATE INDEX idx_events_type_and_position ON public.events USING btree (type, global_position);
- CREATE INDEX idx_events_global_position ON public.events USING btree (global_position);
- DROP INDEX idx_events_position_and_type;
data/db/migrations/12_improve_events_indexes.sql DELETED
@@ -1 +0,0 @@
- CREATE INDEX idx_events_event_type_id_and_global_position ON public.events USING btree (event_type_id, global_position);
data/db/migrations/13_remove_duplicated_index.sql DELETED
@@ -1 +0,0 @@
- DROP INDEX idx_events_event_type_id;
data/db/migrations/1_improve_specific_stream_indexes.sql DELETED
@@ -1,3 +0,0 @@
- CREATE INDEX idx_events_stream_id_and_type_and_revision ON public.events USING btree (stream_id, type, stream_revision);
- CREATE INDEX idx_events_stream_id_and_revision ON public.events USING btree (stream_id, stream_revision);
- DROP INDEX idx_events_stream_id_and_revision_and_type;
data/db/migrations/2_adjust_global_position_index.sql DELETED
@@ -1,4 +0,0 @@
- CREATE INDEX idx_events_global_position_including_type ON public.events USING btree (global_position) INCLUDE (type);
- COMMENT ON INDEX idx_events_global_position_including_type IS 'Usually "type" column has low distinct values. Thus, composit index by "type" and "global_position" columns may not be picked by Query Planner properly. Improve an index by "global_position" by including "type" column which allows Query Planner to perform better by picking the correct index.';
-
- DROP INDEX idx_events_global_position;
data/db/migrations/3_extract_type_into_separate_table.sql DELETED
@@ -1,17 +0,0 @@
- CREATE TABLE public.event_types
- (
-     id bigserial NOT NULL,
-     type character varying NOT NULL
- );
-
- ALTER TABLE ONLY public.events ADD COLUMN event_type_id bigint;
-
- ALTER TABLE ONLY public.event_types
-     ADD CONSTRAINT event_types_pkey PRIMARY KEY (id);
-
- ALTER TABLE ONLY public.events
-     ADD CONSTRAINT events_event_type_fk FOREIGN KEY (event_type_id)
-         REFERENCES public.event_types (id);
-
- CREATE UNIQUE INDEX idx_event_types_type ON public.event_types USING btree (type);
- CREATE INDEX idx_events_event_type_id ON public.events USING btree (event_type_id);
data/db/migrations/4_populate_event_types.rb DELETED
@@ -1,11 +0,0 @@
- # frozen_string_literal: true
-
- PgEventstore.connection.with do |conn|
-   types = conn.exec('select type from events group by type').to_a.map { |attrs| attrs['type'] }
-   types.each.with_index(1) do |type, index|
-     id = conn.exec_params('SELECT id FROM event_types WHERE type = $1', [type]).to_a.first['id']
-     id ||= conn.exec_params('INSERT INTO event_types (type) VALUES ($1) RETURNING *', [type]).to_a.first['id']
-     conn.exec_params('UPDATE events SET event_type_id = $1 WHERE type = $2 AND event_type_id IS NULL', [id, type])
-     puts "Processed #{index} types of #{types.size}"
-   end
- end
data/db/migrations/5_adjust_indexes.sql DELETED
@@ -1,6 +0,0 @@
- CREATE INDEX idx_events_global_position ON public.events USING btree (global_position);
-
- DROP INDEX idx_events_stream_id_and_type_and_revision;
- DROP INDEX idx_events_type_and_stream_id_and_position;
- DROP INDEX idx_events_global_position_including_type;
- DROP INDEX idx_events_type_and_position;
data/db/migrations/6_change_events_event_type_id_null_constraint.sql DELETED
@@ -1 +0,0 @@
- ALTER TABLE public.events ALTER COLUMN event_type_id SET NOT NULL;
data/db/migrations/7_change_events_type_constraint.sql DELETED
@@ -1 +0,0 @@
- ALTER TABLE public.events ALTER COLUMN type DROP NOT NULL;
data/db/migrations/8_drop_events_type.sql DELETED
@@ -1 +0,0 @@
- ALTER TABLE public.events DROP COLUMN type;