elasticgraph-local 0.17.1.4

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 67889904d465a3b0ab2d61c61cd65f23bd9f50a3681655b51cafba0341863231
+   data.tar.gz: 17b51c009ae9fdc26b86960ca21a1f3369210804f1d86276ea4396de4ce0dd80
+ SHA512:
+   metadata.gz: 4f2515eab25e53d9a3ad562d7a47d30b81f961de84b1df3896042be6386092dfbd1e2304389e1549da582d607f803cb68355e09ca2ff9149eff684d8366df1bc
+   data.tar.gz: 7251dc4f6bf5075d9c185098dae862a5be5e7df972c89c4e85d5fc1a7cec207a57078174a27be7c9fabf777d78238556e0025e55f6b9b4c445922eb7775aedd2
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2024 Block, Inc.
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,80 @@
+ # ElasticGraph::Local
+
+ Provides support for developing and running ElasticGraph applications locally.
+ These locally running ElasticGraph applications use 100% fake, generated data,
+ so you do not need to implement a publisher of real data.
+
+ ## Installation
+
+ Add `elasticgraph-local` to a new project's `Gemfile`:
+
+ ```ruby
+ source "https://rubygems.org"
+
+ group :development do
+   gem "factory_bot"
+   gem "faker"
+   gem "elasticgraph-local"
+ end
+ ```
+
+ As shown above, you can also pull in any gems that will help you
+ generate fake data. We tend to use [factory_bot](https://github.com/thoughtbot/factory_bot)
+ and [faker](https://github.com/faker-ruby/faker). `elasticgraph-local` should be defined
+ in the `development` group (you don't want to include it in any staging or production
+ deployment).
+
+ Next, install the `elasticgraph-local` rake tasks in your `Rakefile`, with code like:
+
+ ```ruby
+ require 'elastic_graph/local/rake_tasks'
+
+ ElasticGraph::Local::RakeTasks.new(
+   local_config_yaml: "config/settings/development.yaml",
+   path_to_schema: "config/schema.rb"
+ ) do |tasks|
+   tasks.define_fake_data_batch_for :widgets do |batch|
+     # Use faker/factory_bot etc here to generate fake data
+     # and add it to the `batch` array.
+     # You'll probably want to put that logic in another file
+     # and load it from here.
+   end
+ end
+ ```
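
For illustration, here is a minimal sketch of what the fake-data logic referenced in the Rakefile comment above could look like when kept in its own file. The file name, the record fields, and the `generate_widgets_batch` helper are assumptions for this sketch, not part of the gem:

```ruby
# e.g. lib/fake_data/widgets.rb (hypothetical file name)
require "faker"
require "securerandom"
require "time"

# Appends `size` fake widget records to the provided `batch` array.
# The field names here are illustrative; use whatever your schema defines.
def generate_widgets_batch(batch, size: 100)
  size.times do
    batch << {
      "id" => SecureRandom.uuid,
      "name" => Faker::Commerce.product_name,
      "created_at" => Faker::Time.backward(days: 30).utc.iso8601
    }
  end
end
```

With a file like that in place, the `define_fake_data_batch_for :widgets` block could simply `require_relative` it and call `generate_widgets_batch(batch)`.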
+
+ ## Usage
+
+ Everything you need is provided by rake tasks. Run the following to see what they are:
+
+ ```bash
+ $ bundle exec rake -T
+ ```
+
+ At a high level, this provides tasks that help you to:
+
+ 1. Boot Elasticsearch/OpenSearch (+ their corresponding dashboards) locally using the `opensearch:*`/`elasticsearch:*` tasks.
+ 2. Generate and validate ElasticGraph schema artifacts using the `schema_artifacts:*` tasks.
+ 3. Configure your locally running Elasticsearch/OpenSearch using the `clusters:configure:perform` task.
+ 4. Index fake data into Elasticsearch/OpenSearch (either running locally or on AWS) using the `index_fake_data:*` tasks.
+ 5. Boot the ElasticGraph GraphQL endpoint and GraphiQL in-browser UI using the `boot_graphiql` task.
+
+ If you just want to boot ElasticGraph locally without worrying about any of the details, run:
+
+ ```bash
+ $ bundle exec rake boot_locally
+ ```
+
+ That sequences each of the other tasks so that, with a single command, you can go from nothing to a
+ locally running ElasticGraph instance with data that you can query from your browser.
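
To make the sequencing concrete, here is a rough sketch of the kind of ordering `boot_locally` performs. The task names marked as assumed are illustrative only; run `bundle exec rake -T` to see the actual task names in your project:

```ruby
# Hypothetical sketch of sequencing the individual steps with Rake.
task :boot_locally_sketch do
  ["elasticsearch:daemon",        # assumed name for booting the datastore as a daemon
   "schema_artifacts:dump",       # assumed name for generating schema artifacts
   "clusters:configure:perform",  # listed above
   "index_fake_data:widgets",     # assumed; `widgets` is the batch name from the Rakefile example
   "boot_graphiql"].each do |task_name|
    Rake::Task[task_name].invoke
  end
end
```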
+
+ ### Managing Elasticsearch/OpenSearch
+
+ The `opensearch:`/`elasticsearch:` tasks will boot the desired Elasticsearch or OpenSearch version using Docker,
+ along with the corresponding dashboards (Kibana for Elasticsearch, OpenSearch Dashboards for OpenSearch). You can
+ use either the `:boot` or `:daemon` tasks:
+
+ * The `:boot` task keeps Elasticsearch/OpenSearch in the foreground, allowing you to see the logs.
+ * The `:daemon` task runs Elasticsearch/OpenSearch as a background daemon. Notably, it waits to return
+   until Elasticsearch/OpenSearch is ready to receive traffic.
+
+ If you use a `:daemon` task, you can later use the corresponding `:halt` task to stop the daemon.
@@ -0,0 +1,25 @@
+ # Copyright 2024 Block, Inc.
+ #
+ # Use of this source code is governed by an MIT-style
+ # license that can be found in the LICENSE file or at
+ # https://opensource.org/licenses/MIT.
+ #
+ # frozen_string_literal: true
+
+ require_relative "../gemspec_helper"
+
+ ElasticGraphGemspecHelper.define_elasticgraph_gem(gemspec_file: __FILE__, category: :local) do |spec, eg_version|
+   spec.summary = "Provides support for developing and running ElasticGraph applications locally."
+
+   spec.add_dependency "elasticgraph-admin", eg_version
+   spec.add_dependency "elasticgraph-graphql", eg_version
+   spec.add_dependency "elasticgraph-indexer", eg_version
+   spec.add_dependency "elasticgraph-rack", eg_version
+   spec.add_dependency "elasticgraph-schema_definition", eg_version
+   spec.add_dependency "rackup", "~> 2.1"
+   spec.add_dependency "rake", "~> 13.2"
+
+   spec.add_development_dependency "elasticgraph-elasticsearch", eg_version
+   spec.add_development_dependency "elasticgraph-opensearch", eg_version
+   spec.add_development_dependency "httpx", ">= 1.2.6", "< 2.0"
+ end
@@ -0,0 +1,15 @@
+ # Copyright 2024 Block, Inc.
+ #
+ # Use of this source code is governed by an MIT-style
+ # license that can be found in the LICENSE file or at
+ # https://opensource.org/licenses/MIT.
+ #
+ # frozen_string_literal: true
+
+ # This `config.ru` file is used by the `rake boot_graphiql` task.
+
+ require "elastic_graph/graphql"
+ require "elastic_graph/rack/graphiql"
+
+ graphql = ElasticGraph::GraphQL.from_yaml_file(ENV.fetch("ELASTICGRAPH_YAML_FILE"))
+ run ElasticGraph::Rack::GraphiQL.new(graphql)
@@ -0,0 +1,117 @@
+ # Copyright 2024 Block, Inc.
+ #
+ # Use of this source code is governed by an MIT-style
+ # license that can be found in the LICENSE file or at
+ # https://opensource.org/licenses/MIT.
+ #
+ # frozen_string_literal: true
+
+ require "timeout"
+
+ module ElasticGraph
+   module Local
+     # @private
+     class DockerRunner
+       def initialize(variant, port:, ui_port:, version:, env:, ready_log_line:, daemon_timeout:, output:)
+         @variant = variant
+         @port = port
+         @ui_port = ui_port
+         @version = version
+         @env = env
+         @ready_log_line = ready_log_line
+         @daemon_timeout = daemon_timeout
+         @output = output
+       end
+
+       def boot
+         # :nocov: -- this is actually covered via a call from `boot_as_daemon` but it happens in a forked process so simplecov doesn't see it.
+         halt
+
+         prepare_docker_compose_run "up" do |command|
+           exec(command) # we use `exec` so that our process is replaced with that one.
+         end
+         # :nocov:
+       end
+
+       def halt
+         prepare_docker_compose_run "down --volumes" do |command|
+           system(command)
+         end
+       end
+
+       def boot_as_daemon(halt_command:)
+         with_pipe do |read_io, write_io|
+           fork do
+             # :nocov: -- simplecov can't track coverage that happens in another process
+             read_io.close
+             Process.daemon
+             pid = Process.pid
+             $stdout.reopen(write_io)
+             $stderr.reopen(write_io)
+             puts pid
+             boot
+             write_io.close
+             # :nocov:
+           end
+
+           # The `Process.daemon` call in the subprocess changes the pid so we have to capture it this way instead of using
+           # the return value of `fork`.
+           pid = read_io.gets.to_i
+
+           @output.puts "Booting #{@variant}; monitoring logs for readiness..."
+
+           ::Timeout.timeout(
+             @daemon_timeout,
+             ::Timeout::Error,
+             <<~EOS
+               Timed out after #{@daemon_timeout} seconds. The expected "ready" log line[1] was not found in the logs.
+
+               [1] #{@ready_log_line.inspect}
+             EOS
+           ) do
+             loop do
+               sleep 0.01
+               line = read_io.gets
+               @output.puts line
+               break if @ready_log_line.match?(line.to_s)
+             end
+           end
+
+           @output.puts
+           @output.puts
+           @output.puts <<~EOS
+             Success! #{@variant} #{@version} (pid: #{pid}) has been booted for the #{@env} environment on port #{@port}.
+             It will continue to run in the background as a daemon. To halt it, run:
+
+             #{halt_command}
+           EOS
+         end
+       end
+
+       private
+
+       def prepare_docker_compose_run(*commands)
+         name = "#{@env}-#{@version.tr(".", "_")}"
+
+         full_command = commands.map do |command|
+           "VERSION=#{@version} PORT=#{@port} UI_PORT=#{@ui_port} ENV=#{@env} docker-compose --project-name #{name} #{command}"
+         end.join(" && ")
+
+         ::Dir.chdir(::File.join(__dir__.to_s, @variant.to_s)) do
+           yield full_command
+         end
+       end
+
+       def with_pipe
+         read_io, write_io = ::IO.pipe
+
+         begin
+           yield read_io, write_io
+         ensure
+           read_io.close
+           write_io.close
+         end
+       end
+     end
+   end
+ end
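
For context, here is a hedged sketch of how this class might be constructed and used. The argument values, the readiness regexp, and the halt command string are assumptions for illustration; the gem's rake tasks supply their own values:

```ruby
require "elastic_graph/local/docker_runner"

runner = ElasticGraph::Local::DockerRunner.new(
  :elasticsearch,             # variant; matches the `elasticsearch/` directory next to this file
  port: 9200,
  ui_port: 5601,
  version: "8.13.0",          # assumed version
  env: "dev",                 # assumed environment name
  ready_log_line: /started/,  # assumed readiness pattern matched against the container logs
  daemon_timeout: 120,
  output: $stdout
)

# Boots in the background and blocks until the ready log line appears,
# then prints the halt instructions composed in `boot_as_daemon`.
runner.boot_as_daemon(halt_command: "bundle exec rake elasticsearch:halt") # assumed halt command
```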
@@ -0,0 +1,3 @@
+ ARG VERSION
+ FROM elasticsearch:${VERSION}
+ RUN bin/elasticsearch-plugin install mapper-size
@@ -0,0 +1,2 @@
+ ARG VERSION
+ FROM kibana:${VERSION}
@@ -0,0 +1,74 @@
+ ---
+ networks:
+   default:
+     name: elastic
+     external: false
+ services:
+   elasticsearch:
+     build:
+       context: .
+       dockerfile: Dockerfile
+       args:
+         VERSION: ${VERSION}
+     container_name: elasticsearch-${VERSION}-${ENV}
+     environment:
+       # Note: we use `discovery.type=single-node` to ensure that the Elasticsearch node does not
+       # try to join a cluster (or let another node join it). This prevents problems when you
+       # have multiple projects using elasticgraph-local at the same time. You do not want
+       # their Elasticsearch nodes to try to join into a single cluster.
+       - discovery.type=single-node
+       # Note: we use `xpack.security.enabled=false` to silence an annoying warning Elasticsearch 7.13 has
+       # started spewing (as in hundreds of times!) as we run our test suite:
+       #
+       # > warning: 299 Elasticsearch-7.13.0-5ca8591c6fcdb1260ce95b08a8e023559635c6f3 "Elasticsearch built-in
+       # > security features are not enabled. Without authentication, your cluster could be accessible to anyone.
+       # > See https://www.elastic.co/guide/en/elasticsearch/reference/7.13/security-minimal-setup.html to enable
+       # > security."
+       #
+       # Since this is only used in local dev/test environments where the added security would make things harder
+       # (we'd have to set up credentials in our tests), it's simpler/better just to explicitly disable the security,
+       # which silences the warning.
+       - xpack.security.enabled=false
+       # We disable `xpack.ml` because it's not compatible with the `darwin-aarch64` distribution we use on M1 Macs.
+       # Without that flag, we get this error:
+       #
+       # > [2022-01-20T10:06:54,582][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [myron-macbookpro.local] uncaught exception in thread [main]
+       # > org.elasticsearch.bootstrap.StartupException: ElasticsearchException[Failure running machine learning native code. This could be due to running
+       # > on an unsupported OS or distribution, missing OS libraries, or a problem with the temp directory. To bypass this problem by running Elasticsearch
+       # > without machine learning functionality set [xpack.ml.enabled: false].]
+       #
+       # See also this github issue: https://github.com/elastic/elasticsearch/pull/68068
+       - xpack.ml.enabled=false
+       # We don't want Elasticsearch to block writes when the disk allocation passes a threshold for our local/test
+       # Elasticsearch we run using this docker setup.
+       # https://stackoverflow.com/a/75962819
+       #
+       # Without this, I frequently get `FORBIDDEN/10/cluster create-index blocked (api)` errors when running tests.
+       - cluster.routing.allocation.disk.threshold_enabled=false
+       # Necessary on Elasticsearch 8 since our test suites indiscriminately delete all documents
+       # between tests to sandbox the state of each test. Without this setting, we get errors like:
+       #
+       # > illegal_argument_exception: Wildcard expressions or all indices are not allowed
+       - action.destructive_requires_name=false
+       - ES_JAVA_OPTS=-Xms4g -Xmx4g
+     ulimits:
+       nofile:
+         soft: 65536
+         hard: 65536
+     volumes:
+       - elasticsearch:/usr/share/elasticsearch/data
+     ports:
+       - ${PORT:-9200}:9200
+   kibana:
+     build:
+       context: .
+       dockerfile: UI-Dockerfile
+       args:
+         VERSION: ${VERSION}
+     container_name: kibana-${VERSION}-${ENV}
+     environment:
+       - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
+     ports:
+       - ${UI_PORT:-5601}:5601
+ volumes:
+   elasticsearch:
@@ -0,0 +1,58 @@
+ # Copyright 2024 Block, Inc.
+ #
+ # Use of this source code is governed by an MIT-style
+ # license that can be found in the LICENSE file or at
+ # https://opensource.org/licenses/MIT.
+ #
+ # frozen_string_literal: true
+
+ require "elastic_graph/indexer/test_support/converters"
+
+ module ElasticGraph
+   module Local
+     # Responsible for coordinating the generation and indexing of fake data batches.
+     # Designed to be pluggable with different publishing strategies.
+     #
+     # @private
+     class IndexingCoordinator
+       PARALLELISM = 8
+
+       def initialize(fake_data_batch_generator, output: $stdout, &publish_batch)
+         @fake_data_batch_generator = fake_data_batch_generator
+         @publish_batch = publish_batch
+         @output = output
+       end
+
+       def index_fake_data(num_batches)
+         batch_queue = ::Thread::Queue.new
+
+         publishing_threads = Array.new(PARALLELISM) { new_publishing_thread(batch_queue) }
+
+         num_batches.times do
+           batch = [] # : ::Array[::Hash[::String, untyped]]
+           @fake_data_batch_generator.call(batch)
+           @output.puts "Generated batch of #{batch.size} documents..."
+           batch_queue << batch
+         end
+
+         publishing_threads.map { batch_queue << :done }
+         publishing_threads.each(&:join)
+
+         @output.puts "...done."
+       end
+
+       private
+
+       def new_publishing_thread(batch_queue)
+         ::Thread.new do
+           loop do
+             batch = batch_queue.pop
+             break if batch == :done
+             @publish_batch.call(ElasticGraph::Indexer::TestSupport::Converters.upsert_events_for_records(batch))
+             @output.puts "Published batch of #{batch.size} documents..."
+           end
+         end
+       end
+     end
+   end
+ end
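
Because the publishing strategy is supplied as a block, a different strategy can be plugged in without changing the coordinator (the `LocalIndexer` below does exactly this with an `ElasticGraph::Indexer`). As a hedged sketch, here is a hypothetical strategy that writes the upsert events to a local file instead of indexing them; the generator lambda, record fields, and file path are assumptions:

```ruby
require "json"
require "elastic_graph/local/indexing_coordinator"

# A hypothetical batch generator: fills the provided array with fake records.
generator = ->(batch) { 10.times { |i| batch << {"id" => i.to_s, "name" => "Widget #{i}"} } }

# Publish by appending each upsert event to an NDJSON file (illustrative only).
coordinator = ElasticGraph::Local::IndexingCoordinator.new(generator) do |events|
  File.open("fake_events.ndjson", "a") do |file|
    events.each { |event| file.puts(event.to_json) }
  end
end

coordinator.index_fake_data(3) # generate and "publish" 3 batches
```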
@@ -0,0 +1,28 @@
+ # Copyright 2024 Block, Inc.
+ #
+ # Use of this source code is governed by an MIT-style
+ # license that can be found in the LICENSE file or at
+ # https://opensource.org/licenses/MIT.
+ #
+ # frozen_string_literal: true
+
+ require "elastic_graph/indexer"
+ require "elastic_graph/local/indexing_coordinator"
+
+ module ElasticGraph
+   module Local
+     # @private
+     class LocalIndexer
+       def initialize(local_config_yaml, fake_data_batch_generator, output:)
+         @local_indexer = ElasticGraph::Indexer.from_yaml_file(local_config_yaml)
+         @indexing_coordinator = IndexingCoordinator.new(fake_data_batch_generator, output: output) do |batch|
+           @local_indexer.processor.process(batch)
+         end
+       end
+
+       def index_fake_data(num_batches)
+         @indexing_coordinator.index_fake_data(num_batches)
+       end
+     end
+   end
+ end
@@ -0,0 +1,4 @@
+ ARG VERSION
+ FROM opensearchproject/opensearch:${VERSION}
+ RUN /usr/share/opensearch/bin/opensearch-plugin remove opensearch-security
+ RUN /usr/share/opensearch/bin/opensearch-plugin install --batch mapper-size
@@ -0,0 +1,2 @@
+ ARG VERSION
+ FROM opensearchproject/opensearch-dashboards:${VERSION}
@@ -0,0 +1,50 @@
+ ---
+ networks:
+   default:
+     name: opensearch
+     external: false
+ services:
+   opensearch:
+     build:
+       context: .
+       dockerfile: Dockerfile
+       args:
+         VERSION: ${VERSION}
+     container_name: opensearch-${VERSION}-${ENV}
+     environment:
+       # Note: we use `discovery.type=single-node` to ensure that the OpenSearch node does not
+       # try to join a cluster (or let another node join it). This prevents problems when you
+       # have multiple projects using elasticgraph-local at the same time. You do not want
+       # their OpenSearch nodes to try to join into a single cluster.
+       - discovery.type=single-node
+       # recommended by https://opensearch.org/downloads.html#minimal
+       - bootstrap.memory_lock=true
+       # We don't want OpenSearch to block writes when the disk allocation passes a threshold for our local/test
+       # OpenSearch we run using this docker setup.
+       # https://stackoverflow.com/a/75962819
+       #
+       # Without this, I frequently get `FORBIDDEN/10/cluster create-index blocked (api)` errors when running tests.
+       - cluster.routing.allocation.disk.threshold_enabled=false
+       - OPENSEARCH_JAVA_OPTS=-Xms4g -Xmx4g
+     ulimits:
+       nofile:
+         soft: 65536
+         hard: 65536
+     volumes:
+       - opensearch:/usr/share/opensearch/data
+     ports:
+       - ${PORT}:9200
+   dashboards:
+     build:
+       context: .
+       dockerfile: UI-Dockerfile
+       args:
+         VERSION: ${VERSION}
+     container_name: dashboards-${VERSION}-${ENV}
+     environment:
+       - OPENSEARCH_HOSTS=http://opensearch:9200
+       - DISABLE_SECURITY_DASHBOARDS_PLUGIN=true
+     ports:
+       - ${UI_PORT}:5601
+ volumes:
+   opensearch: