elasticsearch-embedded 0.1.0

@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: f7265763b028eb234cf4ab3a26648a29822075a2
+   data.tar.gz: 9d4ec60f426bbcd6d268df09e75a3cc107a4ad16
+ SHA512:
+   metadata.gz: c002d9517dd909df2e743f3c4197dc7843d2f65ba3eb1e424f38c39fada1149e6f44bd75a2005b9026d2d828b1c4c3597aa5e42248f899e2ed7ca1a955be1244
+   data.tar.gz: 9fba9f55a6b0b1ad49419b3d0be76ebd9dbad5efc22503e09945c65fb060480c35edb8baa058543825ea1099200674b058efd46f94e254f5ea8b7380f3c3488c
data/.gitignore ADDED
@@ -0,0 +1,22 @@
+ *.gem
+ *.rbc
+ .bundle
+ .config
+ .yardoc
+ Gemfile.lock
+ InstalledFiles
+ _yardoc
+ coverage
+ doc/
+ lib/bundler/man
+ pkg
+ rdoc
+ spec/reports
+ test/tmp
+ test/version_tmp
+ tmp
+ *.bundle
+ *.so
+ *.o
+ *.a
+ mkmf.log
data/.rspec ADDED
@@ -0,0 +1,2 @@
+ --color
+ --require spec_helper
data/.travis.yml ADDED
@@ -0,0 +1,5 @@
+ language: ruby
+ rvm:
+   - 1.9.3
+   - 2.0.0
+   - 2.1.0
@@ -0,0 +1,3 @@
+ ## 0.1.0, released 2014-06-25
+
+ * Initial version
data/Gemfile ADDED
@@ -0,0 +1,14 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in elasticsearch-embedded.gemspec
+ gemspec
+
+ # Contains a fix for open-uri
+ gem 'fakefs', github: 'defunkt/fakefs'
+
+ # Used for code coverage reports
+ gem 'coveralls', require: false
+
+ # Useful for debugging
+ gem 'pry'
+ gem 'awesome_print'
@@ -0,0 +1,22 @@
+ Copyright (c) 2014 Fabio Napoleoni
+
+ MIT License
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,132 @@
+ # Elasticsearch::Embedded
+
+ [![Gem Version](https://badge.fury.io/rb/elasticsearch-embedded.png)](http://badge.fury.io/rb/elasticsearch-embedded) [![Build Status](https://travis-ci.org/fabn/elasticsearch-embedded.svg?branch=master)](https://travis-ci.org/fabn/elasticsearch-embedded) [![Coverage Status](https://coveralls.io/repos/fabn/elasticsearch-embedded/badge.png)](https://coveralls.io/r/fabn/elasticsearch-embedded)
+
+ This gem allows you to download and run Elasticsearch (as a single node or a local cluster) within a local folder.
+
+ It also provides some utilities for use in RSpec integration tests.
+
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+     gem 'elasticsearch-embedded'
+
+ And then execute:
+
+     $ bundle
+
+ Or install it yourself as:
+
+     $ gem install elasticsearch-embedded
+
+ ## Usage
+
+ ### Standalone mode
+
+ After installing the gem you can run the `embedded-elasticsearch` executable, which accepts several options for cluster configuration:
+
+ ```
+ $ embedded-elasticsearch -h
+ Usage: embedded-elasticsearch [options]
+     -w, --working-dir=WORKING_DIR   Elasticsearch working directory (default: `Dir.tmpdir` or `Rails.root.join("tmp")` within rails applications)
+     -p, --port=PORT                 Port on which to run elasticsearch (default: 9250)
+     -c, --cluster-name=NAME         Cluster name (default: elasticsearch_test)
+     -n, --nodes=NODES               Number of nodes started in the cluster (default: 1)
+         --timeout=TIMEOUT           Timeout when starting the cluster (default: 30)
+     -l, --log-level=LEVEL           Logger verbosity, numbers allowed (1..5) or level names (debug, info, warn, error, fatal)
+     -q, --quiet                     Disable stdout logging
+     -S, --show-es-output            Enable elasticsearch output in stdout
+     -V VERSION                      Elasticsearch version to use (default 1.2.1)
+     -P                              Configure cluster to persist data across restarts
+     -h, --help                      Show this message
+     -v, --version                   Show gem version
+ ```
+
+ In order to start a single-node cluster (with in-memory indices), just run:
+
+ ```
+ $ embedded-elasticsearch -w tmp
+ Starting ES 1.2.1 cluster with working directory set to /Users/fabio/work/elasticsearch-embedded/tmp. Process pid is 57245
+ Downloading elasticsearch 1.2.1 | ᗧ| 100% (648 KB/sec) Time: 00:00:34
+ Starting 1 Elasticsearch nodes........
+ --------------------------------------------------------------------------------
+ Cluster: elasticsearch_test
+ Status: green
+ Nodes: 1
+ + node-1 | version: 1.2.1, pid: 57254, address: inet[/0:0:0:0:0:0:0:0:9250]
+
+ # Your cluster is running and listening on port 9250
+ ```
+
+ ### Usage with foreman
+
+ ```
+ $ cat Procfile
+ elasticsearch: embedded-elasticsearch -w tmp
+ $ foreman start
+ 14:53:51 elasticsearch.1 | started with pid 57524
+ 14:53:51 elasticsearch.1 | Starting ES 1.2.1 cluster with working directory set to /Users/fabionapoleoni/Desktop/work/RubyMine/elasticsearch-embedded/tmp. Process pid is 57524
+ 14:53:57 elasticsearch.1 | Starting 1 Elasticsearch nodes........
+ 14:53:57 elasticsearch.1 | --------------------------------------------------------------------------------
+ 14:53:57 elasticsearch.1 | Cluster: elasticsearch_test
+ 14:53:57 elasticsearch.1 | Status: green
+ 14:53:57 elasticsearch.1 | Nodes: 1
+ 14:53:57 elasticsearch.1 | + node-1 | version: 1.2.1, pid: 57528, address: inet[/0:0:0:0:0:0:0:0:9250]
+ ^CSIGINT received
+ 14:54:02 system | sending SIGTERM to all processes
+ 14:54:02 elasticsearch.1 | exited with code 0%
+ ```
+
+ ### With RSpec
+
+ ```ruby
+ # In spec/spec_helper.rb
+ require 'elasticsearch-embedded'
+ # Activate gem behavior for specs tagged with :elasticsearch
+ Elasticsearch::Embedded::RSpec.configure
+ # Alternatively, specify which tags should enable gem support
+ Elasticsearch::Embedded::RSpec.configure_with :search
+
+ # In tagged specs
+ describe Something, :elasticsearch do
+
+   before(:each) do
+     # all indices are deleted automatically at the beginning of each spec
+     expect(client.indices.get_settings).to be_empty
+   end
+
+   # If the Elasticsearch client is defined (i.e. with gem 'elasticsearch' or require 'elasticsearch')
+   it 'should make elastic search client available' do
+     expect(client).to be_an_instance_of(::Elasticsearch::Transport::Client)
+     # Create a document in the test cluster
+     client.index index: 'test', type: 'test-type', id: 1, body: {title: 'Test'}
+   end
+
+   # The cluster object is also exposed via a helper
+   it 'should make cluster object available' do
+     expect(cluster).to be_an_instance_of(::Elasticsearch::Embedded::Cluster)
+   end
+
+   # If the Elasticsearch client is not defined, client returns a URI instance with the base URL of the cluster
+   it 'should return cluster uri when elasticsearch is not defined' do
+     ::Elasticsearch.send(:remove_const, :Client)
+     expect(client).to be_an_instance_of(::URI::HTTP)
+     expect(Net::HTTP.get(client)).to include('You Know, for Search')
+   end
+
+ end
+
+ ```
+
+ ### With Test::Unit/minitest
+
+ Pull requests are welcome.
+
+ ## Contributing
+
+ 1. Fork it ( https://github.com/fabn/elasticsearch-embedded/fork )
+ 2. Create your feature branch (`git checkout -b my-new-feature`)
+ 3. Commit your changes (`git commit -am 'Add some feature'`)
+ 4. Push to the branch (`git push origin my-new-feature`)
+ 5. Create a new Pull Request
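
The README leaves Test::Unit/minitest support open ("pull requests are welcome"). A minimal manual sketch is possible with the public `Cluster` API shown in `lib/elasticsearch/embedded/cluster.rb` further down; the `test/test_helper.rb` path, the `CLUSTER` constant and the minitest hooks below are illustrative assumptions, not part of the gem:

```ruby
# test/test_helper.rb -- hypothetical wiring, not shipped with the gem
require 'minitest/autorun'
require 'elasticsearch-embedded'

CLUSTER = Elasticsearch::Embedded::Cluster.new
CLUSTER.working_dir = File.expand_path('../../tmp', __FILE__)
CLUSTER.ensure_started!              # boot the node unless one is already running
Minitest.after_run { CLUSTER.stop }  # shut the node down when the suite ends

class SearchTestCase < Minitest::Test
  def setup
    CLUSTER.delete_all_indices!      # start every test from a clean cluster
  end
end
```
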
data/Rakefile ADDED
@@ -0,0 +1,7 @@
+ require 'bundler/gem_tasks'
+
+ require 'rspec/core/rake_task'
+
+ RSpec::Core::RakeTask.new(:spec)
+
+ task default: :spec
data/bin/embedded-elasticsearch ADDED
@@ -0,0 +1,66 @@
+ #!/usr/bin/env ruby
+
+ require 'elasticsearch-embedded'
+ require 'optparse'
+
+ cluster = Elasticsearch::Embedded::Cluster.new
+
+ OptionParser.new do |opts|
+
+   opts.on '-w', '--working-dir=WORKING_DIR', 'Elasticsearch working directory (default: `Dir.tmpdir` or `Rails.root.join("tmp")` within rails applications)' do |wd|
+     cluster.working_dir = File.expand_path(wd)
+   end
+
+   opts.on '-p', '--port=PORT', Integer, 'Port on which to run elasticsearch (default: 9250)' do |p|
+     cluster.port = p
+   end
+
+   opts.on '-c', '--cluster-name=NAME', 'Cluster name (default: elasticsearch_test)' do |cn|
+     cluster.cluster_name = cn
+   end
+
+   opts.on '-n', '--nodes=NODES', Integer, 'Number of nodes started in the cluster (default: 1)' do |n|
+     cluster.nodes = n
+   end
+
+   opts.on '--timeout=TIMEOUT', Integer, 'Timeout when starting the cluster (default: 30)' do |t|
+     cluster.timeout = t
+   end
+
+   opts.on '-l', '--log-level=LEVEL', "Logger verbosity, numbers allowed (1..5) or level names (#{Logging::LEVELS.keys.join(', ')})" do |l|
+     ::Elasticsearch::Embedded.verbosity(l)
+   end
+
+   opts.on '-q', '--quiet', 'Disable stdout logging' do
+     ::Elasticsearch::Embedded.mute!
+   end
+
+   opts.on '-S', '--show-es-output', 'Enable elasticsearch output in stdout' do
+     cluster.verbose = true
+   end
+
+   opts.on '-V VERSION', "Elasticsearch version to use (default #{Elasticsearch::Embedded::Downloader::DEFAULT_VERSION})" do |v|
+     cluster.version = v
+   end
+
+   opts.on '-P', 'Configure cluster to persist data across restarts' do |p|
+     cluster.persistent = !!p
+   end
+
+   opts.on_tail '-h', '--help', 'Show this message' do
+     puts opts
+     exit
+   end
+
+   opts.on_tail '-v', '--version', 'Show gem version' do
+     puts "Gem version: #{Elasticsearch::Embedded::VERSION}"
+     puts "Elasticsearch version: #{cluster.version}"
+     exit
+   end
+
+ end.parse!
+
+ # Forward additional arguments to elasticsearch
+ cluster.additional_options = ARGV.join(' ') unless ARGV.empty?
+ # Start the cluster
+ cluster.start_and_wait!
data/elasticsearch-embedded.gemspec ADDED
@@ -0,0 +1,31 @@
+ # coding: utf-8
+ lib = File.expand_path('../lib', __FILE__)
+ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
+ require 'elasticsearch/embedded/version'
+
+ Gem::Specification.new do |spec|
+   spec.name          = 'elasticsearch-embedded'
+   spec.version       = Elasticsearch::Embedded::VERSION
+   spec.authors       = ['Fabio Napoleoni']
+   spec.email         = ['f.napoleoni@gmail.com']
+   spec.summary       = %q{Install an embedded version of elasticsearch into your project}
+   spec.homepage      = 'https://github.com/fabn/elasticsearch-embedded'
+   spec.license       = 'MIT'
+
+   spec.files         = `git ls-files -z`.split("\x0")
+   spec.executables   = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
+   spec.test_files    = spec.files.grep(%r{^(test|spec|features)/})
+   spec.require_paths = ['lib']
+
+   spec.required_ruby_version = '>= 1.9.2'
+
+   spec.add_runtime_dependency 'ruby-progressbar', '~> 1.5.1'
+   spec.add_runtime_dependency 'rubyzip', '~> 1.0.0'
+   spec.add_runtime_dependency 'logging', '~> 1.8.0'
+
+   spec.add_development_dependency 'bundler', '~> 1.6'
+   spec.add_development_dependency 'rake'
+   spec.add_development_dependency 'rspec', '~> 3.0.0'
+   spec.add_development_dependency 'fakefs', '~> 0.5.0'
+   spec.add_development_dependency 'elasticsearch', '~> 1.0.2'
+ end
data/lib/elasticsearch-embedded.rb ADDED
@@ -0,0 +1 @@
+ require 'elasticsearch/embedded'
data/lib/elasticsearch/embedded.rb ADDED
@@ -0,0 +1,15 @@
+ require 'elasticsearch/embedded/version'
+ require 'elasticsearch/embedded/logger_configuration'
+
+ module Elasticsearch
+   module Embedded
+
+     autoload :Downloader, 'elasticsearch/embedded/downloader'
+     autoload :Cluster, 'elasticsearch/embedded/cluster'
+     autoload :RSpec, 'elasticsearch/embedded/rspec_configuration'
+
+     # Configure logging for this module
+     extend LoggerConfiguration
+
+   end
+ end
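
This module also exposes the logging helpers used by the `bin/embedded-elasticsearch` script above (`verbosity` and `mute!`). A small sketch of tuning them from your own code; the argument form is assumed from the `-l`/`--log-level` option description, so treat it as illustrative:

```ruby
require 'elasticsearch-embedded'

# Assumed from the -l/--log-level description: accepts 1..5 or a level name.
Elasticsearch::Embedded.verbosity('debug')
# Equivalent of the -q/--quiet flag: disable stdout logging entirely.
Elasticsearch::Embedded.mute!
```
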
data/lib/elasticsearch/embedded/cluster.rb ADDED
@@ -0,0 +1,303 @@
+ require 'timeout'
+ require 'net/http'
+ require 'uri'
+ require 'json'
+
+ module Elasticsearch
+   module Embedded
+
+     # Class used to manage a local cluster of elasticsearch nodes
+     class Cluster
+
+       # Make logger method available
+       include Logging.globally
+
+       # Options for cluster
+       attr_accessor :port, :cluster_name, :nodes, :timeout, :persistent, :additional_options, :verbose
+
+       # Options for downloader
+       attr_accessor :downloader, :version, :working_dir
+
+       # Assign default values to options
+       def initialize
+         @nodes = 1
+         @port = 9250
+         @version = Downloader::DEFAULT_VERSION
+         @working_dir = Downloader::TEMPORARY_PATH
+         @timeout = 30
+         @cluster_name = 'elasticsearch_test'
+         @pids = []
+         @pids_lock = Mutex.new
+       end
+
+       # Start an elasticsearch cluster and return immediately
+       def start
+         @downloader = Downloader.download(version: version, path: working_dir)
+         start_cluster
+         apply_development_template! if persistent
+       end
+
+       # Start an elasticsearch cluster and wait until running, also register
+       # a signal handler to close the cluster on INT, TERM and QUIT signals
+       def start_and_wait!
+         # register handler before starting cluster
+         register_shutdown_handler
+         # start the cluster
+         start
+         # Wait for all child processes to end then return
+         Process.waitall
+       end
+
+       # Stop the cluster and return after all child processes are dead
+       def stop
+         logger.warn 'Cluster is still starting, wait until startup is complete before sending shutdown command' if @pids_lock.locked?
+         @pids_lock.synchronize do
+           http_object.post('/_shutdown', nil)
+           logger.debug 'Cluster stopped successfully using shutdown api'
+           Timeout.timeout(2) { Process.waitall }
+           # Reset running pids reader
+           @pids = []
+         end
+       rescue
+         logger.warn "Following processes are still alive #{pids}, killing them with signals"
+         # If the shutdown request fails, send signals to all processes still alive after 2 seconds
+         pids.each { |pid| wait_or_kill(pid) }
+       end
+
+       # Thread safe access to all spawned process pids
+       def pids
+         @pids_lock.synchronize { @pids }
+       end
+
+       # Start server unless it's running
+       def ensure_started!
+         start unless running?
+       end
+
+       # Returns true when started cluster is running
+       #
+       # @return Boolean
+       def running?
+         cluster_health = Timeout::timeout(0.25) { __get_cluster_health } rescue nil
+         # Response is present, cluster name is the same and number of nodes is the same
+         !!cluster_health && cluster_health['cluster_name'] == cluster_name && cluster_health['number_of_nodes'] == nodes
+       end
+
+       # Remove all indices in the cluster
+       #
+       # @return [Array<Net::HTTPResponse>] raw http responses
+       def delete_all_indices!
+         delete_index! :_all
+       end
+
+       # Remove the indices given as args
+       #
+       # @param [Array<String,Symbol>] args list of indices to delete
+       # @return [Array<Net::HTTPResponse>] raw http responses
+       def delete_index!(*args)
+         args.map { |index| http_object.request(Net::HTTP::Delete.new("/#{index}")) }
+       end
+
+       # Used for persistent clusters; otherwise the cluster won't reach green state because of missing replicas
+       def apply_development_template!
+         development_settings = {
+           template: '*',
+           settings: {
+             number_of_shards: 1,
+             number_of_replicas: 0,
+           }
+         }
+         # Create the template on cluster
+         http_object.put('/_template/development_template', JSON.dump(development_settings))
+       end
+
+       private
+
+       # Build command line to launch an instance
+       def build_command_line(instance_number)
+         [
+           downloader.executable,
+           '-D es.foreground=yes',
+           "-D es.cluster.name=#{cluster_name}",
+           "-D es.node.name=node-#{instance_number}",
+           "-D es.http.port=#{port + (instance_number - 1)}",
+           "-D es.gateway.type=#{cluster_options[:gateway_type]}",
+           "-D es.index.store.type=#{cluster_options[:index_store]}",
+           "-D es.path.data=#{cluster_options[:path_data]}-#{instance_number}",
+           "-D es.path.work=#{cluster_options[:path_work]}-#{instance_number}",
+           '-D es.network.host=0.0.0.0',
+           '-D es.discovery.zen.ping.multicast.enabled=true',
+           '-D es.script.disable_dynamic=false',
+           '-D es.node.test=true',
+           '-D es.node.bench=true',
+           additional_options,
+           verbose ? nil : '> /dev/null'
+         ].compact.join(' ')
+       end
+
+       # Spawn an elasticsearch process and return its pid
+       def launch_instance(instance_number = 1)
+         # Start the process within a new process group to avoid signal propagation
+         Process.spawn(build_command_line(instance_number), pgroup: true).tap do |pid|
+           logger.debug "Launched elasticsearch process with pid #{pid}, detaching it"
+           Process.detach pid
+         end
+       end
+
+       # Return running instances' pids, borrowed from Elasticsearch::Extensions::Test::Cluster.
+       # This method returns the elasticsearch node pids rather than the spawned command pids; they differ because
+       # of the elasticsearch shell wrapper used to launch the daemon
+       def nodes_pids
+         # Try to fetch node info from running cluster
+         nodes = JSON.parse(http_object.get('/_nodes/?process').body) rescue []
+         # Fetch pids from returned data
+         nodes.empty? ? nodes : nodes['nodes'].map { |_, info| info['process']['id'] }
+       end
+
+       def start_cluster
+         logger.info "Starting ES #{version} cluster with working directory set to #{working_dir}. Process pid is #{$$}"
+         if running?
+           logger.warn "Elasticsearch cluster already running on port #{port}"
+           wait_for_green(timeout)
+           return
+         end
+         # Launch single node instances of elasticsearch with synchronization
+         @pids_lock.synchronize do
+           1.upto(nodes).each do |i|
+             @pids << launch_instance(i)
+           end
+           # Wait for cluster green state before releasing lock
+           wait_for_green(timeout)
+           # Add started nodes pids to pid array
+           @pids.concat(nodes_pids)
+         end
+       end
+
+       def wait_or_kill(pid)
+         begin
+           Timeout::timeout(2) do
+             Process.kill(:TERM, pid)
+             logger.debug "Sent SIGTERM to process #{pid}"
+             Process.waitpid(pid)
+             logger.info "Process #{pid} exited successfully"
+           end
+         rescue Errno::ESRCH, Errno::ECHILD
+           # No such process or no child => process is already dead
+           logger.debug "Process with pid #{pid} is already dead"
+         rescue Timeout::Error
+           logger.info "Process #{pid} still running after 2 seconds, sending SIGKILL to it"
+           Process.kill(:KILL, pid) rescue nil
+         ensure
+           logger.debug "Removing #{pid} from running pids"
+           @pids_lock.synchronize { @pids.delete(pid) }
+         end
+       end
+
+       # Used as arguments for building command line to launch elasticsearch
+       def cluster_options
+         {
+           port: port,
+           nodes: nodes,
+           cluster_name: cluster_name,
+           timeout: timeout,
+           # command to run is taken from downloader object
+           command: downloader.executable,
+           # persistency options
+           gateway_type: persistent ? 'local' : 'none',
+           index_store: persistent ? 'mmapfs' : 'memory',
+           path_data: File.join(persistent ? downloader.working_dir : Dir.tmpdir, 'cluster_data'),
+           path_work: File.join(persistent ? downloader.working_dir : Dir.tmpdir, 'cluster_workdir'),
+         }
+       end
+
+       # Return an http object to make requests
+       def http_object
+         @http ||= Net::HTTP.new('localhost', port)
+       end
+
+       # Register a shutdown proc which handles INT, TERM and QUIT signals
+       def register_shutdown_handler
+         stopper = ->(sig) do
+           Thread.new do
+             logger.info "Received SIG#{Signal.signame(sig)}, quitting"
+             stop
+           end
+         end
+         # Stop cluster on Ctrl+C, TERM (foreman) or QUIT (other)
+         [:TERM, :INT, :QUIT].each { |sig| Signal.trap(sig, &stopper) }
+       end
+
+       # Waits until the cluster is green and prints information
+       #
+       # @example Wait until the cluster is green
+       #   cluster.wait_for_green
+       #
+       # @param (see #__wait_for_status)
+       #
+       # @return Boolean
+       #
+       def wait_for_green(timeout = 60)
+         __wait_for_status('green', timeout)
+       end
+
+       # Blocks the process and waits for the cluster to reach the given status.
+       #
+       # Prints information about the cluster once it is available.
+       #
+       # @param status [String] The status to wait for (yellow, green)
+       # @param timeout [Integer] The explicit timeout for the operation
+       #
+       # @api private
+       #
+       # @return Boolean
+       #
+       def __wait_for_status(status = 'green', timeout = 30)
+         Timeout::timeout(timeout) do
+           loop do
+             response = JSON.parse(http_object.get("/_cluster/health?wait_for_status=#{status}").body) rescue {}
+
+             # check response and return if ok
+             if response['status'] == status && nodes == response['number_of_nodes'].to_i
+               __print_cluster_info and break
+             end
+
+             logger.debug "Still waiting for #{status} status in #{cluster_name}"
+             sleep 1
+           end
+         end
+
+         true
+       end
+
+       # Print information about the cluster on STDOUT
+       #
+       # @api private
+       #
+       def __print_cluster_info
+         health = JSON.parse(http_object.get('/_cluster/health').body)
+         nodes = JSON.parse(http_object.get('/_nodes/process,http').body)
+         master = JSON.parse(http_object.get('/_cluster/state').body)['master_node']
+
+         logger.info '-' * 80
+         logger.info 'Cluster: '.ljust(12) + health['cluster_name'].to_s
+         logger.info 'Status: '.ljust(12) + health['status'].to_s
+         logger.info 'Nodes: '.ljust(12) + health['number_of_nodes'].to_s
+
+         nodes['nodes'].each do |id, info|
+           m = id == master ? '+' : '-'
+           logger.info ''.ljust(12) + "#{m} #{info['name']} | version: #{info['version']}, pid: #{info['process']['id']}, address: #{info['http']['bound_address']}"
+         end
+       end
+
+       # Tries to load cluster health information
+       #
+       # @api private
+       #
+       def __get_cluster_health
+         JSON.parse(http_object.get('/_cluster/health').body) rescue nil
+       end
+
+     end
+
+   end
+ end
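
For completeness, a short sketch of driving the `Cluster` class above programmatically rather than through the executable; the port and working directory values are arbitrary examples:

```ruby
require 'elasticsearch-embedded'

cluster = Elasticsearch::Embedded::Cluster.new
cluster.working_dir = '/tmp/es-embedded'   # where the distribution is downloaded and run
cluster.port        = 9250
cluster.nodes       = 1

cluster.start                 # download if needed, boot the node and wait for green
puts cluster.running?         # => true once the cluster reports the expected node count
cluster.delete_all_indices!   # wipe data between test runs
cluster.stop                  # shut the node down via the _shutdown API
```
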