synapse 0.9.1 → 0.10.0

data/README.md CHANGED
@@ -1,4 +1,5 @@
  [![Build Status](https://travis-ci.org/airbnb/synapse.png?branch=master)](https://travis-ci.org/airbnb/synapse)
+ [![Inline docs](http://inch-pages.github.io/github/airbnb/synapse.png)](http://inch-pages.github.io/github/airbnb/synapse)
 
  # Synapse #
 
@@ -20,7 +21,7 @@ One solution to this problem is a discovery service, like [Apache Zookeeper](htt
  However, Zookeeper and similar services have their own problems:
 
  * Service discovery is embedded in all of your apps; often, integration is not simple
- * The discovery layer itself it subject to failure
+ * The discovery layer itself is subject to failure
  * Requires additional servers/instances
 
  Synapse solves these difficulties in a simple and fault-tolerant way.
@@ -39,7 +40,7 @@ It is easy to write your own watchers for your use case, and we encourage submit
 
  ## Example Migration ##
 
- Lets suppose your rails application depends on a Postgre database instance.
+ Let's suppose your rails application depends on a Postgres database instance.
  The database.yaml file has the DB host and port hardcoded:
 
  ```yaml
@@ -126,8 +127,9 @@ The name is just a human-readable string; it will be used in logs and notificati
  Each value in the services hash is also a hash, and should contain the following keys:
 
  * `discovery`: how synapse will discover hosts providing this service (see below)
- * `default_servers`: the list of default servers providing this service; synapse uses these if none others can be discovered
+ * `default_servers`: the list of default servers providing this service; synapse uses these if no others can be discovered
  * `haproxy`: how will the haproxy section for this service be configured
+ * `shared_frontend`: optional: haproxy configuration directives for a shared http frontend (see below)
 
  #### Service Discovery ####
 
@@ -136,7 +138,7 @@ Put these into the `discovery` section of the service hash, with these options:
 
  ##### Stub #####
 
- The stub watcher, this is useful in situations where you only want to use the servers in the `default_servers` list.
+ The stub watcher is useful in situations where you only want to use the servers in the `default_servers` list.
  It has only one option:
 
  * `method`: stub
@@ -156,7 +158,7 @@ We assume that the data contains a hostname and a port for service servers.
 
  ##### Docker #####
 
- This watcher retrieves a list of [docker](http://www.docker.io/) containers via docker's [HTTP API](http://docs.docker.io/en/latest/api/docker_remote_api/).
+ This watcher retrieves a list of [docker](http://www.docker.io/) containers via docker's [HTTP API](http://docs.docker.io/en/latest/reference/api/docker_remote_api/).
  It takes the following options:
 
  * `method`: docker
@@ -176,11 +178,13 @@ Each hash in that section has the following options:
 
  The `default_servers` list is used only when service discovery returns no servers.
  In that case, the service proxy will be created with the servers listed here.
- If you do not list any default servers, no proxy will be created.
+ If you do not list any default servers, no proxy will be created. The
+ `default_servers` will also be used in addition to discovered servers if the
+ `keep_default_servers` option is set.
 
  #### The `haproxy` Section ####
 
- This section is it's own hash, which should contain the following keys:
+ This section is its own hash, which should contain the following keys:
 
  * `port`: the port (on localhost) where HAProxy will listen for connections to the service.
  * `server_port_override`: the port that discovered servers listen on; you should specify this if your discovery mechanism only discovers names or addresses (like the DNS watcher). If the discovery method discovers a port along with hostnames (like the zookeeper watcher) this option may be left out, but will be used in preference if given.
@@ -199,10 +203,101 @@ The `haproxy` section of the config file has the following options:
  * `do_reloads`: whether or not Synapse will reload HAProxy (default to `true`)
  * `global`: options listed here will be written into the `global` section of the HAProxy config
  * `defaults`: options listed here will be written into the `defaults` section of the HAProxy config
- * `bind_address`: force HAProxy to listen on this address (default is localhost)
+ * `bind_address`: force HAProxy to listen on this address (default is localhost)
+ * `shared_frontend`: (OPTIONAL) additional lines passed to the HAProxy config used to configure a shared HTTP frontend (see below)
 
  Note that a non-default `bind_address` can be dangerous: it is up to you to ensure that HAProxy will not attempt to bind an address:port combination that is not already in use by one of your services.
 
+ ### HAProxy shared HTTP Frontend ###
+
+ For HTTP-only services, it is not always necessary or desirable to dedicate a TCP port per service, since HAProxy can route traffic based on host headers.
+ To support this, the optional `shared_frontend` section can be added to both the `haproxy` section and each individual service definition: synapse will concatenate them all into a single frontend section in the generated haproxy.cfg file.
+ Note that synapse does not assemble the routing ACLs for you: you have to do that yourself based on your needs.
+ This is probably most useful in combination with the `service_conf_dir` directive in a case where the individual service config files are being distributed by a configuration manager such as puppet or chef, or bundled into service packages.
+ For example:
+
+ ```yaml
+ {
+   "haproxy": {
+     "shared_frontend": [
+       "bind 127.0.0.1:8081"
+     ],
+     "reload_command": "service haproxy reload",
+     "config_file_path": "/etc/haproxy/haproxy.cfg",
+     "socket_file_path": "/var/run/haproxy.sock",
+     "global": [
+       "daemon",
+       "user haproxy",
+       "group haproxy",
+       "maxconn 4096",
+       "log 127.0.0.1 local2 notice",
+       "stats socket /var/run/haproxy.sock"
+     ],
+     "defaults": [
+       "log global",
+       "balance roundrobin"
+     ]
+   },
+   "services": {
+     "service1": {
+       "discovery": {
+         "method": "zookeeper",
+         "path": "/nerve/services/service1",
+         "hosts": [ "0.zookeeper.example.com:2181" ]
+       },
+       "haproxy": {
+         "server_options": "check inter 2s rise 3 fall 2",
+         "shared_frontend": [
+           "acl is_service1 hdr_dom(host) -i service1.lb.example.com",
+           "use_backend service1 if is_service1"
+         ],
+         "backend": [
+           "mode http"
+         ]
+       }
+     },
+     "service2": {
+       "discovery": {
+         "method": "zookeeper",
+         "path": "/nerve/services/service2",
+         "hosts": [ "0.zookeeper.example.com:2181" ]
+       },
+       "haproxy": {
+         "server_options": "check inter 2s rise 3 fall 2",
+         "shared_frontend": [
+           "acl is_service2 hdr_dom(host) -i service2.lb.example.com",
+           "use_backend service2 if is_service2"
+         ],
+         "backend": [
+           "mode http"
+         ]
+       }
+     }
+   }
+ }
+ ```
+
+ This would produce an haproxy.cfg much like the following:
+
+ ```
+ backend service1
+   mode http
+   server server1.example.net:80 server1.example.net:80 check inter 2s rise 3 fall 2
+
+ backend service2
+   mode http
+   server server2.example.net:80 server2.example.net:80 check inter 2s rise 3 fall 2
+
+ frontend shared-frontend
+   bind 127.0.0.1:8081
+   acl is_service1 hdr_dom(host) -i service1.lb
+   use_backend service1 if is_service1
+   acl is_service2 hdr_dom(host) -i service2.lb
+   use_backend service2 if is_service2
+ ```
+
+ Non-HTTP backends such as MySQL or RabbitMQ will obviously continue to need their own dedicated ports.
+
 
  ## Contributing
 
  1. Fork it
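As described in the new shared-frontend section of the README above, synapse concatenates the base `shared_frontend` lines with each service's lines into one frontend stanza. A minimal Ruby sketch of that concatenation (illustrative only; `build_shared_frontend` is a hypothetical helper, not part of the gem):

```ruby
# Sketch: concatenate a base shared_frontend with each service's
# shared_frontend lines into one HAProxy frontend stanza.
# build_shared_frontend is a hypothetical helper, not synapse's API.
def build_shared_frontend(base_lines, services)
  stanza = ["frontend shared-frontend"]
  stanza += base_lines.map { |line| "\t#{line}" }
  services.each_value do |conf|
    lines = conf.fetch('haproxy', {}).fetch('shared_frontend', [])
    stanza += lines.map { |line| "\t#{line}" }
  end
  stanza.join("\n")
end

puts build_shared_frontend(
  ["bind 127.0.0.1:8081"],
  "service1" => { 'haproxy' => { 'shared_frontend' => [
    "acl is_service1 hdr_dom(host) -i service1.lb",
    "use_backend service1 if is_service1"
  ] } }
)
```

Remember that, as the README notes, the routing ACLs inside each service's `shared_frontend` block are your responsibility; synapse only concatenates the lines.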
data/bin/synapse CHANGED
@@ -37,7 +37,7 @@ def parseconfig(filename)
    raise ArgumentError, "config file does not exist:\n#{e.inspect}"
  rescue Errno::EACCES => e
    raise ArgumentError, "could not open config file:\n#{e.inspect}"
- rescue YAML::ParseError => e
+ rescue YAML::SyntaxError => e
    raise "config file #{filename} is not yaml:\n#{e.inspect}"
  end
  return c.to_ruby
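The rescue class change in this hunk reflects that Ruby's Psych-backed YAML module raises `YAML::SyntaxError` (an alias of `Psych::SyntaxError`) on malformed input; under Psych there is no `YAML::ParseError`, so the old rescue could never match. A small sketch (the method name is illustrative):

```ruby
require 'yaml'

# Invalid YAML raises Psych::SyntaxError, which is what the
# YAML::SyntaxError constant points at under Psych.
def parse_or_message(text)
  YAML.load(text)
  'ok'
rescue YAML::SyntaxError => e
  "not yaml: #{e.class}"
end

puts parse_or_message("key: value")   # well-formed document
puts parse_or_message("{ unbalanced") # unterminated flow mapping
```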
@@ -543,18 +543,35 @@ module Synapse
    # generates a new config based on the state of the watchers
    def generate_config(watchers)
      new_config = generate_base_config
+     shared_frontend_lines = generate_shared_frontend
 
      watchers.each do |watcher|
        @watcher_configs[watcher.name] ||= parse_watcher_config(watcher)
-
        new_config << generate_frontend_stanza(watcher, @watcher_configs[watcher.name]['frontend'])
        new_config << generate_backend_stanza(watcher, @watcher_configs[watcher.name]['backend'])
+       if watcher.haproxy.include?('shared_frontend')
+         if @opts['shared_frontend'] == nil
+           log.warn "synapse: service #{watcher.name} contains a shared frontend section but the base config does not! skipping."
+         else
+           shared_frontend_lines << validate_haproxy_stanza(watcher.haproxy['shared_frontend'].map{|l| "\t#{l}"}, "frontend", "shared frontend section for #{watcher.name}")
+         end
+       end
      end
+     new_config << shared_frontend_lines.flatten if shared_frontend_lines
 
      log.debug "synapse: new haproxy config: #{new_config}"
      return new_config.flatten.join("\n")
    end
 
+   # pull out the shared frontend section if any
+   def generate_shared_frontend
+     return nil unless @opts.include?('shared_frontend')
+     log.debug "synapse: found a shared frontend section"
+     shared_frontend_lines = ["\nfrontend shared-frontend"]
+     shared_frontend_lines << validate_haproxy_stanza(@opts['shared_frontend'].map{|l| "\t#{l}"}, "frontend", "shared frontend")
+     return shared_frontend_lines
+   end
+
    # generates the global and defaults sections of the config file
    def generate_base_config
      base_config = ["# auto-generated by synapse at #{Time.now}\n"]
@@ -593,20 +610,24 @@ module Synapse
      })
 
      # pick only those fields that are valid and warn about the invalid ones
-     config[section].select!{|setting|
-       parsed_setting = setting.strip.gsub(/\s+/, ' ').downcase
-       if @@section_fields[section].any? {|field| parsed_setting.start_with?(field)}
-         true
-       else
-         log.warn "synapse: service #{watcher.name} contains invalid #{section} setting: '#{setting}'"
-         false
-       end
-     }
+     config[section] = validate_haproxy_stanza(config[section], section, watcher.name)
    end
 
    return config
  end
 
+ def validate_haproxy_stanza(stanza, stanza_type, service_name)
+   return stanza.select {|setting|
+     parsed_setting = setting.strip.gsub(/\s+/, ' ').downcase
+     if @@section_fields[stanza_type].any? {|field| parsed_setting.start_with?(field)}
+       true
+     else
+       log.warn "synapse: service #{service_name} contains invalid #{stanza_type} setting: '#{setting}', discarding"
+       false
+     end
+   }
+ end
+
  # generates an individual stanza for a particular watcher
  def generate_frontend_stanza(watcher, config)
    unless watcher.haproxy.has_key?("port")
@@ -642,7 +663,7 @@ module Synapse
    # first, get a list of existing servers for various backends
    begin
      s = UNIXSocket.new(@opts['socket_file_path'])
-     s.write('show stat;')
+     s.write("show stat\n")
      info = s.read()
    rescue StandardError => e
      log.warn "synapse: unhandled error reading stats socket: #{e.inspect}"
@@ -690,9 +711,9 @@ module Synapse
    cur_backends.each do |section, backends|
      backends.each do |backend|
        if enabled_backends[section].include? backend
-         command = "enable server #{section}/#{backend};"
+         command = "enable server #{section}/#{backend}\n"
        else
-         command = "disable server #{section}/#{backend};"
+         command = "disable server #{section}/#{backend}\n"
        end
 
        # actually write the command to the socket
@@ -713,6 +734,8 @@ module Synapse
        end
      end
    end
+
+   log.info "synapse: reconfigured haproxy"
  end
 
  # writes the config
@@ -741,6 +764,7 @@ module Synapse
    # do the actual restart
    res = `#{opts['reload_command']}`.chomp
    raise "failed to reload haproxy via #{opts['reload_command']}: #{res}" unless $?.success?
+   log.info "synapse: restarted haproxy"
 
    @last_restart = Time.now()
    @restart_required = false
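The socket hunks above swap semicolon-suffixed strings for newline-terminated ones, because each command sent to HAProxy's stats socket is terminated by a newline. A sketch of the resulting command format (the helper name is illustrative, not part of synapse):

```ruby
# Sketch: format a newline-terminated command for the HAProxy stats socket.
# haproxy_socket_command is a hypothetical helper, not synapse's API.
def haproxy_socket_command(action, section, backend)
  "#{action} server #{section}/#{backend}\n"
end

puts haproxy_socket_command("disable", "service1", "server1.example.net").inspect
# => "disable server service1/server1.example.net\n"
```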
@@ -3,16 +3,18 @@ require "synapse/service_watcher/zookeeper"
  require "synapse/service_watcher/ec2tag"
  require "synapse/service_watcher/dns"
  require "synapse/service_watcher/docker"
+ require "synapse/service_watcher/zookeeper_dns"
 
  module Synapse
    class ServiceWatcher
 
      @watchers = {
-       'base'=>BaseWatcher,
-       'zookeeper'=>ZookeeperWatcher,
-       'ec2tag'=>EC2Watcher,
+       'base' => BaseWatcher,
+       'zookeeper' => ZookeeperWatcher,
+       'ec2tag' => EC2Watcher,
        'dns' => DnsWatcher,
-       'docker' => DockerWatcher
+       'docker' => DockerWatcher,
+       'zookeeper_dns' => ZookeeperDnsWatcher,
      }
 
      # the method which actually dispatches watcher creation requests
@@ -40,6 +40,8 @@ module Synapse
      @default_servers = opts['default_servers'] || []
      @backends = @default_servers
 
+     @keep_default_servers = opts['keep_default_servers'] || false
+
      # set a flag used to tell the watchers to exit
      # this is not used in every watcher
      @should_exit = false
@@ -91,5 +93,17 @@ module Synapse
 
      log.warn "synapse: warning: a stub watcher with no default servers is pretty useless" if @default_servers.empty?
    end
+
+   def set_backends(new_backends)
+     if @keep_default_servers
+       @backends = @default_servers + new_backends
+     else
+       @backends = new_backends
+     end
+   end
+
+   def reconfigure!
+     @synapse.reconfigure!
+   end
  end
end
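The `set_backends`/`keep_default_servers` behavior added above can be illustrated standalone (a minimal sketch, not the gem's actual `BaseWatcher` class):

```ruby
# Sketch of the new set_backends behavior: when keep_default_servers is
# set, discovered backends are appended to the defaults instead of
# replacing them. WatcherSketch is illustrative, not part of synapse.
class WatcherSketch
  attr_reader :backends

  def initialize(default_servers, keep_default_servers)
    @default_servers = default_servers
    @keep_default_servers = keep_default_servers
    @backends = @default_servers
  end

  def set_backends(new_backends)
    @backends = if @keep_default_servers
      @default_servers + new_backends
    else
      new_backends
    end
  end
end

w = WatcherSketch.new(['default1'], true)
w.set_backends(['discovered1'])
puts w.backends.inspect  # => ["default1", "discovered1"]
```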
@@ -15,7 +15,11 @@ module Synapse
    end
 
    def ping?
-     !(resolver.getaddresses('airbnb.com').empty?)
+     @watcher.alive? && !(resolver.getaddresses('airbnb.com').empty?)
+   end
+
+   def discovery_servers
+     @discovery['servers']
    end
 
    private
@@ -23,7 +27,7 @@ module Synapse
      raise ArgumentError, "invalid discovery method #{@discovery['method']}" \
        unless @discovery['method'] == 'dns'
      raise ArgumentError, "a non-empty list of servers is required" \
-       if @discovery['servers'].empty?
+       if discovery_servers.empty?
    end
 
    def watch
@@ -57,7 +61,7 @@ module Synapse
 
    def resolve_servers
      resolver.tap do |dns|
-       resolution = @discovery['servers'].map do |server|
+       resolution = discovery_servers.map do |server|
          addresses = dns.getaddresses(server['host']).map(&:to_s)
          [server, addresses.sort]
        end
@@ -79,7 +83,8 @@ module Synapse
      addresses.map do |address|
        {
          'host' => address,
-         'port' => server['port']
+         'port' => server['port'],
+         'name' => server['name'],
        }
      end
    end
@@ -95,9 +100,10 @@ module Synapse
        end
      else
        log.info "synapse: discovered #{new_backends.length} backends for service #{@name}"
-       @backends = new_backends
+       set_backends(new_backends)
      end
-     @synapse.reconfigure!
+
+     reconfigure!
    end
  end
@@ -111,9 +111,9 @@ module Synapse
        end
      else
        log.info "synapse: discovered #{new_backends.length} backends for service #{@name}"
-       @backends = new_backends
+       set_backends(new_backends)
      end
-     @synapse.reconfigure!
+     reconfigure!
    end
 
  end
@@ -88,7 +88,7 @@ module Synapse
        end
      else
        log.info "synapse: discovered #{new_backends.length} backends for service #{@name}"
-       @backends = new_backends
+       set_backends(new_backends)
      end
    end
 
@@ -108,7 +108,7 @@ module Synapse
      # Rediscover
      discover
      # send a message to calling class to reconfigure
-     @synapse.reconfigure!
+     reconfigure!
    end
  end
 
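The hunks above route backend updates through `set_backends` and `reconfigure!`; the DNS watcher additionally triggers a reconfigure only when resolution results actually change. A minimal sketch of that change-detection idea (names are illustrative, not the gem's internals):

```ruby
# Sketch: re-run resolution and report whether the result changed since
# the last pass. The resolver is passed in as a callable so the sketch
# needs no real DNS; watch_once is a hypothetical helper.
def watch_once(last_resolution, resolve)
  current = resolve.call
  changed = (current != last_resolution)
  [current, changed]
end

resolve = -> { [{ 'host' => '10.0.0.1', 'port' => 80 }] }
state, changed = watch_once(nil, resolve)
puts changed  # first pass always reports a change
_, changed_again = watch_once(state, resolve)
puts changed_again  # identical results report no change
```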
@@ -0,0 +1,232 @@
+ require 'synapse/service_watcher/base'
+ require 'synapse/service_watcher/dns'
+ require 'synapse/service_watcher/zookeeper'
+
+ require 'thread'
+
+ # Watcher for watching Zookeeper for entries containing DNS names that are
+ # continuously resolved to IP Addresses. The use case for this watcher is to
+ # allow services that are addressed by DNS to be reconfigured via Zookeeper
+ # instead of an update of the synapse config.
+ #
+ # The implementation builds on top of the existing DNS and Zookeeper watchers.
+ # This watcher creates a thread to manage the lifecycle of the DNS and
+ # Zookeeper watchers. This thread also publishes messages on a queue to
+ # indicate that DNS should be re-resolved (after the check interval) or that
+ # the DNS watcher should be shut down. The Zookeeper watcher waits for changes
+ # in backends from zookeeper and publishes those changes on an internal queue
+ # consumed by the DNS watcher. The DNS watcher blocks on this queue waiting
+ # for messages indicating that new servers are available, the check interval
+ # has passed (triggering a re-resolve), or that the watcher should shut down.
+ # The DNS watcher is responsible for the actual reconfiguring of backends.
+ module Synapse
+   class ZookeeperDnsWatcher < BaseWatcher
+
+     # Valid messages that can be passed through the internal message queue
+     module Messages
+       class InvalidMessageError < RuntimeError; end
+
+       # Indicates new servers identified by DNS names to be resolved. This is
+       # sent from Zookeeper on events that modify the ZK node. The payload is
+       # an array of hashes containing {'host', 'port', 'name'}
+       class NewServers < Struct.new(:servers); end
+
+       # Indicates that DNS should be re-resolved. This is sent by the
+       # ZookeeperDnsWatcher thread every check_interval seconds to cause a
+       # refresh of the IP addresses.
+       class CheckInterval; end
+
+       # Indicates that the DNS watcher should shut down. This is sent when
+       # stop is called.
+       class StopWatcher; end
+
+       # Saved instances of message types with contents that cannot vary. This
+       # reduces object allocation.
+       STOP_WATCHER_MESSAGE = StopWatcher.new
+       CHECK_INTERVAL_MESSAGE = CheckInterval.new
+     end
+
+     class Dns < Synapse::DnsWatcher
+
+       # Overrides the discovery_servers method on the parent class
+       attr_accessor :discovery_servers
+
+       def initialize(opts={}, synapse, message_queue)
+         @message_queue = message_queue
+
+         super(opts, synapse)
+       end
+
+       def stop
+         @message_queue.push(Messages::STOP_WATCHER_MESSAGE)
+       end
+
+       def watch
+         last_resolution = nil
+         while true
+           # Blocks on message queue, the message will be a signal to stop
+           # watching, to check a new set of servers from ZK, or to re-resolve
+           # the DNS (triggered every check_interval seconds)
+           message = @message_queue.pop
+
+           log.debug "synapse: received message #{message.inspect}"
+
+           case message
+           when Messages::StopWatcher
+             break
+           when Messages::NewServers
+             self.discovery_servers = message.servers
+           when Messages::CheckInterval
+             # Proceed to re-resolve the DNS
+           else
+             raise Messages::InvalidMessageError,
+               "Received unrecognized message: #{message.inspect}"
+           end
+
+           # Empty servers means we haven't heard back from ZK yet or ZK is
+           # empty. This should only occur if we don't get results from ZK
+           # within check_interval seconds or if ZK is empty.
+           if self.discovery_servers.nil? || self.discovery_servers.empty?
+             log.warn "synapse: no backends for service #{@name}"
+           else
+             # Resolve DNS names with the nameserver
+             current_resolution = resolve_servers
+             unless last_resolution == current_resolution
+               last_resolution = current_resolution
+               configure_backends(last_resolution)
+             end
+           end
+         end
+       end
+
+       private
+
+       # Validation is skipped as it has already occurred in the parent watcher
+       def validate_discovery_opts
+       end
+     end
+
+     class Zookeeper < Synapse::ZookeeperWatcher
+       def initialize(opts={}, synapse, message_queue)
+         super(opts, synapse)
+
+         @message_queue = message_queue
+       end
+
+       # Overrides reconfigure! to cause the new list of servers to be messaged
+       # to the DNS watcher rather than invoking a synapse reconfigure directly
+       def reconfigure!
+         # push the new backends onto the queue
+         @message_queue.push(Messages::NewServers.new(@backends))
+       end
+
+       private
+
+       # Validation is skipped as it has already occurred in the parent watcher
+       def validate_discovery_opts
+       end
+     end
+
+     def start
+       dns_discovery_opts = @discovery.select do |k,_|
+         k == 'nameserver'
+       end
+
+       zookeeper_discovery_opts = @discovery.select do |k,_|
+         k == 'hosts' || k == 'path'
+       end
+
+       @check_interval = @discovery['check_interval'] || 30.0
+
+       @message_queue = Queue.new
+
+       @dns = Dns.new(
+         mk_child_watcher_opts(dns_discovery_opts),
+         @synapse,
+         @message_queue
+       )
+
+       @zk = Zookeeper.new(
+         mk_child_watcher_opts(zookeeper_discovery_opts),
+         @synapse,
+         @message_queue
+       )
+
+       @zk.start
+       @dns.start
+
+       @watcher = Thread.new do
+         until @should_exit
+           # Trigger a DNS resolve every @check_interval seconds
+           sleep @check_interval
+
+           # Only trigger the resolve if the queue is empty, every other message
+           # on the queue would either cause a resolve or stop the watcher
+           if @message_queue.empty?
+             @message_queue.push(Messages::CHECK_INTERVAL_MESSAGE)
+           end
+
+         end
+         log.info "synapse: zookeeper_dns watcher exited successfully"
+       end
+     end
+
+     def ping?
+       @watcher.alive? && @dns.ping? && @zk.ping?
+     end
+
+     def stop
+       super
+
+       @dns.stop
+       @zk.stop
+     end
+
+     def backends
+       @dns.backends
+     end
+
+     private
+
+     def validate_discovery_opts
+       unless @discovery['method'] == 'zookeeper_dns'
+         raise ArgumentError, "invalid discovery method #{@discovery['method']}"
+       end
+
+       unless @discovery['hosts']
+         raise ArgumentError, "missing or invalid zookeeper host for service #{@name}"
+       end
+
+       unless @discovery['path']
+         raise ArgumentError, "invalid zookeeper path for service #{@name}"
+       end
+     end
+
+     # Method to generate a full config for the children (Dns and Zookeeper)
+     # watchers
+     #
+     # Notes on passing in the default_servers:
+     #
+     # Setting the default_servers here allows the Zookeeper watcher to return
+     # a list of backends based on the default servers when it fails to find
+     # any matching servers. These are passed on as the discovered backends
+     # to the DNS watcher, which will then watch them as normal for DNS
+     # changes. The default servers can also come into play if none of the
+     # hostnames from Zookeeper resolve to addresses in the DNS watcher. This
+     # should generally result in the expected behavior, but caution should be
+     # taken when deciding that this is the desired behavior.
+     def mk_child_watcher_opts(discovery_opts)
+       {
+         'name' => @name,
+         'haproxy' => @haproxy,
+         'discovery' => discovery_opts,
+         'default_servers' => @default_servers,
+       }
+     end
+
+     # Override reconfigure! as this class should not explicitly reconfigure
+     # synapse
+     def reconfigure!
+     end
+   end
+ end
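The new watcher above coordinates its child watchers through typed messages on a `Queue`, dispatching on the message's class. A self-contained sketch of that pattern (the class names mirror the watcher's `Messages` module but this is illustrative, not the gem's code):

```ruby
# Sketch of the typed-message queue pattern used by the zookeeper_dns
# watcher: producers push message objects, a consumer dispatches on class.
NewServers = Struct.new(:servers)
class CheckInterval; end
class StopWatcher; end

# Drain the queue until a StopWatcher arrives, recording what was seen.
def consume(queue)
  log = []
  loop do
    case (message = queue.pop)
    when StopWatcher
      break
    when NewServers
      log << "servers: #{message.servers.join(',')}"
    when CheckInterval
      log << "re-resolve"
    else
      raise "unrecognized message: #{message.inspect}"
    end
  end
  log
end

q = Queue.new
q.push(NewServers.new(['a.example.com']))
q.push(CheckInterval.new)
q.push(StopWatcher.new)
p consume(q)  # => ["servers: a.example.com", "re-resolve"]
```

Because `Queue#pop` blocks when empty, the real watcher's consumer thread sleeps until a producer (the Zookeeper child or the interval timer) has something to say.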
@@ -1,3 +1,3 @@
  module Synapse
-   VERSION = "0.9.1"
+   VERSION = "0.10.0"
  end
@@ -41,6 +41,15 @@ describe Synapse::BaseWatcher do
      default_servers = ['server1', 'server2']
      let(:args) { testargs.merge({'default_servers' => default_servers}) }
      it('sets default backends to default_servers') { expect(subject.backends).to equal(default_servers) }
+
+     context "with keep_default_servers set" do
+       let(:args) { testargs.merge({'default_servers' => default_servers, 'keep_default_servers' => true}) }
+       let(:new_backends) { ['discovered1', 'discovered2'] }
+
+       it('keeps default_servers when setting backends') do
+         subject.send(:set_backends, new_backends)
+         expect(subject.backends).to eq(default_servers + new_backends)
+       end
+     end
    end
  end
-
data/spec/spec_helper.rb CHANGED
@@ -6,13 +6,13 @@
  # See http://rubydoc.info/gems/rspec-core/RSpec/Core/Configuration
  require "#{File.dirname(__FILE__)}/../lib/synapse"
  require 'pry'
- require 'support/config'
+ require 'support/configuration'
 
  RSpec.configure do |config|
    config.treat_symbols_as_metadata_keys_with_true_values = true
    config.run_all_when_everything_filtered = true
    config.filter_run :focus
-   config.include Config
+   config.include Configuration
 
    # Run specs in random order to surface order dependencies. If you find an
    # order dependency and want to debug it, you can fix the order by providing
@@ -1,6 +1,6 @@
  require "yaml"
 
- module Config
+ module Configuration
 
    def config
      @config ||= YAML::load_file(File.join(File.dirname(File.expand_path(__FILE__)), 'minimum.conf.yaml'))
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: synapse
  version: !ruby/object:Gem::Version
-   version: 0.9.1
+   version: 0.10.0
  prerelease:
  platform: ruby
  authors:
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2014-02-18 00:00:00.000000000 Z
+ date: 2014-05-07 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: zk
@@ -141,12 +141,13 @@ files:
  - lib/synapse/service_watcher/docker.rb
  - lib/synapse/service_watcher/ec2tag.rb
  - lib/synapse/service_watcher/zookeeper.rb
+ - lib/synapse/service_watcher/zookeeper_dns.rb
  - lib/synapse/version.rb
  - spec/lib/synapse/haproxy_spec.rb
  - spec/lib/synapse/service_watcher_base_spec.rb
  - spec/lib/synapse/service_watcher_docker_spec.rb
  - spec/spec_helper.rb
- - spec/support/config.rb
+ - spec/support/configuration.rb
  - spec/support/minimum.conf.yaml
  - synapse.gemspec
  homepage: ''
@@ -163,7 +164,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
      version: '0'
    segments:
    - 0
-   hash: 575568743231626432
+   hash: 2945433216079037469
  required_rubygems_version: !ruby/object:Gem::Requirement
    none: false
    requirements:
@@ -172,7 +173,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
      version: '0'
    segments:
    - 0
-   hash: 575568743231626432
+   hash: 2945433216079037469
  requirements: []
  rubyforge_project:
  rubygems_version: 1.8.23
@@ -184,5 +185,5 @@ test_files:
  - spec/lib/synapse/service_watcher_base_spec.rb
  - spec/lib/synapse/service_watcher_docker_spec.rb
  - spec/spec_helper.rb
- - spec/support/config.rb
+ - spec/support/configuration.rb
  - spec/support/minimum.conf.yaml