synapse 0.14.7 → 0.15.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,15 +1,7 @@
1
1
  ---
2
- !binary "U0hBMQ==":
3
- metadata.gz: !binary |-
4
- OTM5MmQ3MDE5NDg5MzU4MjVhZjEyNTNhOGM3NzhlYmQ1ZmIyM2JlNg==
5
- data.tar.gz: !binary |-
6
- ZTRlOTFjZjUxZTNjYTgzMmQxYzI3NTg5ZWY4NGU2ZGZmZmQ2OTJhYw==
2
+ SHA1:
3
+ metadata.gz: a4fc7fdaab530154bb53815482fdac2ea5d316b2
4
+ data.tar.gz: 8c4d21c1492b5db666dab217b2e5170faa0638f4
7
5
  SHA512:
8
- metadata.gz: !binary |-
9
- OGQyNjNmNjFmMWFkM2RhZjMzNzk3N2JmNmE5ZjM2ZTdjMmMzNjg5MmFiZDQ3
10
- NDU0ZmQyMmU2NWM3Mzc1NmRjZjZlZTM4NjVkMTQxYjAxZGUzYTBkMzY5OThm
11
- ZDNhMDM0NGY0ZjVjNjUyY2QzYzIwMzA1ZjNhNzk1ODg4N2I4NTg=
12
- data.tar.gz: !binary |-
13
- NjFhY2UyODI3NDA2MjU1NzIxYWUyOGJmODdhNDFhOWRhNjcxZDI2OTkwMDg3
14
- YzJjMTQ2ZmMzZTNjZjI0YmM2ZjE2ZDVjNmNmMTkwYWM4YTA0MTQxMGJmOGFm
15
- OWU4YjAxZDFmNjc4MGIyYzFkMzY4MTg4OTg1MWQyNWY3MmM1ZDY=
6
+ metadata.gz: 448734796df25671d0db0c8f693f2e4820c9ae590c85859790137968d5f63870a5489e7cfc765beb02641e3999a284f54676f336d20467058da586d0d1c51d91
7
+ data.tar.gz: 991a90e100616daf2c5c7d941c642abea588a565266b126bbff01cb74dbd02e66e0e17907ab3ccdea8a044a3ac3ead937055b604446395ff8b6fbc6a9f840625
data/README.md CHANGED
@@ -34,11 +34,11 @@ are proven routing components like [HAProxy](http://haproxy.1wt.eu/) or [NGINX](
34
34
  For every external service that your application talks to, we assign a synapse local port on localhost.
35
35
  Synapse creates a proxy from the local port to the service, and you reconfigure your application to talk to the proxy.
36
36
 
37
- Under the hood, Synapse sports `service_watcher`s for service discovery and
37
+ Under the hood, Synapse supports `service_watcher`s for service discovery and
38
38
  `config_generators` for configuring local state (e.g. load balancer configs)
39
39
  based on that service discovery state.
40
40
 
41
- Synapse supports service discovery with with pluggable `service_watcher`s which
41
+ Synapse supports service discovery with pluggable `service_watcher`s which
42
42
  take care of signaling to the `config_generators` so that they can react and
43
43
  reconfigure to point at available servers on the fly.
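To make the watcher/generator split concrete, the sketch below shows a minimal synapse configuration wiring a `zookeeper` service_watcher to the `haproxy` config_generator. Service names, ports, paths, and commands are illustrative assumptions, not values taken from this diff.

```ruby
# Illustrative synapse config, written as the Ruby hash synapse would see after
# loading its JSON/YAML config file. All names and values here are assumptions.
config = {
  'services' => {
    'example_service' => {
      'discovery' => {                        # the service_watcher
        'method' => 'zookeeper',
        'hosts'  => ['localhost:2181'],
        'path'   => '/services/example_service',
      },
      'haproxy' => {                          # per-service input to the haproxy config_generator
        'port'           => 3213,             # local port the application talks to
        'server_options' => 'check inter 2000 rise 3 fall 2',
      },
    },
  },
  'haproxy' => {                              # global settings for the haproxy config_generator
    'config_file_path' => '/etc/haproxy/haproxy.cfg',
    'reload_command'   => 'service haproxy reload',
    'do_writes'  => true,
    'do_reloads' => true,
  },
}
```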
44
44
 
@@ -183,7 +183,7 @@ relevant routing component. For example if you want to only configure HAProxy an
183
183
  not NGINX for a particular service, you would pass ``disabled`` to the `nginx` section
184
184
  of that service's watcher config.
185
185
 
186
- * [`haproxy`](#haproxysvc): how will the haproxy section for this service be configured
186
+ * [`haproxy`](#haproxysvc): how will the haproxy section for this service be configured. If the corresponding watcher uses the `zookeeper` discovery method and the service publishes its `haproxy` config in ZK, the `haproxy` hash can be filled in or updated from the data stored at the ZK node.
187
187
  * [`nginx`](https://github.com/jolynch/synapse-nginx#service-watcher-config): how will the nginx section for this service be configured. **NOTE** to use this you must have the synapse-nginx [plugin](#plugins) installed.
188
188
 
189
189
  The services hash may contain the following additional keys:
@@ -221,7 +221,7 @@ Given a `label_filters`: `[{ "label": "cluster", "value": "dev", "condition": "e
221
221
 
222
222
  ##### Zookeeper #####
223
223
 
224
- This watcher retrieves a list of servers from zookeeper.
224
+ This watcher retrieves a list of servers, and optionally per-generator service config data, from zookeeper.
225
225
  It takes the following mandatory arguments:
226
226
 
227
227
  * `method`: zookeeper
@@ -230,6 +230,8 @@ It takes the following mandatory arguments:
230
230
 
231
231
  The watcher assumes that each node under `path` represents a service server.
232
232
 
233
+ The watcher assumes that the data (if any) stored at the znode `path` is a hash, where each key names a valid `config_generator` (e.g. `haproxy`) and each value is a hash used to configure that generator (see the example sketched below).
234
+
233
235
  The following arguments are optional:
234
236
 
235
237
  * `decode`: A hash containing configuration for how to decode the data found in zookeeper.
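For the generator config described above, the data stored at the znode `path` might look like the following. The values are borrowed from the spec fixtures added in this release and are purely illustrative; the only structural requirement is that each top-level key names an available `config_generator` and maps to a hash.

```ruby
# Illustrative znode data at `path` (JSON-encoded, matching the default decoder),
# shown as the equivalent Ruby hash.
{
  'haproxy' => {
    'port'           => 1111,
    'listen'         => ['mode http', 'option httpchk GET /health'],
    'server_options' => 'check inter 2000 rise 3 fall 2',
  },
  # Keys that do not name a known generator are logged and skipped;
  # non-hash values are replaced with an empty hash.
  'unknown_generator' => { 'key' => 'value' },
}
```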
@@ -5,6 +5,7 @@ require 'json'
5
5
  require 'socket'
6
6
  require 'digest/sha1'
7
7
  require 'set'
8
+ require 'hashdiff'
8
9
 
9
10
  class Synapse::ConfigGenerator
10
11
  class Haproxy < BaseGenerator
@@ -801,6 +802,8 @@ class Synapse::ConfigGenerator
801
802
  # should be enough for anyone right (famous last words)?
802
803
  MAX_SERVER_ID = (2**16 - 1).freeze
803
804
 
805
+ attr_reader :server_id_map, :state_cache
806
+
804
807
  def initialize(opts)
805
808
  super(opts)
806
809
 
@@ -845,8 +848,11 @@ class Synapse::ConfigGenerator
845
848
  @backends_cache = {}
846
849
  @watcher_revisions = {}
847
850
 
848
- @state_file_path = @opts['state_file_path']
849
- @state_file_ttl = @opts.fetch('state_file_ttl', DEFAULT_STATE_FILE_TTL).to_i
851
+ @state_cache = HaproxyState.new(
852
+ @opts['state_file_path'],
853
+ @opts.fetch('state_file_ttl', DEFAULT_STATE_FILE_TTL).to_i,
854
+ self
855
+ )
850
856
 
851
857
  # For giving consistent orders, even if they are random
852
858
  @server_order_seed = @opts.fetch('server_order_seed', rand(2000)).to_i
@@ -907,6 +913,10 @@ class Synapse::ConfigGenerator
907
913
  end
908
914
  end
909
915
 
916
+ def update_state_file(watchers)
917
+ @state_cache.update_state_file(watchers)
918
+ end
919
+
910
920
  # generates a new config based on the state of the watchers
911
921
  def generate_config(watchers)
912
922
  new_config = generate_base_config
@@ -914,8 +924,15 @@ class Synapse::ConfigGenerator
914
924
 
915
925
  watchers.each do |watcher|
916
926
  watcher_config = watcher.config_for_generator[name]
917
- @watcher_configs[watcher.name] ||= parse_watcher_config(watcher)
918
- next if watcher_config['disabled']
927
+ next if watcher_config.nil? || watcher_config.empty? || watcher_config['disabled']
928
+ @watcher_configs[watcher.name] = parse_watcher_config(watcher)
929
+
930
+ # if the watcher's config_for_generator has changed, trigger a restart
931
+ config_diff = HashDiff.diff(@state_cache.config_for_generator(watcher.name), watcher_config)
932
+ if !config_diff.empty?
933
+ log.info "synapse: restart required because config_for_generator changed. before: #{@state_cache.config_for_generator(watcher.name)}, after: #{watcher_config}"
934
+ @restart_required = true
935
+ end
919
936
 
920
937
  regenerate = watcher.revision != @watcher_revisions[watcher.name] ||
921
938
  @frontends_cache[watcher.name].nil? ||
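The restart check in this hunk leans on the `hashdiff` gem (pinned `~> 0.2.3` in the gemspec change further down). Below is a small standalone sketch of how that call behaves; the config values are made up for illustration.

```ruby
require 'hashdiff'  # hashdiff 0.2.x exposes the HashDiff constant used here

cached_config  = { 'port' => 1111, 'server_options' => 'check inter 2000' }
watcher_config = { 'port' => 1111, 'server_options' => 'check inter 3000' }

# HashDiff.diff returns an array of change tuples; an empty array means no difference.
diff = HashDiff.diff(cached_config, watcher_config)
# => [["~", "server_options", "check inter 2000", "check inter 3000"]]
restart_required = !diff.empty?   # => true, so the generator schedules an HAProxy restart
```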
@@ -1051,7 +1068,7 @@ class Synapse::ConfigGenerator
1051
1068
 
1052
1069
  # The ordering here is important. First we add all the backends in the
1053
1070
  # disabled state...
1054
- seen.fetch(watcher.name, []).each do |backend_name, backend|
1071
+ @state_cache.backends(watcher).each do |backend_name, backend|
1055
1072
  backends[backend_name] = backend.merge('enabled' => false)
1056
1073
  # We remember the haproxy_server_id from a previous reload here.
1057
1074
  # Note though that if live servers below define haproxy_server_id
@@ -1308,74 +1325,113 @@ class Synapse::ConfigGenerator
1308
1325
  ######################################
1309
1326
  # methods for managing the state file
1310
1327
  ######################################
1311
- def seen
1312
- # if we don't support the state file, return nothing
1313
- return {} if @state_file_path.nil?
1328
+ class HaproxyState
1329
+ include Synapse::Logging
1314
1330
 
1315
- # if we've never needed the backends, now is the time to load them
1316
- @seen = read_state_file if @seen.nil?
1331
+ # TODO: enable version in the Haproxy Cache File
1332
+ KEY_WATCHER_CONFIG_FOR_GENERATOR = "watcher_config_for_generator"
1333
+ NON_BACKENDS_KEYS = [KEY_WATCHER_CONFIG_FOR_GENERATOR]
1317
1334
 
1318
- @seen
1319
- end
1335
+ def initialize(state_file_path, state_file_ttl, haproxy)
1336
+ @state_file_path = state_file_path
1337
+ @state_file_ttl = state_file_ttl
1338
+ @haproxy = haproxy
1339
+ end
1320
1340
 
1321
- def update_state_file(watchers)
1322
- # if we don't support the state file, do nothing
1323
- return if @state_file_path.nil?
1324
-
1325
- log.info "synapse: writing state file"
1326
- timestamp = Time.now.to_i
1327
-
1328
- # Remove stale backends
1329
- seen.each do |watcher_name, backends|
1330
- backends.each do |backend_name, backend|
1331
- ts = backend.fetch('timestamp', 0)
1332
- delta = (timestamp - ts).abs
1333
- if delta > @state_file_ttl
1334
- log.info "synapse: expiring #{backend_name} with age #{delta}"
1335
- backends.delete(backend_name)
1336
- end
1341
+ def backends(watcher_name)
1342
+ if seen.key?(watcher_name)
1343
+ seen[watcher_name].select { |section, data| !NON_BACKENDS_KEYS.include?(section) }
1344
+ else
1345
+ {}
1337
1346
  end
1338
1347
  end
1339
1348
 
1340
- # Remove any services which no longer have any backends
1341
- seen.reject!{|watcher_name, backends| backends.keys.length == 0}
1349
+ def config_for_generator(watcher_name)
1350
+ cache_config = {}
1351
+ if seen.key?(watcher_name) && seen[watcher_name].key?(KEY_WATCHER_CONFIG_FOR_GENERATOR)
1352
+ cache_config = seen[watcher_name][KEY_WATCHER_CONFIG_FOR_GENERATOR]
1353
+ end
1342
1354
 
1343
- # Add backends from watchers
1344
- watchers.each do |watcher|
1345
- seen[watcher.name] ||= {}
1355
+ cache_config
1356
+ end
1346
1357
 
1347
- watcher.backends.each do |backend|
1348
- backend_name = construct_name(backend)
1349
- data = {
1350
- 'timestamp' => timestamp,
1351
- }
1352
- server_id = @server_id_map[watcher.name][backend_name].to_i
1353
- if server_id && server_id > 0 && server_id <= MAX_SERVER_ID
1354
- data['haproxy_server_id'] = server_id
1358
+ def update_state_file(watchers)
1359
+ # if we don't support the state file, do nothing
1360
+ return if @state_file_path.nil?
1361
+
1362
+ log.info "synapse: writing state file"
1363
+ timestamp = Time.now.to_i
1364
+
1365
+ # Remove stale backends
1366
+ seen.each do |watcher_name, data|
1367
+ backends(watcher_name).each do |backend_name, backend|
1368
+ ts = backend.fetch('timestamp', 0)
1369
+ delta = (timestamp - ts).abs
1370
+ if delta > @state_file_ttl
1371
+ log.info "synapse: expiring #{backend_name} with age #{delta}"
1372
+ data.delete(backend_name)
1373
+ end
1355
1374
  end
1375
+ end
1356
1376
 
1357
- seen[watcher.name][backend_name] = data.merge(backend)
1377
+ # Remove any services which no longer have any backends
1378
+ seen.reject!{|watcher_name, data| backends(watcher_name).keys.length == 0}
1379
+
1380
+ # Add backends and config from watchers
1381
+ watchers.each do |watcher|
1382
+ seen[watcher.name] ||= {}
1383
+
1384
+ watcher.backends.each do |backend|
1385
+ backend_name = @haproxy.construct_name(backend)
1386
+ data = {
1387
+ 'timestamp' => timestamp,
1388
+ }
1389
+ server_id = @haproxy.server_id_map[watcher.name][backend_name].to_i
1390
+ if server_id && server_id > 0 && server_id <= MAX_SERVER_ID
1391
+ data['haproxy_server_id'] = server_id
1392
+ end
1393
+
1394
+ seen[watcher.name][backend_name] = data.merge(backend)
1395
+ end
1396
+
1397
+ # Add config for generator from watcher
1398
+ if watcher.config_for_generator.key?(@haproxy.name)
1399
+ seen[watcher.name][KEY_WATCHER_CONFIG_FOR_GENERATOR] =
1400
+ watcher.config_for_generator[@haproxy.name]
1401
+ end
1358
1402
  end
1403
+
1404
+ # write the data!
1405
+ write_data_to_state_file(seen)
1359
1406
  end
1360
1407
 
1361
- # write the data!
1362
- write_data_to_state_file(seen)
1363
- end
1408
+ private
1364
1409
 
1365
- def read_state_file
1366
- # Some versions of JSON return nil on an empty file ...
1367
- JSON.load(File.read(@state_file_path)) || {}
1368
- rescue StandardError => e
1369
- # It's ok if the state file doesn't exist or contains invalid data
1370
- # The state file will be rebuilt automatically
1371
- {}
1372
- end
1410
+ def seen
1411
+ # if we don't support the state file, return nothing
1412
+ return {} if @state_file_path.nil?
1413
+
1414
+ # if we've never needed the backends, now is the time to load them
1415
+ @seen = read_state_file if @seen.nil?
1416
+
1417
+ @seen
1418
+ end
1373
1419
 
1374
- # we do this atomically so the state file is always consistent
1375
- def write_data_to_state_file(data)
1376
- tmp_state_file_path = @state_file_path + ".tmp"
1377
- File.write(tmp_state_file_path, JSON.pretty_generate(data))
1378
- FileUtils.mv(tmp_state_file_path, @state_file_path)
1420
+ def read_state_file
1421
+ # Some versions of JSON return nil on an empty file ...
1422
+ JSON.load(File.read(@state_file_path)) || {}
1423
+ rescue StandardError => e
1424
+ # It's ok if the state file doesn't exist or contains invalid data
1425
+ # The state file will be rebuilt automatically
1426
+ {}
1427
+ end
1428
+
1429
+ # we do this atomically so the state file is always consistent
1430
+ def write_data_to_state_file(data)
1431
+ tmp_state_file_path = @state_file_path + ".tmp"
1432
+ File.write(tmp_state_file_path, JSON.pretty_generate(data))
1433
+ FileUtils.mv(tmp_state_file_path, @state_file_path)
1434
+ end
1379
1435
  end
1380
1436
  end
1381
1437
  end
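With the HaproxyState class above, the JSON state file caches each watcher's generator config alongside its backends. A rough sketch of the resulting layout, with assumed backend names and values (the real backend key comes from `construct_name`):

```ruby
# Illustrative shape of the state file written by write_data_to_state_file,
# shown as the equivalent Ruby hash.
{
  'example_service' => {
    'example_service_somehost:5555' => {       # assumed backend-name format
      'timestamp'         => 1504742400,       # compared against state_file_ttl to expire stale entries
      'haproxy_server_id' => 12,               # remembered so server ids stay stable across reloads
      'host'              => 'somehost',
      'port'              => 5555,
    },
    'watcher_config_for_generator' => {        # reserved key, excluded by backends()
      'server_options' => 'check inter 2000 rise 3 fall 2',
    },
  },
}
```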
@@ -1,5 +1,6 @@
1
1
  require 'synapse/log'
2
2
  require 'set'
3
+ require 'hashdiff'
3
4
 
4
5
  class Synapse::ServiceWatcher
5
6
  class BaseWatcher
@@ -7,7 +8,7 @@ class Synapse::ServiceWatcher
7
8
 
8
9
  LEADER_WARN_INTERVAL = 30
9
10
 
10
- attr_reader :name, :config_for_generator, :revision
11
+ attr_reader :name, :revision
11
12
 
12
13
  def initialize(opts={}, synapse)
13
14
  super()
@@ -99,6 +100,11 @@ class Synapse::ServiceWatcher
99
100
  true
100
101
  end
101
102
 
103
+ # deep-clone the hash so callers cannot mutate the watcher's internal copy
104
+ def config_for_generator
105
+ Marshal.load( Marshal.dump(@config_for_generator))
106
+ end
107
+
102
108
  def backends
103
109
  filtered = backends_filtered_by_labels
104
110
 
@@ -152,7 +158,7 @@ class Synapse::ServiceWatcher
152
158
  end
153
159
  end
154
160
 
155
- def set_backends(new_backends)
161
+ def set_backends(new_backends, new_config_for_generator = {})
156
162
  # Aggregate and deduplicate all potential backend service instances.
157
163
  new_backends = (new_backends + @default_servers) if @keep_default_servers
158
164
  # Substitute backend_port_override for the provided port
@@ -165,7 +171,20 @@ class Synapse::ServiceWatcher
165
171
  [b['host'], b['port'], b.fetch('name', '')]
166
172
  }
167
173
 
174
+ backends_updated = update_backends(new_backends)
175
+ config_updated = update_config_for_generator(new_config_for_generator)
176
+
177
+ if backends_updated || config_updated
178
+ reconfigure!
179
+ return true
180
+ else
181
+ return false
182
+ end
183
+ end
184
+
185
+ def update_backends(new_backends)
168
186
  if new_backends.to_set == @backends.to_set
187
+ log.info "synapse: backends for service #{@name} do not change."
169
188
  return false
170
189
  end
171
190
 
@@ -192,11 +211,28 @@ class Synapse::ServiceWatcher
192
211
  @backends = new_backends
193
212
  end
194
213
 
195
- reconfigure!
196
-
197
214
  return true
198
215
  end
199
216
 
217
+ def update_config_for_generator(new_config_for_generator)
218
+ if new_config_for_generator.empty?
219
+ log.info "synapse: no config_for_generator data from #{name} for" \
220
+ " service #{@name}; keep existing config_for_generator: #{@config_for_generator.inspect}"
221
+ return false
222
+ else
223
+ log.info "synapse: discovered config_for_generator for service #{@name}"
224
+ diff = HashDiff.diff(new_config_for_generator, config_for_generator)
225
+
226
+ if diff.empty?
227
+ log.info "synapse: config_for_generator for service #{@name} does not change."
228
+ return false
229
+ else
230
+ @config_for_generator = new_config_for_generator
231
+ return true
232
+ end
233
+ end
234
+ end
235
+
200
236
  # Subclasses should not invoke this directly; it's only exposed so that it
201
237
  # can be overridden in subclasses.
202
238
  def reconfigure!
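Taken together, `update_backends` and `update_config_for_generator` mean `set_backends` now reports a change, and triggers `reconfigure!`, when either the backend list or the generator config differs. A brief illustration with assumed values (the sketch calls the method via `send`, as the specs below do):

```ruby
# Illustrative only: `watcher` is an already-constructed service watcher,
# and the backend/config values are made up.
backends = [{ 'name' => 'server1', 'host' => 'server1', 'port' => 123 }]
config   = { 'haproxy' => { 'port' => 1111, 'server_options' => 'check inter 2000' } }

watcher.send(:set_backends, backends, config)  # => true,  reconfigure! is invoked
watcher.send(:set_backends, backends, config)  # => false, nothing changed
watcher.send(:set_backends, backends, {})      # => false, empty config keeps the previous one
```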
@@ -157,7 +157,15 @@ class Synapse::ServiceWatcher
157
157
  end
158
158
  end
159
159
 
160
- set_backends(new_backends)
160
+ node = @zk.get(@discovery['path'], :watch => true)
161
+ begin
162
+ new_config_for_generator = parse_service_config(node.first)
163
+ rescue StandardError => e
164
+ log.error "synapse: invalid config data in ZK node at #{@discovery['path']}: #{e}"
165
+ new_config_for_generator = {}
166
+ end
167
+
168
+ set_backends(new_backends, new_config_for_generator)
161
169
  end
162
170
 
163
171
  # sets up zookeeper callbacks if the data at the discovery path changes
@@ -260,6 +268,37 @@ class Synapse::ServiceWatcher
260
268
 
261
269
  return host, port, name, weight, haproxy_server_options, labels
262
270
  end
271
+
272
+ def parse_service_config(data)
273
+ log.debug "synapse: deserializing process data"
274
+ if data.nil? || data.empty?
275
+ decoded = {}
276
+ else
277
+ decoded = @decode_method.call(data)
278
+ end
279
+
280
+ new_generator_config = {}
281
+ # validate the config. if the config is not empty:
282
+ # each key should be named by one of the available generators
283
+ # each value should be a hash (could be empty)
284
+ decoded.collect.each do |generator_name, generator_config|
285
+ if !@synapse.available_generators.keys.include?(generator_name)
286
+ log.error "synapse: invalid generator name in ZK node at #{@discovery['path']}:" \
287
+ " #{generator_name}"
288
+ next
289
+ else
290
+ if generator_config.nil? || !generator_config.is_a?(Hash)
291
+ log.warn "synapse: invalid generator config in ZK node at #{@discovery['path']}" \
292
+ " for generator #{generator_name}"
293
+ new_generator_config[generator_name] = {}
294
+ else
295
+ new_generator_config[generator_name] = generator_config
296
+ end
297
+ end
298
+ end
299
+
300
+ return new_generator_config
301
+ end
263
302
  end
264
303
  end
265
304
 
@@ -1,3 +1,3 @@
1
1
  module Synapse
2
- VERSION = "0.14.7"
2
+ VERSION = "0.15.0"
3
3
  end
@@ -33,6 +33,28 @@ describe Synapse::ConfigGenerator::Haproxy do
33
33
  mockWatcher
34
34
  end
35
35
 
36
+ let(:mockwatcher_with_non_haproxy_config) do
37
+ mockWatcher = double(Synapse::ServiceWatcher)
38
+ allow(mockWatcher).to receive(:name).and_return('example_service2')
39
+ backends = [{ 'host' => 'somehost', 'port' => 5555, 'haproxy_server_options' => 'id 12 backup'}]
40
+ allow(mockWatcher).to receive(:backends).and_return(backends)
41
+ allow(mockWatcher).to receive(:config_for_generator).and_return({
42
+ 'unknown' => {'server_options' => "check inter 2000 rise 3 fall 2"}
43
+ })
44
+ mockWatcher
45
+ end
46
+
47
+ let(:mockwatcher_with_empty_haproxy_config) do
48
+ mockWatcher = double(Synapse::ServiceWatcher)
49
+ allow(mockWatcher).to receive(:name).and_return('example_service2')
50
+ backends = [{ 'host' => 'somehost', 'port' => 5555, 'haproxy_server_options' => 'id 12 backup'}]
51
+ allow(mockWatcher).to receive(:backends).and_return(backends)
52
+ allow(mockWatcher).to receive(:config_for_generator).and_return({
53
+ 'haproxy' => {}
54
+ })
55
+ mockWatcher
56
+ end
57
+
36
58
  let(:mockwatcher_with_server_id) do
37
59
  mockWatcher = double(Synapse::ServiceWatcher)
38
60
  allow(mockWatcher).to receive(:name).and_return('server_id_svc')
@@ -316,6 +338,46 @@ describe Synapse::ConfigGenerator::Haproxy do
316
338
  subject.update_config(watchers)
317
339
  end
318
340
  end
341
+
342
+ context 'if watcher has empty or nil config_for_generator[haproxy]' do
343
+ let(:watchers) { [mockwatcher, mockwatcher_with_non_haproxy_config, mockwatcher_with_empty_haproxy_config] }
344
+
345
+ it 'does not generate config for those watchers' do
346
+ allow(subject).to receive(:parse_watcher_config).and_return({})
347
+ expect(subject).to receive(:generate_frontend_stanza).exactly(:once).with(mockwatcher, nil)
348
+ expect(subject).to receive(:generate_backend_stanza).exactly(:once).with(mockwatcher, nil)
349
+ subject.update_config(watchers)
350
+ end
351
+ end
352
+
353
+ context 'if watcher has a new different config_for_generator[haproxy]' do
354
+ let(:watchers) { [mockwatcher] }
355
+ let(:socket_file_path) { ['socket_file_path1', 'socket_file_path2'] }
356
+
357
+ before do
358
+ config['haproxy']['do_writes'] = true
359
+ config['haproxy']['do_reloads'] = true
360
+ config['haproxy']['do_socket'] = true
361
+ config['haproxy']['socket_file_path'] = socket_file_path
362
+ end
363
+
364
+ it 'triggers restart' do
365
+ allow(subject).to receive(:parse_watcher_config).and_return({})
366
+ allow(subject).to receive(:write_config).and_return(nil)
367
+
368
+ # set config_for_generator in state_cache to {}
369
+ allow(subject.state_cache).to receive(:config_for_generator).and_return({})
370
+
371
+ # make sure @restart_required is not triggered in other places
372
+ allow(subject).to receive(:update_backends_at).and_return(nil)
373
+ allow(subject).to receive(:generate_frontend_stanza).exactly(:once).with(mockwatcher, nil).and_return([])
374
+ allow(subject).to receive(:generate_backend_stanza).exactly(:once).with(mockwatcher, nil).and_return([])
375
+
376
+ expect(subject).to receive(:restart)
377
+
378
+ subject.update_config(watchers)
379
+ end
380
+ end
319
381
  end
320
382
 
321
383
  describe '#tick' do
@@ -329,31 +391,58 @@ describe Synapse::ConfigGenerator::Haproxy do
329
391
 
330
392
  describe '#update_state_file' do
331
393
  let(:watchers) { [mockwatcher, mockwatcher_with_server_options] }
394
+ let(:watchers_with_non_haproxy_config) { [mockwatcher_with_non_haproxy_config] }
332
395
  let(:state_file_ttl) { 60 } # seconds
333
396
 
334
397
  before do
335
398
  config['haproxy']['state_file_path'] = '/statefile'
336
399
  config['haproxy']['state_file_ttl'] = state_file_ttl
337
- allow(subject).to receive(:write_data_to_state_file)
400
+ allow(subject.state_cache).to receive(:write_data_to_state_file)
338
401
  end
339
402
 
340
403
  it 'adds backends along with timestamps' do
341
404
  subject.update_state_file(watchers)
342
- data = subject.send(:seen)
343
405
 
344
406
  watcher_names = watchers.map{ |w| w.name }
345
- expect(data.keys).to contain_exactly(*watcher_names)
407
+ expect(subject.state_cache.send(:seen).keys).to contain_exactly(*watcher_names)
346
408
 
347
409
  watchers.each do |watcher|
348
410
  backend_names = watcher.backends.map{ |b| subject.construct_name(b) }
349
- expect(data[watcher.name].keys).to contain_exactly(*backend_names)
411
+ data = subject.state_cache.backends(watcher.name)
412
+ expect(data.keys).to contain_exactly(*backend_names)
350
413
 
351
414
  backend_names.each do |backend_name|
352
- expect(data[watcher.name][backend_name]).to include('timestamp')
415
+ expect(data[backend_name]).to include('timestamp')
353
416
  end
354
417
  end
355
418
  end
356
419
 
420
+ it 'adds config_for_generator from watcher' do
421
+ subject.update_state_file(watchers)
422
+
423
+ watcher_names = watchers.map{ |w| w.name }
424
+ expect(subject.state_cache.send(:seen).keys).to contain_exactly(*watcher_names)
425
+
426
+ watchers.each do |watcher|
427
+ watcher_config_for_generator = watcher.config_for_generator
428
+ data = subject.state_cache.config_for_generator(watcher.name)
429
+ expect(data).to eq(watcher_config_for_generator["haproxy"])
430
+ end
431
+ end
432
+
433
+ it 'does not add config_for_generator of other generators from watcher' do
434
+ subject.update_state_file(watchers_with_non_haproxy_config)
435
+
436
+ watcher_names = watchers_with_non_haproxy_config.map{ |w| w.name }
437
+ expect(subject.state_cache.send(:seen).keys).to contain_exactly(*watcher_names)
438
+
439
+ watchers_with_non_haproxy_config.each do |watcher|
440
+ watcher_config_for_generator = watcher.config_for_generator
441
+ data = subject.state_cache.config_for_generator(watcher.name)
442
+ expect(data).to eq({})
443
+ end
444
+ end
445
+
357
446
  context 'when the state file contains backends not in the watcher' do
358
447
  it 'keeps them in the config' do
359
448
  subject.update_state_file(watchers)
@@ -363,7 +452,7 @@ describe Synapse::ConfigGenerator::Haproxy do
363
452
  allow(watcher).to receive(:backends).and_return([])
364
453
  end
365
454
  subject.update_state_file(watchers)
366
- end.to_not change { subject.send(:seen) }
455
+ end.to_not change { subject.state_cache.send(:seen) }
367
456
  end
368
457
 
369
458
  context 'if those backends are stale' do
@@ -377,9 +466,9 @@ describe Synapse::ConfigGenerator::Haproxy do
377
466
  # the final +1 puts us over the expiry limit
378
467
  Timecop.travel(Time.now + state_file_ttl + 1) do
379
468
  subject.update_state_file(watchers)
380
- data = subject.send(:seen)
381
469
  watchers.each do |watcher|
382
- expect(data[watcher.name]).to be_empty
470
+ data = subject.state_cache.backends(watcher.name)
471
+ expect(data).to be_empty
383
472
  end
384
473
  end
385
474
  end
@@ -53,6 +53,22 @@ describe Synapse::ServiceWatcher::BaseWatcher do
53
53
  {'name' => 'server1', 'host' => 'server1', 'port' => 123},
54
54
  {'name' => 'server2', 'host' => 'server2', 'port' => 123}
55
55
  ]
56
+ config_for_generator = {
57
+ "haproxy" => {
58
+ "frontend" => [
59
+ "binding ::1:1111"
60
+ ],
61
+ "listen" => [
62
+ "mode http",
63
+ "option httpchk GET /health",
64
+ "timeout client 300s",
65
+ "timeout server 300s",
66
+ "option httplog"
67
+ ],
68
+ "port" => 1111,
69
+ "server_options" => "check inter 60s fastinter 2s downinter 5s rise 3 fall 2",
70
+ }
71
+ }
56
72
  let(:args) { testargs.merge({'default_servers' => default_servers}) }
57
73
 
58
74
  it 'sets backends' do
@@ -61,6 +77,20 @@ describe Synapse::ServiceWatcher::BaseWatcher do
61
77
  expect(subject.backends).to eq(backends)
62
78
  end
63
79
 
80
+ it 'sets backends with config for generator' do
81
+ expect(subject).to receive(:'reconfigure!').exactly(:once)
82
+ expect(subject.send(:set_backends, backends, config_for_generator)).to equal(true)
83
+ expect(subject.backends).to eq(backends)
84
+ expect(subject.config_for_generator).to eq(config_for_generator)
85
+ end
86
+
87
+ it 'calls reconfigure for duplicate backends but different config_for_generator' do
88
+ allow(subject).to receive(:backends).and_return(backends)
89
+ expect(subject).to receive(:'reconfigure!').exactly(:once)
90
+ expect(subject.send(:set_backends, backends, config_for_generator)).to equal(true)
91
+ expect(subject.config_for_generator).to eq(config_for_generator)
92
+ end
93
+
64
94
  it 'removes duplicate backends' do
65
95
  expect(subject).to receive(:'reconfigure!').exactly(:once)
66
96
  duplicate_backends = backends + backends
@@ -74,6 +104,19 @@ describe Synapse::ServiceWatcher::BaseWatcher do
74
104
  expect(subject.backends).to eq(default_servers)
75
105
  end
76
106
 
107
+ it 'keeps the current config_for_generator if no config discovered from ZK' do
108
+ expect(subject).to receive(:'reconfigure!').exactly(:once)
109
+ # set config_for_generator to some valid config
110
+ expect(subject.send(:set_backends, backends, config_for_generator)).to equal(true)
111
+ expect(subject.backends).to eq(backends)
112
+ expect(subject.config_for_generator).to eq(config_for_generator)
113
+
114
+ # re-set config_for_generator to empty
115
+ expect(subject.send(:set_backends, backends, {})).to equal(false)
116
+ expect(subject.backends).to eq(backends)
117
+ expect(subject.config_for_generator).to eq(config_for_generator)
118
+ end
119
+
77
120
  context 'with no default_servers' do
78
121
  let(:args) { remove_arg 'default_servers' }
79
122
  it 'uses previous backends if no default_servers set' do
@@ -98,12 +141,14 @@ describe Synapse::ServiceWatcher::BaseWatcher do
98
141
  end
99
142
  end
100
143
 
101
- it 'calls reconfigure only once for duplicate backends' do
144
+ it 'calls reconfigure only once for duplicate backends and config_for_generator' do
102
145
  expect(subject).to receive(:'reconfigure!').exactly(:once)
103
- expect(subject.send(:set_backends, backends)).to equal(true)
146
+ expect(subject.send(:set_backends, backends, config_for_generator)).to equal(true)
104
147
  expect(subject.backends).to eq(backends)
105
- expect(subject.send(:set_backends, backends)).to equal(false)
148
+ expect(subject.config_for_generator).to eq(config_for_generator)
149
+ expect(subject.send(:set_backends, backends, config_for_generator)).to equal(false)
106
150
  expect(subject.backends).to eq(backends)
151
+ expect(subject.config_for_generator).to eq(config_for_generator)
107
152
  end
108
153
 
109
154
  context 'with keep_default_servers set' do
@@ -29,11 +29,52 @@ describe Synapse::ServiceWatcher::ZookeeperWatcher do
29
29
  'labels' => { 'az' => 'us-east-1a' }
30
30
  }
31
31
  end
32
+ let(:config_for_generator_haproxy) do
33
+ {
34
+ "frontend" => [
35
+ "binding ::1:1111"
36
+ ],
37
+ "listen" => [
38
+ "mode http",
39
+ "option httpchk GET /health",
40
+ "timeout client 300s",
41
+ "timeout server 300s",
42
+ "option httplog"
43
+ ],
44
+ "port" => 1111,
45
+ "server_options" => "check inter 60s fastinter 2s downinter 5s rise 3 fall 2",
46
+ }
47
+ end
48
+ let(:config_for_generator) do
49
+ {
50
+ "haproxy" => config_for_generator_haproxy,
51
+ "unknown_generator" => {
52
+ "key" => "value"
53
+ }
54
+ }
55
+ end
56
+ let(:config_for_generator_invalid) do
57
+ {
58
+ "haproxy" => "value",
59
+ }
60
+ end
32
61
  let(:service_data_string) { service_data.to_json }
33
62
  let(:deserialized_service_data) {
34
63
  [ service_data['host'], service_data['port'], service_data['name'], service_data['weight'],
35
64
  service_data['haproxy_server_options'], service_data['labels'] ]
36
65
  }
66
+ let(:config_for_generator_string) { [config_for_generator.to_json] }
67
+ let(:parsed_config_for_generator) do
68
+ {
69
+ "haproxy" => config_for_generator_haproxy
70
+ }
71
+ end
72
+ let(:config_for_generator_invalid_string) { config_for_generator_invalid.to_json }
73
+ let(:parsed_config_for_generator_invalid) do
74
+ {
75
+ "haproxy" => {}
76
+ }
77
+ end
37
78
 
38
79
  context 'ZookeeperWatcher' do
39
80
  let(:discovery) { { 'method' => 'zookeeper', 'hosts' => 'somehost', 'path' => 'some/path' } }
@@ -49,15 +90,24 @@ describe Synapse::ServiceWatcher::ZookeeperWatcher do
49
90
  expect(subject.send(:deserialize_service_instance, service_data_string)).to eql(deserialized_service_data)
50
91
  end
51
92
 
93
+ it 'decodes config data correctly' do
94
+ expect(subject.send(:parse_service_config, config_for_generator_string.first)).to eql(parsed_config_for_generator)
95
+ end
96
+
97
+ it 'decodes invalid config data correctly' do
98
+ expect(subject.send(:parse_service_config, config_for_generator_invalid_string)).to eql(parsed_config_for_generator_invalid)
99
+ end
100
+
52
101
  it 'reacts to zk push events' do
53
102
  expect(subject).to receive(:watch)
54
103
  expect(subject).to receive(:discover).and_call_original
104
+ expect(mock_zk).to receive(:get).with('some/path', {:watch=>true}).and_return(config_for_generator_string)
55
105
  expect(mock_zk).to receive(:children).with('some/path', {:watch=>true}).and_return(
56
106
  ["test_child_1"]
57
107
  )
58
108
  expect(mock_zk).to receive(:get).with('some/path/test_child_1').and_return(mock_node)
59
109
  subject.instance_variable_set('@zk', mock_zk)
60
- expect(subject).to receive(:set_backends).with([service_data.merge({'id' => 1})])
110
+ expect(subject).to receive(:set_backends).with([service_data.merge({'id' => 1})], parsed_config_for_generator)
61
111
  subject.send(:watcher_callback).call
62
112
  end
63
113
 
@@ -67,9 +117,11 @@ describe Synapse::ServiceWatcher::ZookeeperWatcher do
67
117
  expect(mock_zk).to receive(:children).with('some/path', {:watch=>true}).and_return(
68
118
  ["test_child_1"]
69
119
  )
120
+ expect(mock_zk).to receive(:get).with('some/path', {:watch=>true}).and_return("")
70
121
  expect(mock_zk).to receive(:get).with('some/path/test_child_1').and_raise(ZK::Exceptions::NoNode)
122
+
71
123
  subject.instance_variable_set('@zk', mock_zk)
72
- expect(subject).to receive(:set_backends).with([])
124
+ expect(subject).to receive(:set_backends).with([],{})
73
125
  subject.send(:watcher_callback).call
74
126
  end
75
127
  end
data/synapse.gemspec CHANGED
@@ -25,6 +25,7 @@ Gem::Specification.new do |gem|
25
25
  gem.add_runtime_dependency "docker-api", "~> 1.7"
26
26
  gem.add_runtime_dependency "zk", "~> 1.9.4"
27
27
  gem.add_runtime_dependency "logging", "~> 1.8"
28
+ gem.add_runtime_dependency "hashdiff", "~> 0.2.3"
28
29
 
29
30
  gem.add_development_dependency "rake"
30
31
  gem.add_development_dependency "rspec", "~> 3.1.0"
metadata CHANGED
@@ -1,7 +1,7 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: synapse
3
3
  version: !ruby/object:Gem::Version
4
- version: 0.14.7
4
+ version: 0.15.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Martin Rhoads
@@ -10,160 +10,174 @@ authors:
10
10
  autorequire:
11
11
  bindir: bin
12
12
  cert_chain: []
13
- date: 2017-08-10 00:00:00.000000000 Z
13
+ date: 2017-09-06 00:00:00.000000000 Z
14
14
  dependencies:
15
15
  - !ruby/object:Gem::Dependency
16
16
  name: aws-sdk
17
17
  requirement: !ruby/object:Gem::Requirement
18
18
  requirements:
19
- - - ~>
19
+ - - "~>"
20
20
  - !ruby/object:Gem::Version
21
21
  version: '1.39'
22
22
  type: :runtime
23
23
  prerelease: false
24
24
  version_requirements: !ruby/object:Gem::Requirement
25
25
  requirements:
26
- - - ~>
26
+ - - "~>"
27
27
  - !ruby/object:Gem::Version
28
28
  version: '1.39'
29
29
  - !ruby/object:Gem::Dependency
30
30
  name: docker-api
31
31
  requirement: !ruby/object:Gem::Requirement
32
32
  requirements:
33
- - - ~>
33
+ - - "~>"
34
34
  - !ruby/object:Gem::Version
35
35
  version: '1.7'
36
36
  type: :runtime
37
37
  prerelease: false
38
38
  version_requirements: !ruby/object:Gem::Requirement
39
39
  requirements:
40
- - - ~>
40
+ - - "~>"
41
41
  - !ruby/object:Gem::Version
42
42
  version: '1.7'
43
43
  - !ruby/object:Gem::Dependency
44
44
  name: zk
45
45
  requirement: !ruby/object:Gem::Requirement
46
46
  requirements:
47
- - - ~>
47
+ - - "~>"
48
48
  - !ruby/object:Gem::Version
49
49
  version: 1.9.4
50
50
  type: :runtime
51
51
  prerelease: false
52
52
  version_requirements: !ruby/object:Gem::Requirement
53
53
  requirements:
54
- - - ~>
54
+ - - "~>"
55
55
  - !ruby/object:Gem::Version
56
56
  version: 1.9.4
57
57
  - !ruby/object:Gem::Dependency
58
58
  name: logging
59
59
  requirement: !ruby/object:Gem::Requirement
60
60
  requirements:
61
- - - ~>
61
+ - - "~>"
62
62
  - !ruby/object:Gem::Version
63
63
  version: '1.8'
64
64
  type: :runtime
65
65
  prerelease: false
66
66
  version_requirements: !ruby/object:Gem::Requirement
67
67
  requirements:
68
- - - ~>
68
+ - - "~>"
69
69
  - !ruby/object:Gem::Version
70
70
  version: '1.8'
71
+ - !ruby/object:Gem::Dependency
72
+ name: hashdiff
73
+ requirement: !ruby/object:Gem::Requirement
74
+ requirements:
75
+ - - "~>"
76
+ - !ruby/object:Gem::Version
77
+ version: 0.2.3
78
+ type: :runtime
79
+ prerelease: false
80
+ version_requirements: !ruby/object:Gem::Requirement
81
+ requirements:
82
+ - - "~>"
83
+ - !ruby/object:Gem::Version
84
+ version: 0.2.3
71
85
  - !ruby/object:Gem::Dependency
72
86
  name: rake
73
87
  requirement: !ruby/object:Gem::Requirement
74
88
  requirements:
75
- - - ! '>='
89
+ - - ">="
76
90
  - !ruby/object:Gem::Version
77
91
  version: '0'
78
92
  type: :development
79
93
  prerelease: false
80
94
  version_requirements: !ruby/object:Gem::Requirement
81
95
  requirements:
82
- - - ! '>='
96
+ - - ">="
83
97
  - !ruby/object:Gem::Version
84
98
  version: '0'
85
99
  - !ruby/object:Gem::Dependency
86
100
  name: rspec
87
101
  requirement: !ruby/object:Gem::Requirement
88
102
  requirements:
89
- - - ~>
103
+ - - "~>"
90
104
  - !ruby/object:Gem::Version
91
105
  version: 3.1.0
92
106
  type: :development
93
107
  prerelease: false
94
108
  version_requirements: !ruby/object:Gem::Requirement
95
109
  requirements:
96
- - - ~>
110
+ - - "~>"
97
111
  - !ruby/object:Gem::Version
98
112
  version: 3.1.0
99
113
  - !ruby/object:Gem::Dependency
100
114
  name: factory_girl
101
115
  requirement: !ruby/object:Gem::Requirement
102
116
  requirements:
103
- - - ! '>='
117
+ - - ">="
104
118
  - !ruby/object:Gem::Version
105
119
  version: '0'
106
120
  type: :development
107
121
  prerelease: false
108
122
  version_requirements: !ruby/object:Gem::Requirement
109
123
  requirements:
110
- - - ! '>='
124
+ - - ">="
111
125
  - !ruby/object:Gem::Version
112
126
  version: '0'
113
127
  - !ruby/object:Gem::Dependency
114
128
  name: pry
115
129
  requirement: !ruby/object:Gem::Requirement
116
130
  requirements:
117
- - - ! '>='
131
+ - - ">="
118
132
  - !ruby/object:Gem::Version
119
133
  version: '0'
120
134
  type: :development
121
135
  prerelease: false
122
136
  version_requirements: !ruby/object:Gem::Requirement
123
137
  requirements:
124
- - - ! '>='
138
+ - - ">="
125
139
  - !ruby/object:Gem::Version
126
140
  version: '0'
127
141
  - !ruby/object:Gem::Dependency
128
142
  name: pry-nav
129
143
  requirement: !ruby/object:Gem::Requirement
130
144
  requirements:
131
- - - ! '>='
145
+ - - ">="
132
146
  - !ruby/object:Gem::Version
133
147
  version: '0'
134
148
  type: :development
135
149
  prerelease: false
136
150
  version_requirements: !ruby/object:Gem::Requirement
137
151
  requirements:
138
- - - ! '>='
152
+ - - ">="
139
153
  - !ruby/object:Gem::Version
140
154
  version: '0'
141
155
  - !ruby/object:Gem::Dependency
142
156
  name: webmock
143
157
  requirement: !ruby/object:Gem::Requirement
144
158
  requirements:
145
- - - ! '>='
159
+ - - ">="
146
160
  - !ruby/object:Gem::Version
147
161
  version: '0'
148
162
  type: :development
149
163
  prerelease: false
150
164
  version_requirements: !ruby/object:Gem::Requirement
151
165
  requirements:
152
- - - ! '>='
166
+ - - ">="
153
167
  - !ruby/object:Gem::Version
154
168
  version: '0'
155
169
  - !ruby/object:Gem::Dependency
156
170
  name: timecop
157
171
  requirement: !ruby/object:Gem::Requirement
158
172
  requirements:
159
- - - ! '>='
173
+ - - ">="
160
174
  - !ruby/object:Gem::Version
161
175
  version: '0'
162
176
  type: :development
163
177
  prerelease: false
164
178
  version_requirements: !ruby/object:Gem::Requirement
165
179
  requirements:
166
- - - ! '>='
180
+ - - ">="
167
181
  - !ruby/object:Gem::Version
168
182
  version: '0'
169
183
  description: Synapse is a daemon used to dynamically configure and manage local instances
@@ -179,10 +193,10 @@ executables:
179
193
  extensions: []
180
194
  extra_rdoc_files: []
181
195
  files:
182
- - .gitignore
183
- - .mailmap
184
- - .rspec
185
- - .travis.yml
196
+ - ".gitignore"
197
+ - ".mailmap"
198
+ - ".rspec"
199
+ - ".travis.yml"
186
200
  - Gemfile
187
201
  - Gemfile.lock
188
202
  - LICENSE.txt
@@ -235,17 +249,17 @@ require_paths:
235
249
  - lib
236
250
  required_ruby_version: !ruby/object:Gem::Requirement
237
251
  requirements:
238
- - - ! '>='
252
+ - - ">="
239
253
  - !ruby/object:Gem::Version
240
254
  version: '0'
241
255
  required_rubygems_version: !ruby/object:Gem::Requirement
242
256
  requirements:
243
- - - ! '>='
257
+ - - ">="
244
258
  - !ruby/object:Gem::Version
245
259
  version: '0'
246
260
  requirements: []
247
261
  rubyforge_project:
248
- rubygems_version: 2.5.1
262
+ rubygems_version: 2.4.5
249
263
  signing_key:
250
264
  specification_version: 4
251
265
  summary: Dynamic HAProxy configuration daemon