synapse 0.13.8 → 0.14.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +8 -8
- data/.travis.yml +5 -4
- data/README.md +30 -8
- data/config/synapse.conf.json +1 -0
- data/lib/synapse.rb +24 -18
- data/lib/synapse/config_generator.rb +20 -0
- data/lib/synapse/config_generator/README.md +74 -0
- data/lib/synapse/config_generator/base.rb +44 -0
- data/lib/synapse/{file_output.rb → config_generator/file_output.rb} +13 -11
- data/lib/synapse/{haproxy.rb → config_generator/haproxy.rb} +81 -42
- data/lib/synapse/service_watcher/README.md +6 -5
- data/lib/synapse/service_watcher/base.rb +38 -13
- data/lib/synapse/service_watcher/ec2tag.rb +2 -7
- data/lib/synapse/service_watcher/zookeeper.rb +2 -4
- data/lib/synapse/service_watcher/zookeeper_dns.rb +0 -1
- data/lib/synapse/version.rb +1 -1
- data/spec/lib/synapse/file_output_spec.rb +3 -2
- data/spec/lib/synapse/haproxy_spec.rb +167 -8
- data/spec/lib/synapse/service_watcher_base_spec.rb +9 -2
- data/spec/lib/synapse/service_watcher_docker_spec.rb +8 -1
- data/spec/lib/synapse/service_watcher_ec2tags_spec.rb +42 -8
- data/spec/lib/synapse/service_watcher_marathon_spec.rb +8 -1
- data/spec/lib/synapse/service_watcher_spec.rb +8 -1
- data/spec/lib/synapse/service_watcher_zookeeper_spec.rb +8 -1
- data/spec/support/minimum.conf.yaml +1 -0
- metadata +7 -4
checksums.yaml
CHANGED
@@ -1,15 +1,15 @@
 ---
 !binary "U0hBMQ==":
   metadata.gz: !binary |-
-
+    YTEwNDE5ZTdlZDM4Njk2YmJkMDRjYWQzYTNiM2FkZTM0MzgyZWE4YQ==
   data.tar.gz: !binary |-
-
+    ZjBmNmVmOTY2YWRiYTY1NWZmYTg3YjhiMThhY2NmNjIxNmE3ODkwYQ==
 SHA512:
   metadata.gz: !binary |-
-
-
-
+    NjhlNDA1YjY1ODkxZWU4YzMyMzI2ODE1YWNjMzlmZWM4MTk1OTVjZTVkMjc4
+    ODFjMTVlNDEyYjVmMDk0YTAxMjM0MWMyNGRmOGFiYjdhNDM4OWFiNDFlMjE1
+    ZDMzNzZiYjg2MjgzY2VmYjQ2YjAxOGEwOTE0Y2MwMTkwNGFjMmU=
   data.tar.gz: !binary |-
-
-
-
+    MGI0ZTBjNGQ5NjA0NmU1OWI0NzJhYmYyNDI2NWEyODI0MjA1NDM5NmYxNzEw
+    YWYxMzI3MDc2ZTEyMDFkM2U5MzkyY2U5NzBkNjQ4YjA4YWNhNTI0YmRlMzlj
+    N2JhM2MxYjQ5YWVhMzBlZWFhZTM3Y2RkMzVhOTNlMzc0ZmVkY2Y=
data/.travis.yml
CHANGED
data/README.md
CHANGED
@@ -134,12 +134,24 @@ The file has three main sections.
 
 The `services` section is a hash, where the keys are the `name` of the service to be configured.
 The name is just a human-readable string; it will be used in logs and notifications.
-Each value in the services hash is also a hash, and
+Each value in the services hash is also a hash, and must contain the following keys:
+
+* [`discovery`](#discovery): how synapse will discover hosts providing this service (see [below](#discovery))
+
+The services hash *should* contain a section on how to configure the routing
+component you wish to use for this particular service. The only choice currently
+is `haproxy`:
 
-* [`discovery`](#discovery): how synapse will discover hosts providing this service (see below)
-* `default_servers`: the list of default servers providing this service; synapse uses these if no others can be discovered
 * [`haproxy`](#haproxysvc): how will the haproxy section for this service be configured
 
+The services hash may contain the following keys:
+
+* `default_servers` (default: `[]`): the list of default servers providing this service; synapse uses these if no others can be discovered. See [Listing Default Servers](#defaultservers).
+* `keep_default_servers` (default: false): whether default servers should be added to discovered services
+* `use_previous_backends` (default: true): if at any time the registry drops all backends, use previous backends we already know about.
+<a name="backend_port_override"/>
+* `backend_port_override`: the port that discovered servers listen on; you should specify this if your discovery mechanism only discovers names or addresses (like the DNS watcher or the Ec2TagWatcher). If the discovery method discovers a port along with hostnames (like the zookeeper watcher) this option may be left out, but will be used in preference if given.
+
 <a name="discovery"/>
 #### Service Discovery ####
 
@@ -214,9 +226,9 @@ It takes the following options:
   this is case-sensitive.
 * `tag_value`: the value to match on. Case-sensitive.
 
-Additionally, you MUST supply `
-
-
+Additionally, you MUST supply [`backend_port_override`](#backend_port_override)
+in the service configuration as this watcher does not know which port the
+backend service is listening on.
 
 The following options are optional, provided the well-known `AWS_`
 environment variables shown are set. If supplied, these options will
@@ -238,6 +250,7 @@ It takes the following options:
 * `check_interval`: How often to request the list of tasks from Marathon (default: 10 seconds)
 * `port_index`: Index of the backend port in the task's "ports" array. (default: 0)
 
+<a name="defaultservers"/>
 #### Listing Default Servers ####
 
 You may list a number of default servers providing a service.
@@ -262,9 +275,13 @@ by unsetting `use_previous_backends`.
 
 This section is its own hash, which should contain the following keys:
 
+* `disabled`: A boolean value indicating if haproxy configuration management
+  for just this service instance ought be disabled. For example, if you want
+  file output for a particular service but no HAProxy config. (default is ``False``)
 * `port`: the port (on localhost) where HAProxy will listen for connections to the service. If this is omitted, only a backend stanza (and no frontend stanza) will be generated for this service; you'll need to get traffic to your service yourself via the `shared_frontend` or manual frontends in `extra_sections`
 * `bind_address`: force HAProxy to listen on this address ( default is localhost ). Setting `bind_address` on a per service basis overrides the global `bind_address` in the top level `haproxy`. Having HAProxy listen for connections on different addresses ( example: service1 listen on 127.0.0.2:443 and service2 listen on 127.0.0.3:443) allows /etc/hosts entries to point to services.
-* `
+* `bind_options`: optional: default value is an empty string, specify additional bind parameters, such as ssl accept-proxy, crt, ciphers etc.
+* `server_port_override`: **DEPRECATED**. Renamed [`backend_port_override`](#backend_port_override) and moved to the top level hash. This will be removed in future versions.
 * `server_options`: the haproxy options for each `server` line of the service in HAProxy config; it may be left out.
 * `frontend`: additional lines passed to the HAProxy config in the `frontend` stanza of this service
 * `backend`: additional lines passed to the HAProxy config in the `backend` stanza of this service
@@ -371,7 +388,7 @@ For example:
     server_options: "check inter 2s rise 3 fall 2"
     shared_frontend:
       - "acl is_service1 hdr_dom(host) -i service2.lb.example.com"
-      - "use_backend service2 if is_service2
+      - "use_backend service2 if is_service2"
     backend:
       - "mode http"
 
@@ -410,3 +427,8 @@ Non-HTTP backends such as MySQL or RabbitMQ will obviously continue to need thei
 
 See the Service Watcher [README](lib/synapse/service_watcher/README.md) for
 how to add new Service Watchers.
+
+### Creating a Config Generator ###
+
+See the Config Generator [README](lib/synapse/config_generator/README.md) for
+how to add new Config Generators
data/config/synapse.conf.json
CHANGED
data/lib/synapse.rb
CHANGED
@@ -1,11 +1,10 @@
 require 'logger'
 require 'json'
 
-require
-require
-require
-require
-require "synapse/service_watcher"
+require 'synapse/version'
+require 'synapse/log'
+require 'synapse/config_generator'
+require 'synapse/service_watcher'
 
 
 module Synapse
@@ -14,22 +13,14 @@ module Synapse
     include Logging
 
     def initialize(opts={})
+      # create objects that need to be notified of service changes
+      @config_generators = create_config_generators(opts)
+      raise "no config generators supplied" if @config_generators.empty?
+
       # create the service watchers for all our services
       raise "specify a list of services to connect in the config" unless opts.has_key?('services')
       @service_watchers = create_service_watchers(opts['services'])
 
-      # create objects that need to be notified of service changes
-      @config_generators = []
-      # create the haproxy config generator, this is mandatory
-      raise "haproxy config section is missing" unless opts.has_key?('haproxy')
-      @config_generators << Haproxy.new(opts['haproxy'])
-
-      # possibly create a file manifestation for services that do not
-      # want to communicate via haproxy, e.g. cassandra
-      if opts.has_key?('file_output')
-        @config_generators << FileOutput.new(opts['file_output'])
-      end
-
       # configuration is initially enabled to configure on first loop
       @config_updated = true
 
@@ -85,9 +76,13 @@ module Synapse
       @config_updated = true
     end
 
+    def available_generators
+      Hash[@config_generators.collect{|cg| [cg.name, cg]}]
+    end
+
     private
     def create_service_watchers(services={})
-      service_watchers =[]
+      service_watchers = []
       services.each do |service_name, service_config|
         service_watchers << ServiceWatcher.create(service_name, service_config, self)
       end
@@ -95,5 +90,16 @@ module Synapse
       return service_watchers
     end
 
+    private
+    def create_config_generators(opts={})
+      config_generators = []
+      opts.each do |type, generator_opts|
+        # Skip the "services" top level key
+        next if type == 'services'
+        config_generators << ConfigGenerator.create(type, generator_opts)
+      end
+
+      return config_generators
+    end
   end
 end
data/lib/synapse/config_generator.rb
ADDED
@@ -0,0 +1,20 @@
+require 'synapse/log'
+require 'synapse/config_generator/base'
+
+module Synapse
+  class ConfigGenerator
+    # the type which actually dispatches generator creation requests
+    def self.create(type, opts)
+      generator = begin
+        type = type.downcase
+        require "synapse/config_generator/#{type}"
+        # haproxy => Haproxy, file_output => FileOutput, etc ...
+        type_class = type.split('_').map{|x| x.capitalize}.join
+        self.const_get("#{type_class}")
+      rescue Exception => e
+        raise ArgumentError, "Specified a config generator of #{type}, which could not be found: #{e}"
+      end
+      return generator.new(opts)
+    end
+  end
+end
data/lib/synapse/config_generator/README.md
ADDED
@@ -0,0 +1,74 @@
+## ConfigGenerator Classes
+
+Generators are the piece of Synapse that react to changes in service
+registrations and actually reflect those changes in local state.
+Generators should conform to the interface specified by `BaseGenerator` and
+when your generator has received an update from synapse via `update_config` it
+should sync the watcher state with the external configuration (e.g. HAProxy
+state)
+
+```ruby
+require "synapse/config_generator/base"
+
+class Synapse::ConfigGenerator
+  class MyGenerator < BaseGenerator
+    # The generator name is used to find service specific
+    # configuration in the service watchers. When supplying
+    # per service config, use this as the key
+    NAME = 'my_generator'.freeze
+
+    def initialize(opts = {})
+      # Process and validate any options specified in the dedicated section
+      # for this config generator, given as the `opts` hash. You may omit
+      # this method, or you can declare your own, but remember to invoke
+      # the parent initializer
+      super(opts)
+    end
+
+    def update_config(watchers)
+      # synapse will call this method whenever watcher state changes with the
+      # watcher state. You should reflect that state in the local config state
+    end
+
+    def tick
+      # Called every loop of the main Synapse loop regardless of watcher
+      # changes (roughly ~1s). You can use this to rate limit how often your
+      # config generator actually reconfigures external services (e.g. HAProxy
+      # may need to rate limit reloads as those can be disruptive to in
+      # flight connections
+    end
+
+    def normalize_watcher_provided_config(service_watcher_name, service_watcher_config)
+      # Every service watcher section of the Synapse configuration can contain
+      # options that change how the config generators react for that
+      # particular service. This normalize method is a good place to ensure
+      # you set options your generator expects every service watcher config
+      # to supply, providing, for example, default values. This is also a
+      # good place to raise errors in case any options are invalid.
+    end
+  end
+end
+```
+
+### Generator Plugin Inteface
+Synapse deduces both the class path and class name from any additional keys
+passed to the top level configuration, which it assumes are equal to the `NAME`
+of some ConfigGenerator. For example, if `haproxy` is set at the top level we
+try to load the `Haproxy` `ConfigGenerator`.
+
+#### Class Location
+Synapse expects to find your class at `synapse/config_generator/#{name}`. You
+must make your generator available at that path, and Synapse can "just work" and
+find it.
+
+#### Class Name
+These type strings are then transformed into class names via the following
+function:
+
+```
+type_class = type.split('_').map{|x| x.capitalize}
+```
+
+This has the effect of taking the method, splitting on `_`, capitalizing each
+part and recombining. So `file_output` becomes `FileOutput` and `haproxy`
+becomes `Haproxy`. Make sure your class name is correct.
data/lib/synapse/config_generator/base.rb
ADDED
@@ -0,0 +1,44 @@
+require 'synapse/log'
+
+class Synapse::ConfigGenerator
+  class BaseGenerator
+    include Synapse::Logging
+
+    NAME = 'base'.freeze
+
+    attr_reader :opts
+
+    def initialize(opts={})
+      @opts = opts
+    end
+
+    # Exposes NAME as 'name' so we can remain consistent with how we refer to
+    # service_watchers' by generator.name access (instead of
+    # generator.class::NAME) even though the names of generators don't change
+    def name
+      self.class::NAME
+    end
+
+    # The synapse main loop will call this any time watchers change, the
+    # config_generator is responsible for diffing the passed watcher state
+    # against the output configuration
+    def update_config(watchers)
+    end
+
+    # The synapse main loop will call this every tick
+    # of the logical clock (~1s). You can use this to intiate reloads
+    # or restarts in a rate limited fashion
+    def tick
+    end
+
+    # Service watchers have a subsection of their ``services`` entry that is
+    # dedicated to the watcher specific configuration for how to configure
+    # the config generator. This method will be called with each of these
+    # watcher hashes, and should normalize them to what the config generator
+    # needs, such as adding defaults. Return the properly populated default hash
+    def normalize_watcher_provided_config(service_watcher_name, service_watcher_config)
+      service_watcher_config.dup
+    end
+
+  end
+end
data/lib/synapse/{file_output.rb → config_generator/file_output.rb}
RENAMED
@@ -1,12 +1,17 @@
+require 'synapse/config_generator/base'
+
 require 'fileutils'
 require 'tempfile'
 
-
-class FileOutput
-  include Logging
-
+class Synapse::ConfigGenerator
+  class FileOutput < BaseGenerator
+    include Synapse::Logging
+
+    NAME = 'file_output'.freeze
 
     def initialize(opts)
+      super(opts)
+
       unless opts.has_key?("output_directory")
         raise ArgumentError, "flat file generation requires an output_directory key"
       end
@@ -16,9 +21,6 @@ module Synapse
       rescue SystemCallError => err
         raise ArgumentError, "provided output directory #{opts['output_directory']} is not present or creatable"
       end
-
-      @opts = opts
-      @name = 'file_output'
     end
 
     def tick(watchers)
@@ -32,7 +34,7 @@ module Synapse
     end
 
     def write_backends_to_file(service_name, new_backends)
-      data_path = File.join(
+      data_path = File.join(opts['output_directory'], "#{service_name}.json")
       begin
         old_backends = JSON.load(File.read(data_path))
       rescue Errno::ENOENT
@@ -45,8 +47,8 @@ module Synapse
        # internal state only when the smartstack state has actually changed
         return false
       else
-        # Atomically write new
-        temp_path = File.join(
+        # Atomically write new service configuration file
+        temp_path = File.join(opts['output_directory'],
                               ".#{service_name}.json.tmp")
         File.open(temp_path, 'w', 0644) {|f| f.write(new_backends.to_json)}
         FileUtils.mv(temp_path, data_path)
@@ -56,7 +58,7 @@ module Synapse
 
     def clean_old_watchers(current_watchers)
       # Cleanup old services that Synapse no longer manages
-      FileUtils.cd(
+      FileUtils.cd(opts['output_directory']) do
         present_files = Dir.glob('*.json')
         managed_files = current_watchers.collect {|watcher| "#{watcher.name}.json"}
         files_to_purge = present_files.select {|svc| not managed_files.include?(svc)}
data/lib/synapse/{haproxy.rb → config_generator/haproxy.rb}
RENAMED
@@ -1,12 +1,15 @@
+require 'synapse/config_generator/base'
+
 require 'fileutils'
 require 'json'
 require 'socket'
 require 'digest/sha1'
 
-
-class Haproxy
-  include Logging
-
+class Synapse::ConfigGenerator
+  class Haproxy < BaseGenerator
+    include Synapse::Logging
+
+    NAME = 'haproxy'.freeze
 
     # these come from the documentation for haproxy (1.5 and 1.6)
     # http://haproxy.1wt.eu/download/1.5/doc/configuration.txt
@@ -790,31 +793,31 @@ module Synapse
 
     DEFAULT_STATE_FILE_TTL = (60 * 60 * 24).freeze # 24 hours
     STATE_FILE_UPDATE_INTERVAL = 60.freeze # iterations; not a unit of time
+    DEFAULT_BIND_ADDRESS = 'localhost'
 
     def initialize(opts)
-      super()
+      super(opts)
 
-      %w{global defaults
+      %w{global defaults}.each do |req|
         raise ArgumentError, "haproxy requires a #{req} section" if !opts.has_key?(req)
       end
 
+      @opts['do_writes'] = true unless @opts.key?('do_writes')
+      @opts['do_socket'] = true unless @opts.key?('do_socket')
+      @opts['do_reloads'] = true unless @opts.key?('do_reloads')
+
       req_pairs = {
         'do_writes' => 'config_file_path',
         'do_socket' => 'socket_file_path',
-        'do_reloads' => 'reload_command'
+        'do_reloads' => 'reload_command'
+      }
 
       req_pairs.each do |cond, req|
-        if opts[cond]
-          raise ArgumentError, "the `#{req}` option is required when `#{cond}` is true" unless opts[req]
+        if @opts[cond]
+          raise ArgumentError, "the `#{req}` option is required when `#{cond}` is true" unless @opts[req]
         end
       end
 
-      @opts = opts
-
-      @opts['do_writes'] = true unless @opts.key?('do_writes')
-      @opts['do_socket'] = true unless @opts.key?('do_socket')
-      @opts['do_reloads'] = true unless @opts.key?('do_reloads')
-
       # socket_file_path can be a string or a list
       # lets make a new option which is always a list (plural)
       @opts['socket_file_paths'] = [@opts['socket_file_path']].flatten
@@ -835,8 +838,19 @@ module Synapse
       @state_file_ttl = @opts.fetch('state_file_ttl', DEFAULT_STATE_FILE_TTL).to_i
     end
 
-    def
-
+    def normalize_watcher_provided_config(service_watcher_name, service_watcher_config)
+      service_watcher_config = super(service_watcher_name, service_watcher_config)
+      defaults = {
+        'server_options' => "",
+        'server_port_override' => nil,
+        'backend' => [],
+        'frontend' => [],
+        'listen' => [],
+      }
+      unless service_watcher_config.include?('port')
+        log.warn "synapse: service #{service_watcher_name}: haproxy config does not include a port; only backend sections for the service will be created; you must move traffic there manually using configuration in `extra_sections`"
+      end
+      defaults.merge(service_watcher_config)
     end
 
     def tick(watchers)
@@ -848,13 +862,13 @@ module Synapse
 
       # We potentially have to restart if the restart was rate limited
       # in the original call to update_config
-      restart if
+      restart if opts['do_reloads'] && @restart_required
     end
 
     def update_config(watchers)
       # if we support updating backends, try that whenever possible
-      if
-
+      if opts['do_socket']
+        opts['socket_file_paths'].each do |socket_path|
           update_backends_at(socket_path, watchers)
         end
       else
@@ -865,9 +879,9 @@ module Synapse
       new_config = generate_config(watchers)
 
       # if we write config files, lets do that and then possibly restart
-      if
+      if opts['do_writes']
         write_config(new_config)
-        restart if
+        restart if opts['do_reloads'] && @restart_required
       end
     end
 
@@ -877,14 +891,19 @@ module Synapse
       shared_frontend_lines = generate_shared_frontend
 
       watchers.each do |watcher|
+        watcher_config = watcher.config_for_generator[name]
         @watcher_configs[watcher.name] ||= parse_watcher_config(watcher)
+        next if watcher_config['disabled']
         new_config << generate_frontend_stanza(watcher, @watcher_configs[watcher.name]['frontend'])
         new_config << generate_backend_stanza(watcher, @watcher_configs[watcher.name]['backend'])
-        if
-          if
+        if watcher_config.include?('shared_frontend')
+          if opts['shared_frontend'] == nil
            log.warn "synapse: service #{watcher.name} contains a shared frontend section but the base config does not! skipping."
          else
-
+            tabbed_shared_frontend = watcher_config['shared_frontend'].map{|l| "\t#{l}"}
+            shared_frontend_lines << validate_haproxy_stanza(
+              tabbed_shared_frontend, "frontend", "shared frontend section for #{watcher.name}"
+            )
           end
         end
       end
@@ -896,10 +915,10 @@ module Synapse
 
     # pull out the shared frontend section if any
     def generate_shared_frontend
-      return nil unless
+      return nil unless opts.include?('shared_frontend')
       log.debug "synapse: found a shared frontend section"
       shared_frontend_lines = ["\nfrontend shared-frontend"]
-      shared_frontend_lines << validate_haproxy_stanza(
+      shared_frontend_lines << validate_haproxy_stanza(opts['shared_frontend'].map{|l| "\t#{l}"}, "frontend", "shared frontend")
       return shared_frontend_lines
     end
 
@@ -909,13 +928,13 @@ module Synapse
 
       %w{global defaults}.each do |section|
         base_config << "#{section}"
-
+        opts[section].each do |option|
           base_config << "\t#{option}"
         end
       end
 
-      if
-
+      if opts['extra_sections']
+        opts['extra_sections'].each do |title, section|
           base_config << "\n#{title}"
           section.each do |option|
             base_config << "\t#{option}"
@@ -930,12 +949,13 @@ module Synapse
     # frontend and backend sections
     def parse_watcher_config(watcher)
       config = {}
+      watcher_config = watcher.config_for_generator[name]
       %w{frontend backend}.each do |section|
-        config[section] =
+        config[section] = watcher_config[section] || []
 
         # copy over the settings from the 'listen' section that pertain to section
         config[section].concat(
-
+          watcher_config['listen'].select {|setting|
            parsed_setting = setting.strip.gsub(/\s+/, ' ').downcase
            SECTION_FIELDS[section].any? {|field| parsed_setting.start_with?(field)}
          })
@@ -961,16 +981,33 @@ module Synapse
 
     # generates an individual stanza for a particular watcher
     def generate_frontend_stanza(watcher, config)
-
+      watcher_config = watcher.config_for_generator[name]
+      unless watcher_config.has_key?("port")
         log.debug "synapse: not generating frontend stanza for watcher #{watcher.name} because it has no port defined"
         return []
+      else
+        port = watcher_config['port']
       end
 
+
+      bind_address = (
+        watcher_config['bind_address'] ||
+        opts['bind_address'] ||
+        DEFAULT_BIND_ADDRESS
+      )
+      backend_name = watcher_config.fetch('backend_name', watcher.name)
+
+      bind_line = [
+        "\tbind",
+        "#{bind_address}:#{port}",
+        watcher_config['bind_options']
+      ].compact.join(' ')
+
       stanza = [
        "\nfrontend #{watcher.name}",
        config.map {|c| "\t#{c}"},
-
-        "\tdefault_backend #{
+        bind_line,
+        "\tdefault_backend #{backend_name}"
       ]
     end
 
@@ -1004,7 +1041,8 @@ module Synapse
        log.debug "synapse: no backends found for watcher #{watcher.name}"
       end
 
-
+      watcher_config = watcher.config_for_generator[name]
+      keys = case watcher_config['backend_order']
       when 'asc'
         backends.keys.sort
       when 'desc'
@@ -1016,20 +1054,20 @@ module Synapse
       end
 
       stanza = [
-        "\nbackend #{
+        "\nbackend #{watcher_config.fetch('backend_name', watcher.name)}",
         config.map {|c| "\t#{c}"},
         keys.map {|backend_name|
           backend = backends[backend_name]
           b = "\tserver #{backend_name} #{backend['host']}:#{backend['port']}"
           unless config.include?('mode tcp')
-            b = case
+            b = case watcher_config['cookie_value_method']
             when 'hash'
               b = "#{b} cookie #{Digest::SHA1.hexdigest(backend_name)}"
             else
              b = "#{b} cookie #{backend_name}"
            end
          end
-          b = "#{b} #{
+          b = "#{b} #{watcher_config['server_options']}" if watcher_config['server_options']
           b = "#{b} #{backend['haproxy_server_options']}" if backend['haproxy_server_options']
           b = "#{b} disabled" unless backend['enabled']
           b }
@@ -1075,6 +1113,7 @@ module Synapse
       watchers.each do |watcher|
         enabled_backends[watcher.name] = []
         next if watcher.backends.empty?
+        next if watcher.config_for_generator[name]['disabled']
 
         unless cur_backends.include? watcher.name
           log.info "synapse: restart required because we added new section #{watcher.name}"
@@ -1125,16 +1164,16 @@ module Synapse
     # writes the config
     def write_config(new_config)
       begin
-        old_config = File.read(
+        old_config = File.read(opts['config_file_path'])
       rescue Errno::ENOENT => e
-        log.info "synapse: could not open haproxy config file at #{
+        log.info "synapse: could not open haproxy config file at #{opts['config_file_path']}"
         old_config = ""
       end
 
       if old_config == new_config
         return false
       else
-        File.open(
+        File.open(opts['config_file_path'],'w') {|f| f.write(new_config)}
         return true
       end
     end