synapse 0.12.1 → 0.12.2
- data/.travis.yml +1 -0
- data/README.md +64 -55
- data/lib/synapse.rb +6 -6
- data/lib/synapse/file_output.rb +12 -1
- data/lib/synapse/haproxy.rb +14 -4
- data/lib/synapse/service_watcher.rb +11 -19
- data/lib/synapse/service_watcher/README.md +84 -0
- data/lib/synapse/service_watcher/base.rb +3 -3
- data/lib/synapse/service_watcher/dns.rb +1 -1
- data/lib/synapse/service_watcher/docker.rb +4 -3
- data/lib/synapse/service_watcher/ec2tag.rb +13 -9
- data/lib/synapse/service_watcher/marathon.rb +112 -0
- data/lib/synapse/service_watcher/zookeeper.rb +12 -7
- data/lib/synapse/service_watcher/zookeeper_dns.rb +3 -3
- data/lib/synapse/version.rb +1 -1
- data/spec/lib/synapse/file_output_spec.rb +61 -0
- data/spec/lib/synapse/haproxy_spec.rb +14 -1
- data/spec/lib/synapse/service_watcher_base_spec.rb +3 -3
- data/spec/lib/synapse/service_watcher_docker_spec.rb +12 -6
- data/spec/lib/synapse/service_watcher_ec2tags_spec.rb +36 -14
- data/spec/lib/synapse/service_watcher_marathon_spec.rb +191 -0
- data/spec/lib/synapse/service_watcher_spec.rb +102 -0
- data/spec/spec_helper.rb +1 -0
- data/spec/support/minimum.conf.yaml +6 -1
- data/synapse.gemspec +1 -0
- metadata +26 -2
data/.travis.yml
CHANGED
data/README.md
CHANGED
@@ -15,7 +15,7 @@ In an environment like Amazon's EC2, all of the available workarounds are subopt

 * Round-robin DNS: Slow to converge, and doesn't work when applications cache DNS lookups (which is frequent)
 * Elastic IPs: slow to converge, limited in number, public-facing-only, which makes them less useful for internal services
-* ELB:
+* ELB: ultimately uses DNS (see above), can't tune load balancing, have to launch a new one for every service * region, autoscaling doesn't happen fast enough

 One solution to this problem is a discovery service, like [Apache Zookeeper](http://zookeeper.apache.org/).
 However, Zookeeper and similar services have their own problems:
@@ -92,38 +92,50 @@ HAProxy will be transparently reloaded, and your application will keep running w

 ## Installation

-
+To download and run the synapse binary, first install a version of ruby. Then,
+install synapse with:

-
-
-
+```bash
+$ mkdir -p /opt/smartstack/synapse
+# If you are on Ruby 2.X use --no-document instead of --no-ri --no-rdoc
+$ gem install synapse --install-dir /opt/smartstack/synapse --no-ri --no-rdoc
+```

-
+This will download synapse and its dependencies into /opt/smartstack/synapse. You
+might wish to omit the `--install-dir` flag to use your system's default gem
+path, however this will require you to run `gem install synapse` with root
+permissions.

-
+You can now run the synapse binary like:

-
-
+```bash
+export GEM_PATH=/opt/smartstack/synapse
+/opt/smartstack/synapse/bin/synapse --help
+```

-Don't forget to install HAProxy
+Don't forget to install HAProxy too.

 ## Configuration ##

 Synapse depends on a single config file in JSON format; it's usually called `synapse.conf.json`.
-The file has
-
-
+The file has three main sections.
+
+1. [`services`](#services): lists the services you'd like to connect.
+2. [`haproxy`](#haproxy): specifies how to configure and interact with HAProxy.
+3. [`file_output`](#file) (optional): specifies where to write service state to on the filesystem.

+<a name="services"/>
 ### Configuring a Service ###

-The services
+The `services` section is a hash, where the keys are the `name` of the service to be configured.
 The name is just a human-readable string; it will be used in logs and notifications.
 Each value in the services hash is also a hash, and should contain the following keys:

-* `discovery
+* [`discovery`](#discovery): how synapse will discover hosts providing this service (see below)
 * `default_servers`: the list of default servers providing this service; synapse uses these if no others can be discovered
-* `haproxy
+* [`haproxy`](#haproxysvc): how will the haproxy section for this service be configured

+<a name="discovery"/>
 #### Service Discovery ####

 We've included a number of `watchers` which provide service discovery.
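As a rough orientation for how the three config sections described above fit together, a minimal skeleton might look like the following (written in the YAML style of the README's examples; the service name, port, and paths are illustrative placeholders, not values from the gem):

```yaml
services:
  myservice:
    discovery:
      method: "zookeeper"
      path: "/nerve/services/myservice"
      hosts:
        - "0.zookeeper.example.com:2181"
    haproxy:
      port: 3213
      server_options: "check inter 2s rise 3 fall 2"
haproxy:
  reload_command: "service haproxy reload"
  config_file_path: "/etc/haproxy/haproxy.cfg"
  socket_file_path: "/var/run/haproxy.sock"
file_output:
  output_directory: "/tmp/synapse_file_output"
```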
@@ -183,6 +195,18 @@ be used in preference to the `AWS_` environment variables.
 * `aws_secret_access_key`: AWS secret key or set `AWS_SECRET_ACCESS_KEY` in the environment.
 * `aws_region`: AWS region (i.e. `us-east-1`) or set `AWS_REGION` in the environment.

+##### Marathon #####
+
+This watcher polls the Marathon API and retrieves a list of instances for a
+given application.
+
+It takes the following options:
+
+* `marathon_api_url`: Address of the marathon API (e.g. `http://marathon-master:8080`)
+* `application_name`: Name of the application in Marathon
+* `check_interval`: How often to request the list of tasks from Marathon (default: 10 seconds)
+* `port_index`: Index of the backend port in the task's "ports" array. (default: 0)
+
 #### Listing Default Servers ####

 You may list a number of default servers providing a service.
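A hedged sketch of a service entry using the new Marathon watcher options listed above (the service name, port, and haproxy settings are made-up placeholders; only the discovery option names come from this hunk):

```yaml
myservice:
  discovery:
    method: "marathon"
    marathon_api_url: "http://marathon-master:8080"
    application_name: "myservice"
    check_interval: 10
    port_index: 0
  haproxy:
    port: 3213
    server_options: "check inter 2s rise 3 fall 2"
```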
@@ -202,15 +226,7 @@ If you do not list any `default_servers`, and all backends for a service
 disappear then the previous known backends will be used. Disable this behavior
 by unsetting `use_previous_backends`.

-
-
-This section controls whether or not synapse will write out service state
-to the filesystem in json format. This can be used for services that want to
-use discovery information but not go through HAProxy.
-
-* `output_directory`: the path to a directory on disk that service registrations
-should be written to.
-
+<a name="haproxysvc"/>
 #### The `haproxy` Section ####

 This section is its own hash, which should contain the following keys:
@@ -220,12 +236,15 @@ This section is its own hash, which should contain the following keys:
 * `server_options`: the haproxy options for each `server` line of the service in HAProxy config; it may be left out.
 * `frontend`: additional lines passed to the HAProxy config in the `frontend` stanza of this service
 * `backend`: additional lines passed to the HAProxy config in the `backend` stanza of this service
+* `backend_name`: The name of the generated HAProxy backend for this service
+(defaults to the service's key in the `services` section)
 * `listen`: these lines will be parsed and placed in the correct `frontend`/`backend` section as applicable; you can put lines which are the same for the frontend and backend here.
 * `shared_frontend`: optional: haproxy configuration directives for a shared http frontend (see below)

+<a name="haproxy"/>
 ### Configuring HAProxy ###

-The `haproxy` section of the config file has the following options:
+The top level `haproxy` section of the config file has the following options:

 * `reload_command`: the command Synapse will run to reload HAProxy
 * `config_file_path`: where Synapse will write the HAProxy config file
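To illustrate the new `backend_name` key, a per-service `haproxy` section might look roughly like this (port, options, and the backend name itself are placeholders):

```yaml
haproxy:
  port: 3213
  server_options: "check inter 2s rise 3 fall 2"
  backend_name: "myservice_backend"
```

With `backend_name` set, the generated `backend` stanza and the frontend's `default_backend` line use `myservice_backend` instead of the service's key (see the haproxy.rb hunks below).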
@@ -249,6 +268,17 @@ The `haproxy` section of the config file has the following options:
 Note that a non-default `bind_address` can be dangerous.
 If you configure an `address:port` combination that is already in use on the system, haproxy will fail to start.

+<a name="file"/>
+### Configuring `file_output` ###
+
+This section controls whether or not synapse will write out service state
+to the filesystem in json format. This can be used for services that want to
+use discovery information but not go through HAProxy.
+
+* `output_directory`: the path to a directory on disk that service registrations
+should be written to.
+
+
 ### HAProxy shared HTTP Frontend ###

 For HTTP-only services, it is not always necessary or desirable to dedicate a TCP port per service, since HAProxy can route traffic based on host headers.
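A minimal sketch of the corresponding top-level section (the directory path is a placeholder):

```yaml
file_output:
  output_directory: "/tmp/synapse_file_output"
```

Each watched service is then written out as `<service name>.json` inside that directory; see the file_output.rb hunks below.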
@@ -260,7 +290,8 @@ For example:

 ```yaml
 haproxy:
-shared_frontend:
+shared_frontend:
+- "bind 127.0.0.1:8081"
 reload_command: "service haproxy reload"
 config_file_path: "/etc/haproxy/haproxy.cfg"
 socket_file_path: "/var/run/haproxy.sock"
|
|
279
310
|
discovery:
|
280
311
|
method: "zookeeper"
|
281
312
|
path: "/nerve/services/service1"
|
282
|
-
hosts:
|
313
|
+
hosts:
|
314
|
+
- "0.zookeeper.example.com:2181"
|
283
315
|
haproxy:
|
284
316
|
server_options: "check inter 2s rise 3 fall 2"
|
285
317
|
shared_frontend:
|
@@ -298,7 +330,8 @@ For example:
 shared_frontend:
 - "acl is_service1 hdr_dom(host) -i service2.lb.example.com"
 - "use_backend service2 if is_service2
-backend:
+backend:
+- "mode http"

 ```

@@ -333,29 +366,5 @@ Non-HTTP backends such as MySQL or RabbitMQ will obviously continue to need thei

 ### Creating a Service Watcher ###

-
-
-1. Create a file for your watcher in `service_watcher` dir
-2. Use the following template:
-```ruby
-require 'synapse/service_watcher/base'
-
-module Synapse
-class NewWatcher < BaseWatcher
-def start
-# write code which begins running service discovery
-end
-
-private
-def validate_discovery_opts
-# here, validate any required options in @discovery
-end
-end
-end
-```
-
-3. Implement the `start` and `validate_discovery_opts` methods
-4. Implement whatever additional methods your discovery requires
-
-When your watcher detects a list of new backends, you should call `set_backends` to
-store the new backends and update the HAProxy config.
+See the Service Watcher [README](lib/synapse/service_watcher/README.md) for
+how to add new Service Watchers.
data/lib/synapse.rb
CHANGED
@@ -1,18 +1,18 @@
+require 'logger'
+require 'json'
+
 require "synapse/version"
-require "synapse/
+require "synapse/log"
 require "synapse/haproxy"
 require "synapse/file_output"
 require "synapse/service_watcher"
-require "synapse/log"

-require 'logger'
-require 'json'
-
-include Synapse

 module Synapse
 class Synapse
+
 include Logging
+
 def initialize(opts={})
 # create the service watchers for all our services
 raise "specify a list of services to connect in the config" unless opts.has_key?('services')
data/lib/synapse/file_output.rb
CHANGED
@@ -1,4 +1,3 @@
-require 'synapse/log'
 require 'fileutils'
 require 'tempfile'

@@ -29,6 +28,7 @@ module Synapse
 watchers.each do |watcher|
 write_backends_to_file(watcher.name, watcher.backends)
 end
+clean_old_watchers(watchers)
 end

 def write_backends_to_file(service_name, new_backends)
@@ -53,5 +53,16 @@ module Synapse
 return true
 end
 end
+
+def clean_old_watchers(current_watchers)
+# Cleanup old services that Synapse no longer manages
+FileUtils.cd(@opts['output_directory']) do
+present_files = Dir.glob('*.json')
+managed_files = current_watchers.collect {|watcher| "#{watcher.name}.json"}
+files_to_purge = present_files.select {|svc| not managed_files.include?(svc)}
+log.info "synapse: purging unknown service files #{files_to_purge}" if files_to_purge.length > 0
+FileUtils.rm(files_to_purge)
+end
+end
 end
 end
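The new `clean_old_watchers` hook deletes state files for services Synapse no longer watches. A standalone sketch of the selection logic it applies (file and watcher names below are made up for illustration):

```ruby
# Illustrative only: which files the purge above would select.
present_files = ['service1.json', 'service2.json', 'old_service.json']  # Dir.glob('*.json')
managed_files = ['service1.json', 'service2.json']  # one "#{watcher.name}.json" per current watcher
files_to_purge = present_files.select { |svc| not managed_files.include?(svc) }
puts files_to_purge.inspect  # => ["old_service.json"]
```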
data/lib/synapse/haproxy.rb
CHANGED
@@ -1,6 +1,5 @@
 require 'fileutils'
 require 'json'
-require 'synapse/log'
 require 'socket'

 module Synapse
@@ -688,7 +687,7 @@ module Synapse
 "\nfrontend #{watcher.name}",
 config.map {|c| "\t#{c}"},
 "\tbind #{@opts['bind_address'] || 'localhost'}:#{watcher.haproxy['port']}",
-"\tdefault_backend #{watcher.name}"
+"\tdefault_backend #{watcher.haproxy.fetch('backend_name', watcher.name)}"
 ]
 end

@@ -705,6 +704,16 @@
 # setting the enabled state.
 watcher.backends.each do |backend|
 backend_name = construct_name(backend)
+# If we have information in the state file that allows us to detect
+# server option changes, use that to potentially force a restart
+if backends.has_key?(backend_name)
+old_backend = backends[backend_name]
+if (old_backend.fetch('haproxy_server_options', "") !=
+backend.fetch('haproxy_server_options', ""))
+log.info "synapse: restart required because haproxy_server_options changed for #{backend_name}"
+@restart_required = true
+end
+end
 backends[backend_name] = backend.merge('enabled' => true)
 end

@@ -713,13 +722,14 @@
 end

 stanza = [
-"\nbackend #{watcher.name}",
+"\nbackend #{watcher.haproxy.fetch('backend_name', watcher.name)}",
 config.map {|c| "\t#{c}"},
 backends.keys.shuffle.map {|backend_name|
 backend = backends[backend_name]
 b = "\tserver #{backend_name} #{backend['host']}:#{backend['port']}"
 b = "#{b} cookie #{backend_name}" unless config.include?('mode tcp')
-b = "#{b} #{watcher.haproxy['server_options']}"
+b = "#{b} #{watcher.haproxy['server_options']}" if watcher.haproxy['server_options']
+b = "#{b} #{backend['haproxy_server_options']}" if backend['haproxy_server_options']
 b = "#{b} disabled" unless backend['enabled']
 b }
 ]
data/lib/synapse/service_watcher.rb
CHANGED
@@ -1,22 +1,8 @@
+require "synapse/log"
 require "synapse/service_watcher/base"
-require "synapse/service_watcher/zookeeper"
-require "synapse/service_watcher/ec2tag"
-require "synapse/service_watcher/dns"
-require "synapse/service_watcher/docker"
-require "synapse/service_watcher/zookeeper_dns"

 module Synapse
 class ServiceWatcher
-
-@watchers = {
-'base' => BaseWatcher,
-'zookeeper' => ZookeeperWatcher,
-'ec2tag' => EC2Watcher,
-'dns' => DnsWatcher,
-'docker' => DockerWatcher,
-'zookeeper_dns' => ZookeeperDnsWatcher,
-}
-
 # the method which actually dispatches watcher creation requests
 def self.create(name, opts, synapse)
 opts['name'] = name
@@ -25,10 +11,16 @@ module Synapse
 unless opts.has_key?('discovery') && opts['discovery'].has_key?('method')

 discovery_method = opts['discovery']['method']
-
-
-
-
+watcher = begin
+method = discovery_method.downcase
+require "synapse/service_watcher/#{method}"
+# zookeeper_dns => ZookeeperDnsWatcher, ec2tag => Ec2tagWatcher, etc ...
+method_class = method.split('_').map{|x| x.capitalize}.join.concat('Watcher')
+self.const_get("#{method_class}")
+rescue Exception => e
+raise ArgumentError, "Specified a discovery method of #{discovery_method}, which could not be found: #{e}"
+end
+return watcher.new(opts, synapse)
 end
 end
 end
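The rewritten `ServiceWatcher.create` loads watchers dynamically instead of from a hard-coded hash: it requires `synapse/service_watcher/#{method}` and derives the class name from the `method` string. A small sketch of that derivation for method values named in this diff:

```ruby
# Illustrative only: the require path and class name derived from a `method` string.
['zookeeper', 'zookeeper_dns', 'ec2tag'].each do |method|
  method_class = method.split('_').map { |x| x.capitalize }.join.concat('Watcher')
  puts "#{method} -> synapse/service_watcher/#{method} -> #{method_class}"
end
# zookeeper -> synapse/service_watcher/zookeeper -> ZookeeperWatcher
# zookeeper_dns -> synapse/service_watcher/zookeeper_dns -> ZookeeperDnsWatcher
# ec2tag -> synapse/service_watcher/ec2tag -> Ec2tagWatcher
```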
data/lib/synapse/service_watcher/README.md
ADDED
@@ -0,0 +1,84 @@
+## Watcher Classes
+
+Watchers are the piece of Synapse that watch an external service registry
+and reflect those changes in the local HAProxy state. Watchers should conform
+to the interface specified by `BaseWatcher` and when your watcher has received
+an update from the service registry you should call
+`set_backends(new_backends)` to trigger a sync of your watcher state with local
+HAProxy state. See the [`Backend Interface`](#backend_interface) section for
+what service registrations Synapse understands.
+
+```ruby
+require "synapse/service_watcher/base"
+
+class Synapse::ServiceWatcher
+class MyWatcher < BaseWatcher
+def start
+# write code which begins running service discovery
+end
+
+def stop
+# write code which tears down the service discovery
+end
+
+def ping?
+# write code to check in on the health of the watcher
+end
+
+private
+def validate_discovery_opts
+# here, validate any required options in @discovery
+end
+
+... setup watches, poll, etc ... and call set_backends when you have new
+... backends to set
+
+end
+end
+```
+
+### Watcher Plugin Inteface
+Synapse deduces both the class path and class name from the `method` key within
+the watcher configuration. Every watcher is passed configuration with the
+`method` key, e.g. `zookeeper` or `ec2tag`.
+
+#### Class Location
+Synapse expects to find your class at `synapse/service_watcher/#{method}`. You
+must make your watcher available at that path, and Synapse can "just work" and
+find it.
+
+#### Class Name
+These method strings are then transformed into class names via the following
+function:
+
+```
+method_class = method.split('_').map{|x| x.capitalize}.join.concat('Watcher')
+```
+
+This has the effect of taking the method, splitting on '_', capitalizing each
+part and recombining with an added 'Watcher' on the end. So `zookeeper_dns`
+becomes `ZookeeperDnsWatcher`, and `zookeeper` becomes `Zookeeper`. Make sure
+your class name is correct.
+
+<a name="backend_interface"/>
+### Backend interface
+Synapse understands the following fields in service backends (which are pulled
+from the service registries):
+
+`host` (string): The hostname of the service instance
+
+`port` (integer): The port running the service on `host`
+
+`name` (string, optional): The human readable name to refer to this service instance by
+
+`weight` (float, optional): The weight that this backend should get when load
+balancing to this service instance. Full support for updating HAProxy based on
+this is still a WIP.
+
+`haproxy_server_options` (string, optional): Any haproxy server options
+specific to this particular server. They will be applied to the generated
+`server` line in the HAProxy configuration. If you want Synapse to react to
+changes in these lines you will need to enable the `state_file_path` option
+in the main synapse configuration. In general the HAProxy backend level
+`haproxy.server_options` setting is preferred to setting this per server
+in your backends.