haproxy-cluster 0.0.1
- data/README.md +56 -0
- data/bin/haproxy_cluster +3 -0
- data/lib/haproxy_cluster.rb +95 -0
- data/lib/haproxy_cluster/backend.rb +32 -0
- data/lib/haproxy_cluster/cli.rb +61 -0
- data/lib/haproxy_cluster/member.rb +54 -0
- data/lib/haproxy_cluster/server.rb +60 -0
- data/lib/haproxy_cluster/server_collection.rb +9 -0
- data/lib/haproxy_cluster/stats_container.rb +23 -0
- data/lib/haproxy_cluster/version.rb +3 -0
- metadata +104 -0
data/README.md
ADDED
@@ -0,0 +1,56 @@
haproxy-cluster
===============

> "Can we survive a rolling restart?"
>
> "How many concurrent connections right now across all load balancers?"

While there are already a handful of [HA Proxy](http://haproxy.1wt.eu) abstraction layers on RubyGems, I wanted to be able to answer questions like those above and more, quickly, accurately, and easily. So here's one more for the pile.

`HAProxyCluster::Member` provides an ORM for HA Proxy's status page.

`HAProxyCluster` provides a simple map/reduce-inspired framework on top of `HAProxyCluster::Member`.

`haproxy_cluster` provides a shell scripting interface for `HAProxyCluster`. Exit codes are meaningful and intended to be useful from Nagios.

Do you deploy new code using a sequential restart of application servers? Used carelessly, this common pattern can result in too many servers being down at the same time, and customers seeing errors. `haproxy_cluster` can prevent this by ensuring that every load balancer agrees that the application is up at each stage of the deployment. In the example below, we will deploy a new WAR to three Tomcat instances which are fronted by two HA Proxy instances. HA Proxy has been configured with `option httpchk /check`, a path which only returns an affirmative status code when the application is ready to serve requests.

```bash
#!/bin/bash
set -o errexit
servers="server1.example.com server2.example.com server3.example.com"
load_balancers="https://lb1.example.com:8888 http://lb2.example.com:8888"

for server in $servers ; do
  haproxy_cluster --timeout=300 --eval "wait_for(true){ myapp.rolling_restartable? }" $load_balancers
  scp myapp.war $server:/opt/tomcat/webapps
done
```

The code block passed to `--eval` will not return until every load balancer reports that at least 80% of the backend servers defined for "myapp" are ready to serve requests. If this takes more than 5 minutes (300 seconds), the whole deployment is halted.

Maybe you'd like to know how many transactions per second your whole cluster is processing:

    $ haproxy_cluster --eval 'poll{ puts members.map{|m|m.myapp.rate}.inject(:+) }' $load_balancers

Installation
------------

`gem install haproxy-cluster`

Requires Ruby 1.9.3 and depends on RestClient.

Non-Features
------------

* Doesn't try to modify configuration files. Use [haproxy-tools](https://github.com/subakva/haproxy-tools), [rhaproxy](https://github.com/jjuliano/rhaproxy), [haproxy_join](https://github.com/joewilliams/haproxy_join), or better yet, [Chef](http://www.opscode.com/chef) for that.
* Doesn't talk to sockets, yet. Use [haproxy-ruby](https://github.com/inkel/haproxy-ruby) for now if you need this. I intend to add support for this using `Net::SSH` and `socat(1)`, but for now HTTP is enough for my needs.

ProTip
------

HA Proxy's awesome creator Willy Tarreau loves [big text files](http://haproxy.1wt.eu/download/1.5/doc/configuration.txt) and [big, flat web pages](http://haproxy.1wt.eu/). If smaller, hyperlinked documents are more your style, you should know about two alternative documentation sources:

* http://code.google.com/p/haproxy-docs/
* http://cbonte.github.com/haproxy-dconv/configuration-1.5.html
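The reduction in the one-liner above — summing each member's `rate` — is plain Enumerable arithmetic over the Hash that `map` returns (member URL => result). A minimal standalone sketch with hypothetical per-balancer numbers:

```ruby
# Hypothetical per-load-balancer session rates, keyed like the Hash that
# HAProxyCluster#map returns (member URL => result).
rates = {
  "https://lb1.example.com:8888" => 120,
  "http://lb2.example.com:8888"  => 135,
}

total = rates.values.inject(:+)
puts total  # => 255
```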
data/lib/haproxy_cluster.rb
ADDED
@@ -0,0 +1,95 @@
```ruby
require 'haproxy_cluster/member'
require 'thread'

class HAProxyCluster
  def initialize(members = [])
    @members = []
    threads = []
    members.each do |url|
      threads << Thread.new do
        @members << HAProxyCluster::Member.new(url)
      end
    end
    threads.each { |t| t.join }
  end

  attr_accessor :members

  # Poll the cluster, executing the given block with fresh data at the
  # prescribed interval.
  def poll(interval = 1.0)
    first = true
    loop do
      start = Time.now
      map { poll! } unless first
      first = false
      yield
      # Never pass a negative value to sleep when the poll ran long.
      sleep [interval - (Time.now - start), 0].max
    end
  end

  # Poll the entire cluster using exponential backoff until the given block's
  # return value always matches the condition (expressed as boolean or range).
  #
  # A common form of this is:
  #
  #   wait_for(true) do
  #     api.servers.map{|s|s.ok?}
  #   end
  #
  # This block would not return until every member of the cluster is available
  # to serve requests.
  #
  #   wait_for(1!=1){false} #=> true
  #   wait_for(1==1){true}  #=> true
  #   wait_for(1..3){2}     #=> true
  #   wait_for(true){sleep} #=> Timeout
  def wait_for(condition, &code)
    results = map(&code)
    delay = 1.5
    loop do
      if reduce(condition, results.values.flatten)
        return true
      end
      if delay > 60
        puts "Too many timeouts, giving up"
        return false
      end
      delay *= 2
      sleep delay
      map { poll! }
      results = map(&code)
    end
  end

  # Run the specified code against every member of the cluster. Results are
  # returned as a Hash, with member.to_s being the key.
  def map(&code)
    threads = []
    results = {}
    @members.each do |member|
      threads << Thread.new do
        results[member.to_s] = member.instance_exec(&code)
      end
    end
    threads.each { |t| t.join }
    return results
  end

  # Return true or false depending on the relationship between `condition` and
  # `values`. `condition` may be specified as true, false, or a Range object.
  # `values` is an Array of whatever type is appropriate for the condition.
  def reduce(condition, values)
    case condition
    when Range
      values.each { |v| return false unless condition.cover? v }
    when true, false
      values.each { |v| return false unless v == condition }
    else
      raise ArgumentError.new("Got #{condition.class} but TrueClass, FalseClass, or Range expected")
    end
    return true
  end

end
```
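The condition semantics behind `wait_for` and `reduce` can be exercised on their own. This sketch re-implements the check (under the name `satisfied?`, so it runs without the gem installed): a true/false condition requires every value to equal it, while a Range requires every value to fall inside it.

```ruby
# Standalone re-implementation of the reduce/condition check described above.
def satisfied?(condition, values)
  case condition
  when Range
    values.all? { |v| condition.cover?(v) }
  when true, false
    values.all? { |v| v == condition }
  else
    raise ArgumentError, "TrueClass, FalseClass, or Range expected"
  end
end

satisfied?(true, [true, true, true])  # => true
satisfied?(true, [true, false])       # => false
satisfied?(1..3, [1, 2, 3])           # => true
satisfied?(1..3, [2, 5])              # => false
```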
data/lib/haproxy_cluster/backend.rb
ADDED
@@ -0,0 +1,32 @@
```ruby
require 'haproxy_cluster/stats_container'
require 'haproxy_cluster/server_collection'

class HAProxyCluster

  class Backend < StatsContainer

    def initialize
      @servers = ServerCollection.new
      super
    end

    attr_accessor :servers

    def name
      self.pxname
    end

    def rolling_restartable? (enough = 80)
      # Count only the servers that are actually up; mapping to booleans and
      # counting the whole array would always equal @servers.count.
      up_servers = @servers.count { |s| s.ok? }
      if up_servers == 0
        return true # All servers are down already; a restart can't hurt!
      elsif Rational(up_servers, @servers.count) >= Rational(enough, 100)
        return true # Minimum percentage is satisfied
      else
        return false # Not enough servers are up to safely restart one
      end
    end

  end

end
```
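The threshold check above uses exact `Rational` comparison rather than floating point. A standalone sketch of the arithmetic, assuming a hypothetical 5-server backend:

```ruby
# Default threshold from rolling_restartable?: 80%.
threshold = Rational(80, 100)

# 4 of 5 servers up is exactly 80%, which satisfies the threshold.
puts Rational(4, 5) >= threshold  # => true

# 3 of 5 servers up is 60%: not enough for a safe rolling restart.
puts Rational(3, 5) >= threshold  # => false
```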
data/lib/haproxy_cluster/cli.rb
ADDED
@@ -0,0 +1,61 @@
```ruby
require 'rubygems'
require 'optparse'
require 'ostruct'
require 'pp'
require 'thread'
require 'timeout'
require 'haproxy_cluster/version'
require 'haproxy_cluster'

options = OpenStruct.new
OptionParser.new do |opts|
  opts.banner = "Usage: #{File.basename $0} ARGS URL [URL] [...]"
  opts.on("-e", "--eval=CODE", "Ruby code block to be evaluated") do |o|
    options.code_string = o
  end
  opts.on("-v", "--verbose", "Verbose logging") do
    # TODO Need better coverage here
    RestClient.log = STDERR
  end
  opts.on("--csv", "Assume result will be an Array of Arrays and emit as CSV") do
    options.csv = true
  end
  opts.on("-t", "--timeout=SECONDS", "Give up after TIMEOUT seconds") do |o|
    options.timeout = o.to_f
  end
  opts.on_tail("--version", "Show version") do
    puts HAProxyCluster::Version
    exit
  end
  opts.separator "URL should be the root of an HA Proxy status page, either http:// or https://"
end.parse!
options.urls = ARGV

if options.code_string

  if options.timeout
    result = Timeout::timeout(options.timeout) do
      Kernel.eval(options.code_string, HAProxyCluster.new(options.urls).instance_eval("binding"))
    end
  else
    result = Kernel.eval(options.code_string, HAProxyCluster.new(options.urls).instance_eval("binding"))
  end

  case result
  when true, false
    exit result ? 0 : 1
  when Hash
    pp result
  when Array
    if options.csv
      result.each { |row| puts row.to_csv }
    else
      pp result
    end
  else
    puts result
  end

end
```
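The `--timeout` handling above leans entirely on the stdlib: `Timeout.timeout` returns the block's value if it finishes in time, and raises `Timeout::Error` if it overruns. A standalone sketch (the sentinel `:halted` is illustrative; the real CLI lets the exception halt the run):

```ruby
require 'timeout'

# A block that finishes in time returns its value through the wrapper.
fast = Timeout.timeout(5) { 1 + 1 }

# A block that overruns raises Timeout::Error, turned into a sentinel here.
slow = begin
  Timeout.timeout(0.05) { sleep 1 }
rescue Timeout::Error
  :halted
end

puts fast  # => 2
puts slow  # => halted
```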
data/lib/haproxy_cluster/member.rb
ADDED
@@ -0,0 +1,54 @@
```ruby
require 'csv'
require 'rest-client'
require 'haproxy_cluster/backend'
require 'haproxy_cluster/server'

class HAProxyCluster
  class Member
    BACKEND = 1
    SERVER = 2

    def initialize(source)
      @source = source
      @backends = Hash.new { |h,k| h[k] = Backend.new }
      if source =~ /https?:/
        @type = :url
      else
        @type = :file
      end
      poll!
    end

    def poll!
      csv = case @type
      when :url
        RestClient.get(@source + ';csv').gsub(/^# /,'').gsub(/,$/,'')
      when :file
        # Assign the file's contents; the original discarded this value,
        # leaving csv nil for file-backed members.
        File.read(@source)
      end
      CSV.parse(csv, { :headers => :first_row, :converters => :all, :header_converters => [:downcase,:symbol] }) do |row|
        case row[:type]
        when BACKEND
          @backends[ row[:pxname].to_sym ].stats.merge! row.to_hash
        when SERVER
          @backends[ row[:pxname].to_sym ].servers << Server.new(row.to_hash, self)
        end
      end
    end

    attr_accessor :backends, :source, :type

    def get_binding; binding; end
    def to_s; @source; end

    # Allow Backends to be accessed by dot-notation
    def method_missing(m, *args, &block)
      if @backends.has_key? m
        @backends[m]
      else
        super
      end
    end

  end
end
```
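The parsing in `poll!` is stdlib `CSV` over HA Proxy's stats export. A standalone sketch with a hypothetical, heavily truncated sample (real exports have many more columns; type 1 marks a backend row, type 2 a server row, matching the constants above):

```ruby
require 'csv'

# Hypothetical miniature of HA Proxy's CSV stats export.
raw = "# pxname,svname,scur,status,type,\n" \
      "myapp,BACKEND,12,UP,1,\n" \
      "myapp,server1,4,UP,2,\n" \
      "myapp,server2,8,DOWN,2,\n"

# The same clean-up poll! performs: strip the leading "# " and trailing commas.
csv = raw.gsub(/^# /, '').gsub(/,$/, '')

backends = Hash.new { |h, k| h[k] = { :stats => {}, :servers => [] } }
CSV.parse(csv, :headers => :first_row, :converters => :all,
               :header_converters => [:downcase, :symbol]) do |row|
  case row[:type]
  when 1 then backends[row[:pxname].to_sym][:stats].merge!(row.to_hash)
  when 2 then backends[row[:pxname].to_sym][:servers] << row.to_hash
  end
end

up = backends[:myapp][:servers].count { |s| s[:status] == 'UP' }
puts up  # => 1
```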
data/lib/haproxy_cluster/server.rb
ADDED
@@ -0,0 +1,60 @@
```ruby
require 'haproxy_cluster/stats_container'

class HAProxyCluster

  class Server < StatsContainer

    def initialize (stats, member)
      @member = member
      super stats
    end

    def name
      self.svname
    end

    def backup?
      self.bck == 1
    end

    def ok?
      self.status == 'UP'
    end

    def enable!
      modify! :enable
    end

    def disable!
      modify! :disable
    end

    def wait_until_ok
      return true if self.ok?
      start = Time.now
      until self.ok?
        raise Timeout if Time.now > start + 10
        sleep 1
        @member.poll!
      end
      return true
    end

    private

    def modify! (how)
      case @member.type
      when :url
        RestClient.post @member.source, { :s => self.name, :action => how, :b => self.pxname }
        @member.poll!
      else
        raise "Not implemented: #{how} on #{@member.type}"
      end
      return self.status
    end

    class Timeout < RuntimeError ; end

  end

end
```
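`wait_until_ok` is a poll-until-deadline loop. The pattern extracted standalone, with injected lambdas standing in for `@member.poll!` and the status check (the fake member and the name `wait_until` are illustrative, not part of the gem):

```ruby
# Poll-until-deadline: check reports the current state, poll refreshes it,
# and we give up after deadline seconds.
def wait_until(check, poll, deadline = 10, interval = 0)
  start = Time.now
  until check.call
    raise "timed out" if Time.now > start + deadline
    sleep interval
    poll.call
  end
  true
end

# Fake member that reports UP after two polls.
state = { :polls => 0, :status => 'DOWN' }
poll  = lambda { state[:status] = 'UP' if (state[:polls] += 1) >= 2 }
check = lambda { state[:status] == 'UP' }

result = wait_until(check, poll)
puts result  # => true
```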
data/lib/haproxy_cluster/stats_container.rb
ADDED
@@ -0,0 +1,23 @@
```ruby
require 'haproxy_cluster'

# Backends present summary statistics for the servers they contain, and
# individual servers also present their own specific data.
class HAProxyCluster
  class StatsContainer

    def initialize(stats = {})
      @stats = stats
    end

    attr_accessor :stats

    def method_missing(m, *args, &block)
      if @stats.has_key? m
        @stats[m]
      else
        super
      end
    end

  end
end
```
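The hash-backed `method_missing` pattern above is easy to try standalone. Note that a full implementation usually pairs it with `respond_to_missing?` so `respond_to?` agrees with the dynamic methods; that addition (and the class name `Stats`) is a suggestion here, not something the gem does:

```ruby
# Minimal hash-backed stats container, same shape as StatsContainer above,
# plus respond_to_missing? so respond_to? agrees with method_missing.
class Stats
  def initialize(stats = {})
    @stats = stats
  end

  def method_missing(m, *args, &block)
    @stats.key?(m) ? @stats[m] : super
  end

  def respond_to_missing?(m, include_private = false)
    @stats.key?(m) || super
  end
end

s = Stats.new(:scur => 42, :status => 'UP')
puts s.scur                # => 42
puts s.respond_to?(:scur)  # => true
```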
metadata
ADDED
@@ -0,0 +1,104 @@
```yaml
--- !ruby/object:Gem::Specification
name: haproxy-cluster
version: !ruby/object:Gem::Version
  version: 0.0.1
prerelease:
platform: ruby
authors:
- Jacob Elder
autorequire:
bindir: bin
cert_chain: []
date: 2012-07-23 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: rest-client
  requirement: &70251272059180 !ruby/object:Gem::Requirement
    none: false
    requirements:
    - - ! '>='
      - !ruby/object:Gem::Version
        version: '0'
  type: :runtime
  prerelease: false
  version_requirements: *70251272059180
description: ! "haproxy-cluster\n===============\n\n> \"Can we survive a rolling restart?\"\n>\n>
  \"How many concurrent connections right now across all load balancers?\"\n\nWhile
  there are already a handfull of [HA Proxy](http://haproxy.1wt.edu) abstraction layers
  on RubyGems, I wanted to be able to answer questions like those above and more,
  quickly, accurately, and easily. So here's one more for the pile.\n\n`HAProxyCluster::Member`
  provides an ORM for HA Proxy's status page.\n\n`HAProxyCluster` provides a simple
  map/reduce-inspired framework on top of `HAProxyCluster::Member`.\n\n`haproxy_cluster`
  provides a shell scripting interface for `HAProxyCluster`. Exit codes are meaningful
  and intended to be useful from Nagios.\n\nDo you deploy new code using a sequential
  restart of application servers? Using this common pattern carelessly can result
  in too many servers being down at the same time, and cutomers seeing errors. `haproxy_cluster`
  can prevent this by ensuring that every load balancer agrees that the application
  is up at each stage in the deployment. In the example below, we will deploy a new
  WAR to three Tomcat instances which are fronted by two HA Proxy instances. HA Proxy
  has been configured with `option httpchk /check`, a path which only returns an affirmative
  status code when the application is ready to serve requests.\n\n```bash\n#!bin/bash\nset
  -o errexit\nservers=\"server1.example.com server2.example.com server3.example.com\"\nload_balancers=\"https://lb1.example.com:8888
  http://lb2.example.com:8888\"\n\nfor server in $servers ; do\n haproxy_cluster
  --timeout=300 --eval \"wait_until(true){ myapp.rolling_restartable? }\" $load_balancers\n
  \ scp myapp.war $server:/opt/tomcat/webapps\ndone\n```\n\nThe code block passed
  to `--eval` will not return until every load balancer reports that at least 80%
  of the backend servers defined for \"myapp\" are ready to serve requests. If this
  takes more than 5 minutes (300 seconds), the whole deployment is halted.\n\nMaybe
  you'd like to know how many transactions per second your whole cluster is processing.\n\n
  \ $ haproxy_cluster --eval 'poll{ puts members.map{|m|m.myapp.rate}.inject(:+)
  }' $load_balancers\n\nInstallation\n------------\n\n`gem install haproxy-cluster`\n\nRequires
  Ruby 1.9.2 and depends on RestClient.\n\nNon-Features\n------------\n\n* Doesn't
  try to modify configuration files. Use [haproxy-tools](https://github.com/subakva/haproxy-tools),
  [rhaproxy](https://github.com/jjuliano/rhaproxy), [haproxy_join](https://github.com/joewilliams/haproxy_join),
  or better yet, [Chef](http://www.opscode.com/chef) for that.\n* Doesn't talk to
  sockets, yet. Use [haproxy-ruby](https://github.com/inkel/haproxy-ruby) for now
  if you need this. I intend to add support for this using `Net::SSH` and `socat(1)`
  but for now HTTP is enough for my needs.\n\nProTip\n------\n\nHA Proxy's awesome
  creator Willy Tarrreau loves [big text files](http://haproxy.1wt.eu/download/1.5/doc/configuration.txt)
  and [big, flat web pages](http://haproxy.1wt.eu/). If smaller, hyperlinked documents
  are more your style, you should know about the two alternative documentation sources:\n\n*
  http://code.google.com/p/haproxy-docs/\n* http://cbonte.github.com/haproxy-dconv/configuration-1.5.html\n\n"
email:
- jacob.elder@gmail.com
executables:
- haproxy_cluster
extensions: []
extra_rdoc_files: []
files:
- README.md
- bin/haproxy_cluster
- lib/haproxy_cluster/backend.rb
- lib/haproxy_cluster/cli.rb
- lib/haproxy_cluster/member.rb
- lib/haproxy_cluster/server.rb
- lib/haproxy_cluster/server_collection.rb
- lib/haproxy_cluster/stats_container.rb
- lib/haproxy_cluster/version.rb
- lib/haproxy_cluster.rb
homepage: https://github.com/jelder/haproxy_cluster
licenses: []
post_install_message:
rdoc_options: []
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  none: false
  requirements:
  - - ~>
    - !ruby/object:Gem::Version
      version: 1.9.3
required_rubygems_version: !ruby/object:Gem::Requirement
  none: false
  requirements:
  - - ! '>='
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubyforge_project:
rubygems_version: 1.8.11
signing_key:
specification_version: 3
summary: Inspect and manipulate collections of HA Proxy instances
test_files: []
has_rdoc:
```