etcd-rb 1.0.0.pre1 → 1.0.0

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: f5096dd7b703a7f42348e4eaae037cfa642e054f
+   data.tar.gz: 8f93a5c5affe7ffaaa0557a7df893a2fa6d9f00c
+ SHA512:
+   metadata.gz: 64631fd825ea6326288c8730f60539afd63e62f12015a657232e663153564ce628025f78b5c7026844901c790a76cbcddd70bb36963af529e7c3f8871b29a7d4
+   data.tar.gz: a2170e3f1d2ea0d9dfac6353d221539f593cc534083782bf2a53704907429adf9cbd2da3ec844a2d6d956571ce9ee08b50fe7416160127edb449dc2d8d5bf8d4
data/README.md CHANGED
@@ -5,50 +5,197 @@
 
 # Requirements
 
- A modern Ruby, compatible with 1.9.3 or later. Continously tested with MRI 1.9.3, 2.0.0 and JRuby 1.7.x.
-
- An etcd cluster. _Currently incompatible with the most recent versions of etcd because they return the wrong URI for the leader._
+ - A modern Ruby, compatible with 1.9.3 or later. Continuously tested with MRI 1.9.3, 2.0.0 and JRuby 1.7.x.
+ - Linux or OS X
+ - A local etcd binary; to install one, run
+   - `$ sh/install-etcd.sh`
 
 
 # Installation
 
- gem install etcd-rb --prerelease
+ gem install etcd-rb
 
 # Quick start
 
 ```ruby
 require 'etcd'
 
- client = Etcd::Client.new
+ client = Etcd::Client.connect(uris: ['http://localhost:4001'])
 client.set('/foo', 'bar')
 client.get('/foo')
 ```
 
- See the full [API documentation](http://rubydoc.info/gems/etcd-rb/frames) for more. All core features are supported, including test-and-set, TTL, watches -- as well as a few convenience features like continuous watching.
+ See the full [API documentation](http://rubydoc.info/github/iconara/etcd-rb/master/frames) for more. All core features are supported, including test-and-set, TTL, watches -- as well as a few convenience features like continuous watching.
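
As orientation for those features, here is a minimal sketch of how test-and-set, TTL and watches look with this client, based on the method signatures added in this release (the key names are made up):

```ruby
require 'etcd'

client = Etcd::Client.connect(uris: ['http://localhost:4001'])

# TTL: the key is deleted automatically after 60 seconds
client.set('/session', 'abc123', ttl: 60)

# test-and-set: succeeds only when the current value matches the expected one
client.update('/session', 'def456', 'abc123') # => true

# watch: blocks until the key (or prefix) changes
client.watch('/session') do |value, key, info|
  puts "#{key} changed to #{value} (action: #{info[:action]})"
end
```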
+ 
+ 
+ # Development
+ 
+     $ git clone https://github.com/iconara/etcd-rb.git
+     $ cd etcd-rb
+     # will compile the etcd binary in the tmp folder to use for testing
+     $ sh/install-etcd.sh
+     $ bundle install
+     # make your changes
+     $ sh/test
+ 
+ 
+ # Development helpers
+ 
+     # console for quick REPL testing
+     $ sh/c
+ 
+     # start/stop Etcd cluster
+     $ sh/cluster <start/stop/reset>
+ 
+     # install Etcd binary to tmp folder
+     $ sh/install-etcd.sh
+ 
+     # run tests
+     $ sh/test
+ 
+ 
+ # Playing in shell
+ 
+     # load console with etcd-rb code
+     $ sh/c
+     > ClusterController.start_cluster
+     > seed_uris = ["http://127.0.0.1:4001", "http://127.0.0.1:4002", "http://127.0.0.1:4003"]
+     > client = Etcd::Client.connect(:uris => seed_uris)
+ 
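
From that console session, a few quick things to try (a sketch; the inspect output is illustrative and follows the examples further down):

```ruby
client.leader                      # => <Etcd::Node - node1 (leader) - http://127.0.0.1:4001>
client.cluster.nodes.map(&:status) # => [:running, :running, :running]
client.set("foo", "bar")
client.get("foo")                  # => "bar"
```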
 
 # Features
 
- ## Continuous watches: observers
+ ### Continuous watches: observers - [Example](#example-observers)
 
 Most of the time when you use watches with etcd you want to immediately re-watch the key when you get a change notification. The `Client#observe` method handles this for you, including re-watching with the last seen index, so that you don't miss any updates.
 
- ## Automatic leader detection
+ ### Automatic leader detection - [Example](#example-automatic-leader-detection)
+ 
+ All writes go to the leader node. When the leader is re-elected, the next request triggers a redirect and a re-evaluation of the cluster status on the client side. This happens transparently to you.
+ 
+ ### Automatic failover & retry - [Example](#example-automatic-failover)
+ 
+ If a request fails, the client will try to get the cluster configuration from all given seed URIs until the first valid response, and then retry the original request. This also happens transparently to you.
+ 
+ Watches are a special case: since they use long polling, they will break when the leader goes down. After a failover, observers reestablish their watches with the new leader. Again - this happens transparently to you :)
+ 
+ 
+ ### Heartbeating - [Example](#example-heartbeating)
+ 
+ To ensure that you have the most up-to-date cluster status and that your observers are registered against the current leader node, initialize the client with the `:heartbeat_freq` (in seconds) parameter. This starts a background thread that periodically checks the leader status; when it detects a leader re-election, it triggers the failover.
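
For example (a sketch; per the constructor added in this release, `:heartbeat_freq` is coerced with `to_f` and the background thread starts only for non-zero values):

```ruby
client = Etcd::Client.connect(
  uris: ['http://127.0.0.1:4001'],
  heartbeat_freq: 5.0 # check the leader status every 5 seconds
)
```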
+ 
+ ### Example: Automatic Leader Detection
+ 
+ ```ruby
+ $ sh/c
+ # ensure we have a cluster with 3 nodes
+ ClusterController.start_cluster
+ client = Etcd::Client.test_client
+ # => <Etcd::Client ["http://127.0.0.1:4001", "http://127.0.0.1:4002", "http://127.0.0.1:4003"]>
+ client.leader
+ # => <Etcd::Node - node2 (leader) - http://127.0.0.1:4002>
+ ClusterController.kill_node(client.cluster.leader.name)
+ client.get("foo")
+ client.leader # the leader has changed!
+ # => <Etcd::Node - node3 (leader) - http://127.0.0.1:4003>
+ ```
+ 
+ ### Example: Automatic Failover
 
- You can point the client to any node in the etcd cluster, it will ask that node for the current leader and direct all subsequent requests directly to the leader to avoid unnecessary redirects. When the leader changes, detected by a redirect, the new leader will be registered and used instead of the previous.
+ ```ruby
+ # start with
+ # $ sh/c to have ClusterController available :)
+ seed_uris = ["http://127.0.0.1:4001", "http://127.0.0.1:4002", "http://127.0.0.1:4003"]
+ client = Etcd::Client.connect(:uris => seed_uris)
+ 
+ ## set some values
+ client.set("foo", "bar")
+ client.get("foo") # => bar
+ client.get("does-not-exist") # => nil
+ 
+ ## kill the leader node
+ ClusterController.kill_node(client.cluster.leader.name)
+ 
+ ## the client keeps on trucking
+ client.get("foo") # => bar
+ 
+ ## we have visibility into the cluster status
+ puts client.cluster.nodes.map(&:status) # => [:running, :down, :running]
 
- ## Automacic failover & retry
+ # kill the next leader as well, leaving only one process running
+ ClusterController.kill_node(client.cluster.leader.name)
 
- When connecting for the first time, and when the leader changes, the list of nodes in the cluster is cached. Should the node that the client is talking to become unreachable, the client will attempt to connect to the next known node, until it finds one that responds. The first node to respond will be asked for the current leader, which will then be used for subsequent request.
+ # but since a single process has no leader, all requests will fail
+ client.get("foo") # raises AllNodesDownError
+ 
+ puts client.cluster.nodes.map(&:status) # => [:running, :down, :down]
+ client.cluster.leader # => nil
+ 
+ ## now start the cluster up again in another terminal by executing
+ ClusterController.start_cluster
+ 
+ ## the client works again
+ client.get("foo") # => bar
+ ```
+ 
+ 
+ ### Example: Observers
+ 
+ ```ruby
+ $ sh/c
+ # ensure we have a cluster with 3 nodes
+ ClusterController.start_cluster
+ # the test_client method is just sugar for local development
+ client = Etcd::Client.test_client
+ 
+ # your block receives the value, key and info of the change you are observing
+ client.observe('/foo') do |v, k, info|
+   puts "v: #{v}, k: #{k}, info: #{info}"
+ end
+ 
+ # this will trigger the observer
+ client.set("foo", "bar")
+ # let's kill the leader of the cluster to demonstrate the re-watching feature
+ ClusterController.kill_node(client.cluster.leader.name)
+ # still triggering the observer!
+ client.set("foo", "bar")
+ ```
+ 
+ 
+ ### Example: Heartbeating
+ 
+ ```ruby
+ $ sh/c
+ # ensure we have a cluster with 3 nodes
+ ClusterController.start_cluster
+ client = Etcd::Client.test_client(:heartbeat_freq => 5)
+ 
+ # your block receives the value, key and info of the change you are observing
+ client.observe('/foo') do |v, k, info|
+   puts "v: #{v}, k: #{k}, info: #{info}"
+ end
+ 
+ ### START A NEW console with the $ `sh/c` helper
+ client = Etcd::Client.test_client
+ # this will trigger the observer in the first console
+ client.set("foo", "bar")
+ # let's kill the leader of the cluster to demonstrate re-watching && heartbeating for all active clients
+ ClusterController.kill_node(client.cluster.leader.name)
+ # still triggering the observer in the first console
+ # be aware: you might lose some changes within the 5-second heartbeating window
+ client.set("foo", "bar")
+ ```
 
- This is handled completely transparently to you.
 
- Watches are a special case, since they use long polling, they will break when the leader goes down. Observers will attempt to reestablish their watches with the new leader.
 
 # Changelog & versioning
 
 Check out the [releases on GitHub](https://github.com/iconara/etcd-rb/releases). Version numbering follows the [semantic versioning](http://semver.org/) scheme.
 
+ 
 # How to contribute
 
+ 
 Fork the repository, make your changes in a topic branch that branches off from the right place in the history (HEAD isn't necessarily always right), make your changes and finally submit a pull request.
 
 Follow the style of the existing code, make sure that existing tests pass, and that everything new has good test coverage. Put some effort into writing clear and concise commit messages, and write a good pull request description.
@@ -0,0 +1,109 @@
+ module Etcd
+   class Client
+ 
+     # @param options [Hash]
+     # @option options [Array] :uris (['http://127.0.0.1:4001']) seed URIs with etcd cluster nodes
+     # @option options [Float] :heartbeat_freq (0.0) check frequency for the leader status (in seconds);
+     #   heartbeating starts only for non-zero values
+     def initialize(options={})
+       @observers      = {}
+       @seed_uris      = options[:uris] || ['http://127.0.0.1:4001']
+       @heartbeat_freq = options[:heartbeat_freq].to_f
+       http_client.redirect_uri_callback = method(:handle_redirected)
+     end
+ 
+     # Create a new client and connect it to the etcd cluster.
+     #
+     # This method is the preferred way to create a new client, and is the
+     # equivalent of `Client.new(options).connect`. See {#initialize} and
+     # {#connect} for options and details.
+     #
+     # @see #initialize
+     # @see #connect
+     def self.connect(options={})
+       self.new(options).connect
+     end
+ 
+     # Connects to the etcd cluster
+     #
+     # @see #update_cluster
+     def connect
+       update_cluster
+       start_heartbeat_if_needed
+       self
+     end
+ 
+     # Creates a Cluster instance from `@seed_uris`
+     # and stores the cluster leader information
+     def update_cluster
+       logger.debug("update_cluster: enter")
+       begin
+         @cluster = Etcd::Cluster.init_from_uris(*seed_uris)
+         @leader  = @cluster.leader
+         @status  = :up
+         logger.debug("update_cluster: after success")
+         refresh_observers
+         @cluster
+       rescue AllNodesDownError => e
+         logger.debug("update_cluster: failed")
+         raise e
+       end
+     end
+ 
+     # Slightly magic accessor:
+     # reinitializes the leader && cluster if needed
+     def leader
+       @leader ||= cluster && cluster.leader || update_cluster && self.leader
+     end
+ 
+     def leader_uri
+       leader && leader.etcd
+     end
+ 
+     def start_heartbeat_if_needed
+       logger.debug("client - starting heartbeat")
+       @heartbeat = Etcd::Heartbeat.new(self, @heartbeat_freq)
+       @heartbeat.start_heartbeat_if_needed
+     end
+ 
+     # Only happens on an attempted write to a follower node in the cluster,
+     # which means the leader has changed since the last update.
+     # Solution: just get a fresh cluster status.
+     def handle_redirected(uri, response)
+       update_cluster
+       http_client.default_redirect_uri_callback(uri, response)
+     end
+ 
+     private
+ 
+     # :uri and :request_data are the only methods calling the :leader method,
+     # so they both need to handle the case of a missing leader in the cluster
+     def uri(key, action=S_KEYS)
+       raise AllNodesDownError unless leader
+       key = "/#{key}" unless key.start_with?(S_SLASH)
+       "#{leader_uri}/v1/#{action}#{key}"
+     end
+ 
+     def request_data(method, uri, args={})
+       logger.debug("request_data: #{method} - #{uri} #{args.inspect}")
+       begin
+         super
+       rescue Errno::ECONNREFUSED, HTTPClient::TimeoutError => e
+         logger.debug("request_data: re-election handling")
+         old_leader_uri = @leader.etcd
+         update_cluster
+         if @leader
+           uri = uri.gsub(old_leader_uri, @leader.etcd)
+           retry
+         else
+           raise AllNodesDownError
+         end
+       end
+     end
+ 
+   end
+ end
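
Taken together, a minimal sketch of the connection and failover flow implemented above (assumes a locally running cluster):

```ruby
client = Etcd::Client.connect(uris: ['http://127.0.0.1:4001'])

client.leader     # memoized; re-resolved via update_cluster after redirects or failures
client.leader_uri # every request URI is built against this

# if the leader dies mid-request, request_data rescues the connection error,
# refreshes the cluster, rewrites the URI to the new leader and retries
client.set('/foo', 'bar')
```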
@@ -0,0 +1,60 @@
+ module Etcd
+   class Client
+ 
+     # Sets up a continuous watch of a key or prefix.
+     #
+     # This method works like {#watch} (which is used behind the scenes), but
+     # will re-watch the key or prefix after receiving a change notification.
+     #
+     # When re-watching, the index of the previous change notification is used,
+     # so no subsequent changes will be lost while a change is being processed.
+     #
+     # Unlike {#watch} this method is asynchronous. The watch handler runs in a
+     # separate thread (currently a new thread is created for each invocation,
+     # keep this in mind if you need to watch many different keys), and can be
+     # cancelled by calling `#cancel` on the returned object.
+     #
+     # Because of implementation details the watch handler thread will not be
+     # stopped directly when you call `#cancel`. The thread will be blocked until
+     # the next change notification (which will be ignored). This will have very
+     # little effect on performance since the thread will not be runnable. Unless
+     # you're creating lots of observers it should not matter. If you want to
+     # make sure you wait for the thread to stop you can call `#join` on the
+     # returned object.
+     #
+     # @example Creating and cancelling an observer
+     #   observer = client.observe('/foo') do |value|
+     #     # do something on changes
+     #   end
+     #   # ...
+     #   observer.cancel
+     #
+     # @return [#cancel, #join] an observer object which you can call cancel and
+     #   join on
+     def observe(prefix, &handler)
+       ob = Observer.new(self, prefix, handler).tap(&:run)
+       @observers[prefix] = ob
+       ob
+     end
+ 
+     def observers_overview
+       observers.map do |_, observer|
+         observer.pp_status
+       end
+     end
+ 
+     def refresh_observers_if_needed
+       refresh_observers if observers.values.any? { |x| not x.status }
+     end
+ 
+     # Re-initiates watches after leader election
+     def refresh_observers
+       logger.debug("refresh_observers: enter")
+       observers.each do |_, observer|
+         observer.rerun unless observer.status
+       end
+     end
+ 
+   end
+ end
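
A short sketch of the cancellation semantics described in the comment above (the key and handler are made up):

```ruby
observer = client.observe('/jobs') do |value, key, info|
  puts "job update: #{key} => #{value}"
end

observer.cancel # the handler thread stays blocked until the next (ignored) change
observer.join   # optionally wait until the thread has actually stopped
```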
@@ -0,0 +1,186 @@
+ module Etcd
+   class Client
+ 
+     # Sets the value of a key.
+     #
+     # Accepts an optional `:ttl` which is the number of seconds that the key
+     # should live before being automatically deleted.
+     #
+     # @param key [String] the key to set
+     # @param value [String] the value to set
+     # @param options [Hash]
+     # @option options [Fixnum] :ttl (nil) an optional time to live (in seconds)
+     #   for the key
+     # @return [String] the previous value (if any)
+     def set(key, value, options={})
+       body = {:value => value}
+       body[:ttl] = options[:ttl] if options[:ttl]
+       data = request_data(:post, key_uri(key), body: body)
+       data[S_PREV_VALUE]
+     end
+ 
+     # Gets the value or values for a key.
+     #
+     # If the key represents a directory with direct descendants (e.g. "/foo" for
+     # "/foo/bar") a hash of keys and values will be returned.
+     #
+     # @param key [String] the key or prefix to retrieve
+     # @return [String, Hash] the value for the key, or a hash of keys and values
+     #   when the key is a prefix
+     def get(key)
+       data = request_data(:get, key_uri(key))
+       return nil unless data
+       if data.is_a?(Array)
+         data.each_with_object({}) do |e, acc|
+           acc[e[S_KEY]] = e[S_VALUE]
+         end
+       else
+         data[S_VALUE]
+       end
+     end
+ 
+     # Atomically sets the value for a key if the current value for the key
+     # matches the specified expected value.
+     #
+     # Returns `true` when the operation succeeds, i.e. when the specified
+     # expected value matches the current value. Returns `false` otherwise.
+     #
+     # Accepts an optional `:ttl` which is the number of seconds that the key
+     # should live before being automatically deleted.
+     #
+     # @param key [String] the key to set
+     # @param value [String] the value to set
+     # @param expected_value [String] the value to compare to the current value
+     # @param options [Hash]
+     # @option options [Fixnum] :ttl (nil) an optional time to live (in seconds)
+     #   for the key
+     # @return [true, false] whether or not the operation succeeded
+     def update(key, value, expected_value, options={})
+       body = {:value => value, :prevValue => expected_value}
+       body[:ttl] = options[:ttl] if options[:ttl]
+       data = request_data(:post, key_uri(key), body: body)
+       !!data
+     end
+ 
+     # Removes a key and its value.
+     #
+     # The previous value is returned, or `nil` if the key did not exist.
+     #
+     # @param key [String] the key to remove
+     # @return [String] the previous value, if any
+     def delete(key)
+       data = request_data(:delete, key_uri(key))
+       return nil unless data
+       data[S_PREV_VALUE]
+     end
+ 
+     # Returns true if the specified key exists.
+     #
+     # This is a convenience method and equivalent to calling {#get} and checking
+     # if the value is `nil`.
+     #
+     # @return [true, false] whether or not the specified key exists
+     def exists?(key)
+       !!get(key)
+     end
+ 
+     # Returns info about a key, such as TTL, expiration and index.
+     #
+     # For keys with values the returned hash will include `:key`, `:value` and
+     # `:index`. Additionally for keys with a TTL set there will be a `:ttl` and
+     # `:expiration` (as a UTC `Time`).
+     #
+     # For keys that represent directories with no direct descendants (e.g. "/foo"
+     # for "/foo/bar/baz") the `:dir` key will have the value `true`.
+     #
+     # For keys that represent directories with direct descendants (e.g. "/foo"
+     # for "/foo/bar") a hash of keys and info will be returned.
+     #
+     # @param key [String] the key or prefix to retrieve
+     # @return [Hash] a hash with info about the key; the exact contents depend
+     #   on what kind of key it is
+     def info(key)
+       data = request_data(:get, uri(key))
+       return nil unless data
+       if data.is_a?(Array)
+         data.each_with_object({}) do |d, acc|
+           info = extract_info(d)
+           info.delete(:action)
+           acc[info[:key]] = info
+         end
+       else
+         info = extract_info(data)
+         info.delete(:action)
+         info
+       end
+     end
+ 
+     # Watches a key or prefix and calls the given block when it changes.
+     #
+     # This method will block until the server replies. There is no way to cancel
+     # the call.
+     #
+     # The parameters to the block are the value, the key and a hash of
+     # additional info. The info will contain the `:action` that caused the
+     # change (`:set`, `:delete` etc.), the `:key`, the `:value`, the `:index`,
+     # `:new_key` with the value `true` when a new key was created below the
+     # watched prefix, `:previous_value`, if any, `:ttl` and `:expiration` if
+     # applicable.
+     #
+     # The reason why the block parameters are in the order `value`, `key` instead
+     # of `key`, `value` is because you almost always want to get the new value
+     # when you watch, but not always the key, and most often not the info. With
+     # this order you can leave out the parameters you don't need.
+     #
+     # @param prefix [String] the key or prefix to watch
+     # @param options [Hash]
+     # @option options [Fixnum] :index (nil) the index to start watching from
+     # @yieldparam [String] value the value of the key that changed
+     # @yieldparam [String] key the key that changed
+     # @yieldparam [Hash] info the info for the key that changed
+     # @return [Object] the result of the given block
+     def watch(prefix, options={})
+       if options[:index]
+         parameters = {:index => options[:index]}
+         data = request_data(:post, watch_uri(prefix), query: parameters)
+       else
+         data = request_data(:get, watch_uri(prefix), query: {})
+       end
+ 
+       info = extract_info(data)
+       yield info[:value], info[:key], info
+     end
+ 
+     def key_uri(key)
+       uri(key, S_KEYS)
+     end
+ 
+     def watch_uri(key)
+       uri(key, S_WATCH)
+     end
+ 
+     private
+ 
+     def extract_info(data)
+       info = {
+         :key   => data[S_KEY],
+         :value => data[S_VALUE],
+         :index => data[S_INDEX],
+       }
+       expiration_s   = data[S_EXPIRATION]
+       ttl            = data[S_TTL]
+       previous_value = data[S_PREV_VALUE]
+       action_s       = data[S_ACTION]
+       info[:expiration]     = Time.iso8601(expiration_s) if expiration_s
+       info[:ttl]            = ttl if ttl
+       info[:new_key]        = data[S_NEW_KEY] if data.include?(S_NEW_KEY)
+       info[:dir]            = data[S_DIR] if data.include?(S_DIR)
+       info[:previous_value] = previous_value if previous_value
+       info[:action]         = action_s.downcase.to_sym if action_s
+       info
+     end
+ 
+   end
+ end
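
Finally, a short sketch tying the key operations above together (the return values are illustrative):

```ruby
client.set('/foo', 'bar', ttl: 30) # => nil (no previous value)
client.exists?('/foo')             # => true
client.info('/foo')                # => {:key => "/foo", :value => "bar", :index => ..., :ttl => 30, :expiration => ...}
client.delete('/foo')              # => "bar" (the previous value)
```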