rloss 0.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 032f5b4b44b83ca04abfa4d14381adbbc0e52565
+   data.tar.gz: 751b6adbda48d547a04c516f7e123f6f3cf1ed1d
+ SHA512:
+   metadata.gz: 3e780da641248b462a9684536ccc82843821442471335efcd159cb62d429abd1c5bfd85831ef2baa76b4cbec704b46d5963f303d9044f232d3cdc4fad7fa1363
+   data.tar.gz: d0cde7f4a013edd6b65adbedf0af855f76baf161c30a6a93a55d3258727552653256b44c7cf488cb7b8cdfb6f49e919046768917fff9afcfa0c69678c29cbf7b
@@ -0,0 +1,19 @@
+ *.gem
+ *.rbc
+ .bundle
+ .config
+ .yardoc
+ Gemfile.lock
+ InstalledFiles
+ _yardoc
+ coverage
+ doc/
+ lib/bundler/man
+ pkg
+ rdoc
+ spec/reports
+ test/tmp
+ test/version_tmp
+ tmp
+ .DS_Store
+ log/*
data/.rspec ADDED
@@ -0,0 +1 @@
+ --fail-fast --backtrace --require spec_helper
@@ -0,0 +1,3 @@
+ before_install: sudo apt-get install libzmq3-dev
+ script: bundle exec rake
+ language: ruby
@@ -0,0 +1,3 @@
+ --markup=markdown
+ --charset=utf-8
+ lib/**/*.rb
data/Gemfile ADDED
@@ -0,0 +1,13 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in floss.gemspec
+ gemspec
+
+ gem 'celluloid', github: 'celluloid/celluloid'
+ gem 'celluloid-io', github: 'celluloid/celluloid-io'
+ gem 'celluloid-zmq', github: 'celluloid/celluloid-zmq'
+
+ group :docs do
+   gem 'yard'
+   gem 'redcarpet'
+ end
data/LICENSE ADDED
@@ -0,0 +1,22 @@
+ Copyright (c) 2013 Alexander Flatter
+
+ MIT License
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -0,0 +1,102 @@
+ [![Build Status](https://secure.travis-ci.org/celluloid/floss.png?branch=master)](http://travis-ci.org/celluloid/floss)
+
+ # Floss
+
+ An implementation of the [Raft consensus algorithm](https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf) on top of Celluloid.
+
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'floss'
+ ```
+
+ And then execute:
+
+ ```bash
+ $ bundle
+ ```
+
+ Or install it yourself as:
+
+ ```bash
+ $ gem install floss
+ ```
+
+ ## Usage
+
+ We're going to implement a distributed counter. While not very useful, it's a good demonstration of what you can
+ do with this library. Let's start with the counter service. It accepts three commands: `get`, `reset`, and `increase`.
+ The first returns the current count, the second resets the count to zero, and the third increases the current count,
+ optionally by a given amount.
+
+ ```ruby
+ class Counter
+   attr_accessor :count
+
+   def initialize
+     self.count = 0
+   end
+
+   def get
+     count
+   end
+
+   def reset
+     self.count = 0
+   end
+
+   def increase(amount = 1)
+     self.count += amount
+   end
+ end
+ ```
+
+ To increase the reliability of your counter, you decide to distribute it across multiple machines. This is where `floss`
+ comes into play. To simplify this demonstration, we're going to start multiple nodes in the same process.
+
+ ```ruby
+ addresses = [10001, 10002, 10003].map { |port| "tcp://127.0.0.1:#{port}" }
+
+ $nodes = addresses.size.times.map do |i|
+   combination = addresses.rotate(i)
+   options = {id: combination.first, peers: combination[1..-1]}
+   Floss::Proxy.new(Counter.new, options)
+ end
+
+ # Give your nodes some time to start up.
+ $nodes.each(&:wait_until_ready)
+ ```
+
+ Now we're ready to play with our distributed counter.
+
+ ```ruby
+ def random_node; $nodes.sample; end
+
+ random_node.get      # => 0
+ random_node.increase # => 1
+ random_node.get      # => 1
+ ```
+
+ That was easy, wasn't it? Let's see what happens if the cluster is damaged.
+
+ ```ruby
+ # Terminate a random node in the cluster.
+ doomed_node = $nodes.delete(random_node)
+ doomed_node_id = doomed_node.id
+ doomed_node.terminate
+
+ random_node.increase # => 2
+ ```
+
+ Your cluster still works. If you killed another node, executing a command would raise an error, because too few
+ nodes would be available to guarantee your system's consistency.
+
+ ## Contributing
+
+ 1. Fork it
+ 2. Create your feature branch (`git checkout -b my-new-feature`)
+ 3. Commit your changes (`git commit -am 'Added some feature'`)
+ 4. Push to the branch (`git push origin my-new-feature`)
+ 5. Create a new Pull Request
@@ -0,0 +1,8 @@
+ #!/usr/bin/env rake
+ require "bundler/gem_tasks"
+ require "rspec/core/rake_task"
+
+ RSpec::Core::RakeTask.new
+
+ task :default => :spec
+ task :test => :spec
@@ -0,0 +1,36 @@
+ $: << File.expand_path('../../lib', __FILE__)
+
+ require 'floss/test_helper'
+ require 'floss/proxy'
+
+ include Celluloid::Logger
+
+ class FSM
+   def initialize
+     @content = Hash.new
+   end
+
+   def set(key, value)
+     @content[key] = value
+   end
+
+   def get(key)
+     @content[key]
+   end
+ end
+
+ CLUSTER_SIZE = 5
+
+ ids = CLUSTER_SIZE.times.map do |i|
+   port = 50000 + i
+   "tcp://127.0.0.1:#{port}"
+ end
+
+ proxies = Floss::TestHelper.cluster(ids) do |id, peers|
+   Floss::Proxy.new(FSM.new, id: id, peers: peers)
+ end
+
+ 100.times do |i|
+   proxies.sample.set(:foo, i)
+   raise "fail" unless proxies.sample.get(:foo) == i
+ end
@@ -0,0 +1,7 @@
+ require 'celluloid'
+ require "floss/version"
+
+ module Floss
+   class Error < StandardError; end
+   class TimeoutError < Error; end
+ end
@@ -0,0 +1,23 @@
+ require 'celluloid'
+ require 'floss'
+
+ class Floss::CountDownLatch
+   # @return [Fixnum] Current count.
+   attr_reader :count
+
+   def initialize(count)
+     @count = count
+     @condition = Celluloid::Condition.new
+   end
+
+   def signal
+     return if @count == 0
+
+     @count -= 1
+     @condition.signal if @count == 0
+   end
+
+   def wait
+     @condition.wait
+   end
+ end
@@ -0,0 +1,53 @@
+ require 'celluloid'
+ require 'floss'
+
+ # Based on Celluloid::Condition.
+ class Floss::Latch
+   SignalConditionRequest = Celluloid::SignalConditionRequest
+   class LatchError < Celluloid::Error; end
+
+   def initialize
+     @tasks = []
+     @mutex = Mutex.new
+     @ready = false
+     @values = nil
+   end
+
+   def ready?
+     @mutex.synchronize { @ready }
+   end
+
+   def wait
+     raise LatchError, "cannot wait for signals while exclusive" if Celluloid.exclusive?
+
+     task = Thread.current[:celluloid_actor] ? Celluloid::Task.current : Thread.current
+     waiter = Celluloid::Condition::Waiter.new(self, task, Celluloid.mailbox)
+
+     ready = @mutex.synchronize do
+       ready = @ready
+       @tasks << waiter unless ready
+       ready
+     end
+
+     values = if ready
+       @values
+     else
+       values = Celluloid.suspend(:condwait, waiter)
+       raise values if values.is_a?(LatchError)
+       values
+     end
+
+     values.size == 1 ? values.first : values
+   end
+
+   def signal(*values)
+     @mutex.synchronize do
+       return false if @ready
+
+       @ready = true
+       @values = values
+
+       @tasks.each { |waiter| waiter << SignalConditionRequest.new(waiter.task, values) }
+     end
+   end
+ end
@@ -0,0 +1,69 @@
+ require 'forwardable'
+ require 'floss'
+
+ # See Section 5.3.
+ class Floss::Log
+   include Celluloid
+   extend Forwardable
+
+   class Entry
+     # @return [Fixnum] When the entry was received by the leader.
+     attr_accessor :term
+
+     # @return [Object] A replicated state machine command.
+     attr_accessor :command
+
+     def initialize(command, term)
+       raise ArgumentError, "Term must be a Fixnum." unless term.is_a?(Fixnum)
+
+       self.term = term
+       self.command = command
+     end
+   end
+
+   def initialize(options={})
+     raise NotImplementedError
+   end
+
+   def []=(k,v)
+     raise NotImplementedError
+   end
+
+   def empty?
+     raise NotImplementedError
+   end
+
+   # @param [Array] The entries to append to the log.
+   def append(new_entries)
+     raise NotImplementedError
+   end
+
+   def starting_with(index)
+     raise NotImplementedError
+   end
+
+   # Returns the last index in the log or nil if the log is empty.
+   def last_index
+     raise NotImplementedError
+   end
+
+   # Returns the term of the last entry in the log or nil if the log is empty.
+   def last_term
+     raise NotImplementedError
+   end
+
+   def complete?(other_term, other_index)
+     # Special case: Accept the first entry if the log is empty.
+     return empty? if other_term.nil? && other_index.nil?
+
+     (other_term > last_term) || (other_term == last_term && other_index >= last_index)
+   end
+
+   def validate(index, term)
+     raise NotImplementedError
+   end
+
+   def remove_starting_with(index)
+     raise NotImplementedError
+   end
+ end
@@ -0,0 +1,55 @@
+ require 'forwardable'
+ require 'floss'
+ require 'floss/log'
+
+ # See Section 5.3.
+ class Floss::Log
+   class Simple < Floss::Log
+     include Celluloid
+     extend Forwardable
+
+     def_delegators :entries, :[], :empty?
+
+     # @return [Array<Entry>] The log's entries.
+     attr_accessor :entries
+
+     def initialize(options={})
+       self.entries = []
+     end
+
+     # @param [Array] The entries to append to the log.
+     def append(new_entries)
+       raise ArgumentError, 'The passed array is empty.' if new_entries.empty?
+
+       entries.concat(new_entries)
+       last_index
+     end
+
+     def starting_with(index)
+       entries[index..-1]
+     end
+
+     # Returns the last index in the log or nil if the log is empty.
+     def last_index
+       entries.any? ? entries.size - 1 : nil
+     end
+
+     # Returns the term of the last entry in the log or nil if the log is empty.
+     def last_term
+       entry = entries.last
+       entry ? entry.term : nil
+     end
+
+     def validate(index, term)
+       # Special case: Accept the first entry if the log is empty.
+       return empty? if index.nil? && term.nil?
+
+       entry = entries[index]
+       entry && entry.term == term
+     end
+
+     def remove_starting_with(index)
+       entries.slice!(index..-1)
+     end
+   end
+ end
@@ -0,0 +1,148 @@
+ require 'floss'
+ require 'floss/count_down_latch'
+
+ # Used by the leader to manage the replicated log.
+ class Floss::LogReplicator
+   extend Forwardable
+   include Celluloid
+
+   class IndexWaiter
+     class Waiter
+       attr_reader :peer
+       attr_reader :index
+       attr_reader :condition
+
+       def initialize(peer, index, condition)
+         @peer = peer
+         @index = index
+         @condition = condition
+       end
+     end
+
+     def initialize
+       @waiters = []
+     end
+
+     def register(peer, index, condition)
+       # This class is always used to wait for replication of a log entry, so we're failing fast here:
+       # Every log entry has an index, thus nil is disallowed.
+       unless index.is_a?(Fixnum) && index >= 0
+         raise ArgumentError, 'index must be a Fixnum and >= 0'
+       end
+
+       @waiters << Waiter.new(peer, index, condition)
+     end
+
+     def signal(peer, index)
+       return unless index # There's nothing to do if no index is given, see #register.
+
+       # Array#delete_if returns the receiver, not the removed elements, so
+       # partition out the matching waiters and signal only those.
+       matched, remaining = @waiters.partition do |waiter|
+         waiter.peer == peer && waiter.index <= index
+       end
+
+       @waiters = remaining
+       matched.map(&:condition).each(&:signal)
+     end
+   end
+
+   finalizer :finalize
+
+   def_delegators :node, :peers, :log
+
+   # TODO: Pass those to the constructor: They don't change during a term.
+   def_delegators :node, :cluster_quorum, :broadcast_time, :current_term
+
+   # @return [Floss::Node]
+   attr_accessor :node
+
+   def initialize(node)
+     @node = node
+
+     # A helper for waiting on a certain index to be written to a peer.
+     @write_waiters = IndexWaiter.new
+
+     # Stores the index of the last log entry that a peer agrees with.
+     @write_indices = {}
+
+     # Keeps Celluloid::Timer instances that fire periodically for each peer to trigger replication.
+     @pacemakers = {}
+
+     initial_write_index = log.last_index
+
+     peers.each do |peer|
+       @write_indices[peer] = initial_write_index
+       @pacemakers[peer] = after(broadcast_time) { replicate(peer) }
+     end
+   end
+
+   def append(entry)
+     pause
+     index = log.append([entry])
+
+     quorum = Floss::CountDownLatch.new(cluster_quorum)
+     peers.each { |peer| signal_on_write(peer, index, quorum) }
+
+     resume
+     quorum.wait
+
+     # TODO: Ensure there's at least one write in the leader's current term.
+     @commit_index = index
+   end
+
+   def signal_on_write(peer, index, condition)
+     @write_waiters.register(peer, index, condition)
+   end
+
+   def pause
+     @pacemakers.values.each(&:cancel)
+   end
+
+   def resume
+     @pacemakers.values.each(&:fire)
+   end
+
+   def replicate(peer)
+     write_index = @write_indices[peer]
+     response = peer.append_entries(construct_payload(write_index))
+
+     if response[:success]
+       # nil if the log is still empty, last replicated log index otherwise
+       last_index = log.last_index
+
+       @write_indices[peer] = last_index
+       @write_waiters.signal(peer, last_index)
+     else
+       # Walk back in the peer's history.
+       @write_indices[peer] = write_index > 0 ? write_index - 1 : nil if write_index
+     end
+
+     @pacemakers[peer].reset
+   end
+
+   # Constructs payload for an AppendEntries RPC given a peer's write index.
+   # All entries **after** the given index will be included in the payload.
+   def construct_payload(index)
+     if index
+       prev_index = index
+       prev_term = log[prev_index].term
+       entries = log.starting_with(index + 1)
+     else
+       prev_index = nil
+       prev_term = nil
+       entries = log.starting_with(0)
+     end
+
+     Hash[
+       leader_id: node.id,
+       term: current_term,
+       prev_log_index: prev_index,
+       prev_log_term: prev_term,
+       commit_index: @commit_index,
+       entries: entries
+     ]
+   end
+
+   def finalize
+     pause
+   end
+ end
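
The `Floss::CountDownLatch` added in this version wraps `Celluloid::Condition`, so it only works inside Celluloid actors. The same idea can be sketched with nothing but Ruby's standard library; the class below is a hypothetical, standalone analogue for illustration and is not part of the gem:

```ruby
require 'thread'

# A minimal, thread-based analogue of Floss::CountDownLatch.
# Hypothetical sketch: uses Mutex/ConditionVariable instead of
# Celluloid::Condition, so it needs no actor system.
class SimpleCountDownLatch
  attr_reader :count

  def initialize(count)
    @count = count
    @mutex = Mutex.new
    @cond  = ConditionVariable.new
  end

  # Decrement the count; wake all waiters once it reaches zero.
  # Extra signals after zero are no-ops, mirroring the gem's guard.
  def signal
    @mutex.synchronize do
      next if @count.zero?
      @count -= 1
      @cond.broadcast if @count.zero?
    end
  end

  # Block until the count reaches zero.
  def wait
    @mutex.synchronize do
      @cond.wait(@mutex) until @count.zero?
    end
  end
end

latch = SimpleCountDownLatch.new(3)
workers = 3.times.map { Thread.new { latch.signal } }
latch.wait
workers.each(&:join)
```

This is roughly how the leader's `append` uses its latch: it creates one sized to the cluster quorum, registers a waiter per peer, and blocks until enough peers have acknowledged the write.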