evinrude 0.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (62)
  1. checksums.yaml +7 -0
  2. data/.editorconfig +23 -0
  3. data/.gitignore +6 -0
  4. data/.yardopts +1 -0
  5. data/CODE_OF_CONDUCT.md +49 -0
  6. data/CONTRIBUTING.md +10 -0
  7. data/LICENCE +674 -0
  8. data/README.md +410 -0
  9. data/evinrude.gemspec +42 -0
  10. data/lib/evinrude.rb +1233 -0
  11. data/lib/evinrude/backoff.rb +19 -0
  12. data/lib/evinrude/cluster_configuration.rb +162 -0
  13. data/lib/evinrude/config_change_queue_entry.rb +19 -0
  14. data/lib/evinrude/config_change_queue_entry/add_node.rb +13 -0
  15. data/lib/evinrude/config_change_queue_entry/remove_node.rb +14 -0
  16. data/lib/evinrude/freedom_patches/range.rb +5 -0
  17. data/lib/evinrude/log.rb +102 -0
  18. data/lib/evinrude/log_entries.rb +3 -0
  19. data/lib/evinrude/log_entry.rb +13 -0
  20. data/lib/evinrude/log_entry/cluster_configuration.rb +15 -0
  21. data/lib/evinrude/log_entry/null.rb +6 -0
  22. data/lib/evinrude/log_entry/state_machine_command.rb +13 -0
  23. data/lib/evinrude/logging_helpers.rb +40 -0
  24. data/lib/evinrude/message.rb +19 -0
  25. data/lib/evinrude/message/append_entries_reply.rb +13 -0
  26. data/lib/evinrude/message/append_entries_request.rb +18 -0
  27. data/lib/evinrude/message/command_reply.rb +13 -0
  28. data/lib/evinrude/message/command_request.rb +18 -0
  29. data/lib/evinrude/message/install_snapshot_reply.rb +13 -0
  30. data/lib/evinrude/message/install_snapshot_request.rb +18 -0
  31. data/lib/evinrude/message/join_reply.rb +13 -0
  32. data/lib/evinrude/message/join_request.rb +18 -0
  33. data/lib/evinrude/message/node_removal_reply.rb +13 -0
  34. data/lib/evinrude/message/node_removal_request.rb +18 -0
  35. data/lib/evinrude/message/read_reply.rb +13 -0
  36. data/lib/evinrude/message/read_request.rb +18 -0
  37. data/lib/evinrude/message/vote_reply.rb +13 -0
  38. data/lib/evinrude/message/vote_request.rb +18 -0
  39. data/lib/evinrude/messages.rb +14 -0
  40. data/lib/evinrude/metrics.rb +50 -0
  41. data/lib/evinrude/network.rb +69 -0
  42. data/lib/evinrude/network/connection.rb +144 -0
  43. data/lib/evinrude/network/protocol.rb +69 -0
  44. data/lib/evinrude/node_info.rb +35 -0
  45. data/lib/evinrude/peer.rb +50 -0
  46. data/lib/evinrude/resolver.rb +96 -0
  47. data/lib/evinrude/snapshot.rb +9 -0
  48. data/lib/evinrude/state_machine.rb +15 -0
  49. data/lib/evinrude/state_machine/register.rb +25 -0
  50. data/smoke_tests/001_single_node_cluster.rb +20 -0
  51. data/smoke_tests/002_three_node_cluster.rb +43 -0
  52. data/smoke_tests/003_spill.rb +25 -0
  53. data/smoke_tests/004_stale_read.rb +67 -0
  54. data/smoke_tests/005_sleepy_master.rb +28 -0
  55. data/smoke_tests/006_join_via_follower.rb +26 -0
  56. data/smoke_tests/007_snapshot_madness.rb +97 -0
  57. data/smoke_tests/008_downsizing.rb +43 -0
  58. data/smoke_tests/009_disaster_recovery.rb +46 -0
  59. data/smoke_tests/999_final_smoke_test.rb +279 -0
  60. data/smoke_tests/run +22 -0
  61. data/smoke_tests/smoke_test_helper.rb +199 -0
  62. metadata +318 -0
data/README.md
@@ -0,0 +1,410 @@
+ Evinrude is an opinionated but flexible implementation of the [Raft distributed
+ consensus algorithm](https://raft.github.io/). It is intended for use in any
+ situation where you need to be able to safely and securely achieve consensus
+ regarding the current state of a set of data in Ruby programs.
+
+
+ # Installation
+
+ It's a gem:
+
+     gem install evinrude
+
+ There's also the wonders of [the Gemfile](http://bundler.io):
+
+     gem 'evinrude'
+
+ If you're the sturdy type that likes to run from git:
+
+     rake install
+
+ Or, if you've eschewed the convenience of Rubygems entirely, then you
+ presumably know what to do already.
+
+
+ # Usage
+
+ In order to do its thing, {Evinrude} needs at least two things: the contact
+ details of an existing member of the cluster, and the shared secret key for the
+ cluster. Then, you create your {Evinrude} instance and set it running, like this:
+
+ ```
+ c = Evinrude.new(join_hints: [{ address: "192.0.2.42", port: 31337 }], shared_keys: ["s3kr1t"])
+ c.run
+ ```
+
+ The {Evinrude#run} method does not return in normal operation; you should call
+ it in a separate thread (or use a work supervisor such as
+ [Ultravisor](https://rubygems.org/gems/ultravisor)).
+
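+ For example, a minimal sketch of running the cluster member off in its own
+ thread (supervision and error handling omitted):
+
+ ```
+ c = Evinrude.new(join_hints: [{ address: "192.0.2.42", port: 31337 }], shared_keys: ["s3kr1t"])
+ raft_thread = Thread.new { c.run }
+
+ # `c` can now be used from the main thread
+ ```
+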
+ Once Evinrude is running, interaction with the data in the cluster is
+ straightforward. To cause a change to be made to the data set, you call
+ {Evinrude#command} with a message which describes how the shared state of the
+ cluster should be changed (an application-specific language which it is up to you
+ to define), while to retrieve the currently-agreed state of the data set you
+ call {Evinrude#state}.
+
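+ With the default register state machine (described below), that looks something
+ like this sketch (the return value of {Evinrude#command} is ignored here):
+
+ ```
+ c.command("new contents of the register")   # replicated once a majority commits it
+ c.state                                      # => "new contents of the register"
+ ```
+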
+ By default, the data set that is managed by consensus is a "register" -- a single
+ atomic string value, wherein the single value is the value of the most recent
+ {Evinrude#command} call that has been committed by the cluster.
+
+ Since a single "last-write-wins" string is often not a particularly useful thing
+ to keep coordinated, Evinrude allows you to provide a more useful state machine
+ implementation which matches the data model you're working with.
+
+
+ ## The Machines of State
+
+ Because Evinrude is merely a Raft *engine*, it makes no assumptions about the
+ semantics of the data that is being managed. For this reason, most non-trivial
+ uses of Evinrude will want to provide their own implementation of {Evinrude::StateMachine},
+ and provide it to the {Evinrude.new} call using the `state_machine` keyword
+ argument:
+
+ ```
+ class MyStateMachine < Evinrude::StateMachine
+   # interesting things go here
+ end
+
+ c = Evinrude.new(join_hints: [...], shared_keys: [ ... ], state_machine: MyStateMachine)
+
+ # ...
+ ```
+
+ While the current state of the state machine is what we, as consumers of the
+ replicated data, care about, behind the scenes Raft deals entirely in a log of commands.
+ Each command (along with its arguments) may cause some deterministic change to the
+ internal state variables. Exactly what commands are available, their arguments, and
+ what they do is up to your state machine implementation.
+
+ Thus, the core method in your state machine implementation is
+ {Evinrude::StateMachine#process_command}. Similar to {Evinrude#command}, this
+ method accepts a string of arbitrary data, which it is the responsibility of
+ your state machine to decode and action. In fact, the commands that your state
+ machine receives are the exact same ones that are provided to
+ {Evinrude#command}. The only difference is that the only commands your state
+ machine will receive are those that the cluster as a whole has committed.
+
+ The other side, of course, is retrieving the current state. That is handled by
+ {Evinrude::StateMachine#current_state}. This method, which takes no arguments, can
+ return an arbitrary Ruby object that represents the current state of the
+ machine.
+
+ You don't need to worry about concurrency issues inside your state machine, by the way;
+ all calls to all methods on the state machine instance will be serialized via mutex.
+
+ It is *crucially* important that your state machine take no input from anywhere
+ other than calls to `#command`, and do nothing but modify internal state
+ variables. If you start doing things like querying data in the outside world,
+ or interacting with anything outside the state machine in response to commands,
+ you will obliterate the guarantees of the replicated state machine model, and
+ all heck will, sooner or later, break loose.
+
+ One performance problem in a Raft state machine is the need to replay every log
+ message since the dawn of time in order to reproduce the current state when
+ (re)starting. Since that can take a long time (and involve a lot of storage
+ and/or network traffic), Raft has the concept of *snapshots*. These are string
+ representations of the entire current state of the machine. Thus, your state
+ machine has to implement {Evinrude::StateMachine#snapshot}, which serializes
+ the current state into a string. To load the state, a previously obtained
+ snapshot string will be passed to {Evinrude::StateMachine#initialize} in the
+ `snapshot` keyword argument.
+
+ ... and that is the entire state machine interface.
+
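+ To make that concrete, here is a sketch of a small key/value state machine. The
+ JSON command format here is invented purely for illustration; only the four
+ methods described above are part of the interface.
+
+ ```
+ require "json"
+
+ class KVStateMachine < Evinrude::StateMachine
+   # Rebuild state from a snapshot string if one was previously taken.
+   def initialize(snapshot: nil)
+     @data = (snapshot.nil? || snapshot.empty?) ? {} : JSON.parse(snapshot)
+   end
+
+   # Apply a committed command; must be deterministic and touch nothing
+   # outside the internal state.
+   def process_command(command)
+     op = JSON.parse(command)
+     case op["op"]
+     when "set"    then @data[op["key"]] = op["value"]
+     when "delete" then @data.delete(op["key"])
+     end
+   end
+
+   # Arbitrary Ruby object representing the current state.
+   def current_state
+     @data.dup
+   end
+
+   # Serialise the entire current state into a string.
+   def snapshot
+     JSON.generate(@data)
+   end
+ end
+ ```
+
+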
+ ## Persistent Storage
+
+ Whilst for toy systems you *can* get away with just storing everything in memory,
+ it's generally not considered good form for anything which you want to survive
+ long-term. For that reason, you'll generally want to specify the `storage_dir`
+ keyword argument, specifying a directory which is writable by the user running
+ the process that is creating the object.
+
+ If there is existing state in that directory, it will be loaded before Evinrude
+ attempts to re-join the cluster.
+
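+ For example (the path here is purely illustrative):
+
+ ```
+ c = Evinrude.new(join_hints: [...], shared_keys: ["s3kr1t"], storage_dir: "/var/lib/evinrude")
+ ```
+
+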
+ ## The Madness of Networks
+
+ By default, Evinrude will listen on the `ANY` address, on a randomly assigned
+ high port, and will advertise itself as being available on the first sensible-looking
+ (ie non-loopback/link-local) address on the listening port.
+
+ In a sane and sensible network, that would be sufficient (with the possible
+ exception of the "listen on the random port" bit -- that can get annoying for
+ discovery purposes). However, so very, *very* few networks are sane and sensible,
+ and so there are knobs to tweak.
+
+ First off, if you need to control what address/port Evinrude listens on, you do
+ that via the `listen` keyword argument:
+
+ ```
+ Evinrude.new(listen: { address: "192.0.2.42", port: 31337 }, ...)
+ ```
+
+ Both `address` and `port` are optional; if left out, they'll be set to the appropriate
+ default. So you can just control the port to listen on, for instance, by
+ setting `listen: { port: 31337 }` if you like.
+
+ The other half of the network configuration is the *advertisement*. This is
+ needed because sometimes the address that Evinrude thinks it has is not the
+ address that other Evinrude instances must use to talk to it. Anywhere NAT
+ rears its ugly head is a candidate for this -- Docker containers where
+ publishing is in use, for instance, will almost certainly fall foul of this.
+ For this reason, you can override the advertised address and/or port using
+ the `advertise` keyword argument:
+
+ ```
+ Evinrude.new(advertise: { address: "192.0.2.42", port: 31337 })
+ ```
+
+
+ ## Bootstrapping and Joining a Cluster
+
+ A Raft cluster bootstraps itself by having the "first" node recognise that it is
+ all alone in the world, and configure itself as the single node in the cluster.
+ After that, all other new nodes need to be told the location of at least one other
+ node in the cluster. Existing cluster nodes that are restarted can *usually* use
+ the cluster configuration that is stored on disk; however, a node which has been
+ offline while all other cluster nodes have changed addresses may still need to
+ use the join hints to find another node.
+
+ To signal to a node that it is the initial "bootstrap" node, you must explicitly
+ pass `join_hints: nil` to `Evinrude.new`:
+
+ ```
+ # Bootstrap mode
+ c = Evinrude.new(join_hints: nil, ...)
+ ```
+
+ Note that `nil` is *not* the default for `join_hints`; this is for safety, to avoid
+ any sort of configuration error causing havoc.
+
+ All other nodes in the cluster should be provided with the location of at least
+ one existing cluster member via `join_hints`. The usual form of the `join_hints`
+ is an array of one or more of the following entries:
+
+ * A hash containing `:address` and `:port` keys; `:address` can be either an
+   IPv4 or IPv6 literal address, or a hostname which the system is capable of
+   resolving into one or more IPv4 or IPv6 addresses, while `:port` must be
+   an integer representing a valid port number; *or*
+
+ * A string, which will be queried for `SRV` records.
+
+ An example, containing all of these:
+
+ ```
+ c = Evinrude.new(join_hints: [
+       { address: "192.0.2.42", port: 1234 },
+       { address: "2001:db8::42", port: 4321 },
+       { address: "cluster.example.com", port: 31337 },
+       "cluster._evinrude._tcp.example.com"
+     ],
+     ...)
+ ```
+
+ As shown above, you can use all of the different forms together. They'll
+ be resolved and expanded into a big list of addresses as required.
+
+
+ ## Encryption Key Management
+
+ To provide at least a modicum of security, all cluster network communications
+ are encrypted using a symmetric cipher. This requires a common key for
+ encryption and decryption, which you provide in the `shared_keys` keyword
+ argument:
+
+ ```
+ c = Evinrude.new(shared_keys: ["s3krit"], ...)
+ ```
+
+ The keys you use can be arbitrary strings of arbitrary length. Preferably, you
+ want the string to be completely random and have at least 128 bits of entropy.
+ For example, you could use a string of 16 random bytes, encoded in
+ hex: `SecureRandom.hex(16)`. The longer the better, but there's no point
+ having more than 256 bits of entropy, because your keys get hashed to 32 bytes
+ for use in the encryption algorithm.
+
+ As you can see from the above example, `shared_keys` is an *array* of strings,
+ not a single string. This is to facilitate *key rotation*, if you're into that
+ kind of thing.
+
+ Since you don't want to interrupt cluster operation, you can't take down
+ all the nodes simultaneously to change the key. Instead, you do the following,
+ assuming that you are using a secret key `"oldkey"`, and you want to
+ switch to using `"newkey"`:
+
+ 1. Reconfigure each node, one by one, to set `shared_keys: ["oldkey", "newkey"]`
+    (Note the order there is important! `"oldkey"` first, then `"newkey"`)
+
+ 2. When all nodes are running with the new configuration, then go around
+    and reconfigure each node again, to set `shared_keys: ["newkey", "oldkey"]`
+    (Again, *order is important*).
+
+ 3. Finally, once all nodes are running with this second configuration, you
+    can remove `"oldkey"` from the configuration, and restart everything
+    with `shared_keys: ["newkey"]`, which retires the old key entirely.
+
+ This may seem like a lot of fiddling around, which is why you should always
+ use configuration management, which takes care of all the boring fiddling
+ around for you.
+
+ This works because of how Evinrude uses the keys. The first key
+ in the list is the key with which all messages are encrypted. However, any
+ received message can be decrypted with *any* key in the list. Hence, the
+ three-step process:
+
+ 1. While you're doing step 1, everyone is encrypting with `"oldkey"`, so nobody
+    will ever need to use `"newkey"` to decrypt anything, but that's OK.
+
+ 2. While you're doing step 2, some nodes will be encrypting their messages with
+    `"oldkey"` and some will be encrypting with `"newkey"`. But since all the
+    nodes can decrypt anything encrypted with *either* `"oldkey"` *or*
+    `"newkey"` (because that's how they were configured in step 1), there's no
+    problem.
+
+ 3. By the time you start step 3, everyone is encrypting everything with
+    `"newkey"`, so there's no problem with removing `"oldkey"` from the set of
+    shared keys.
+
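+ In configuration terms, the rotation looks like this (a sketch only; each phase
+ is rolled out to every node before the next one starts):
+
+ ```
+ # Phase 1: still encrypting with "oldkey", but able to decrypt "newkey"
+ Evinrude.new(shared_keys: ["oldkey", "newkey"], ...)
+
+ # Phase 2: encrypting with "newkey", still able to decrypt "oldkey"
+ Evinrude.new(shared_keys: ["newkey", "oldkey"], ...)
+
+ # Phase 3: "oldkey" retired entirely
+ Evinrude.new(shared_keys: ["newkey"], ...)
+ ```
+
+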
+ ## Managing and Decommissioning Nodes
+
+ Because Raft works on a "consensus" basis, a majority of nodes must always
+ be available to accept updates and agree on the current state of the cluster.
+ This is true for writes (changes to the cluster state), *as well as for reads*.
+
+ Once a node has joined the cluster, it is considered to be a part of the
+ cluster forever, unless it is explicitly removed. It is not safe for a node to
+ be removed automatically after some period of inactivity, because that node
+ could re-appear at any time and cause issues, including what is known as
+ "split-brain" (where there are two separate operational clusters, both of which
+ believe they know how things should be).
+
+ Evinrude makes some attempts to make the need to manually remove nodes rare. In
+ many Raft implementations, a node is identified by its IP address and port. If
+ that changes, it counts as a new node. When you're using a "dynamic network"
+ system (like most cloud providers), every time a server restarts, it gets a new
+ IP address, which is counted as a new node, and so quickly there are more old, dead
+ nodes than currently living ones, and the cluster completely seizes up.
+
+ In contrast, Evinrude nodes have a name as well as the usual address/port pair. If a node
+ joins (or re-joins) the cluster with a name identical to that of a node already in
+ the cluster configuration, then the old node's address and port are replaced with
+ the address and port of the new one.
+
+ You can set one by hand, using the `node_name` keyword argument (although be
+ *really* sure to make them unique, or all heck will break loose), but if you
+ don't set one by hand, a new node will generate a UUID for its name. If a node
+ loads its state from disk on startup, it will use whatever name was stored on disk.
+
+ Thus, if you have servers backed by persistent storage, you don't have to do
+ anything special: Evinrude will generate a random name on first startup, write
+ its node name out to disk, and then on every restart thereafter the shared cluster
+ configuration will be updated to keep the cluster state clean.
+
+ Even if you don't have persistent storage, as long as you can pass the same
+ node name to the cluster node each time it starts, everything will still be
+ fine: the fresh node will give its new address and port with its existing name,
+ the cluster configuration will be updated, the new node will be sent the
+ existing cluster state, and off it goes.
+
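+ Setting a fixed name by hand is just another keyword argument (the name here is
+ purely illustrative; it only has to be unique within the cluster):
+
+ ```
+ c = Evinrude.new(node_name: "app-server-3", join_hints: [...], shared_keys: ["s3kr1t"])
+ ```
+
+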
+ ### Removing a Node
+
+ All that being said, there *are* times when a cluster node has to be forcibly
+ removed from the cluster. A few of the common cases are:
+
+ 1. **Downsizing**: you were running a cluster of, say, nine nodes (because who
+    *doesn't* want N+4 redundancy?), but a management decree says that for
+    budget reasons, you can now only have five nodes (N+2 ought to be enough for
+    anyone!). In that case, you shut down four of the nodes, but the cluster
+    will need to be told that they've been removed, otherwise as soon as one
+    node crashes, the whole cluster will seize up.
+
+ 2. **Operator error**: somehow (doesn't matter how, we all make mistakes
+    sometimes) an extra node managed to join the cluster. You nuked it before
+    it did any real damage, but the cluster config still thinks that node should
+    be part of the quorum. It needs to be removed before All Heck Breaks Loose.
+
+ 3. **Totally Dynamic Environment**: if your cluster members have *no* state
+    persistence, not even being able to remember their name, nodes will need to
+    gracefully deregister themselves from the cluster when they shut down.
+    **Note**: in this case, nodes that crash and burn without having a chance to
+    gracefully say "I'm outta here" will clog up the cluster, and sooner or
+    later you'll have more ex-nodes than live nodes, leading to eventual
+    Confusion and Delay. Make sure you've got some sort of "garbage collection"
+    background operation running that can identify permanently-dead nodes and
+    remove them from the cluster before they cause downtime.
+
+ In any event, the way to remove a node is straightforward: from any node currently
+ in the cluster, call {Evinrude#remove_node}, passing the node's info:
+
+ ```
+ c = Evinrude.new(...)
+ c.remove_node(Evinrude::NodeInfo.new(address: "2001:db8::42", port: 31337, name: "fred"))
+ ```
+
+ This will notify the cluster leader of the node's departure, and the cluster
+ config will be updated.
+
+ Removing a node requires the cluster to still have consensus (more than half the
+ cluster nodes running) for the new configuration to take effect. This is so the
+ removal can be safe, by doing Raft Trickery to ensure that the removed node
+ can't cause split-brain issues on its way out the door.
+
+
+ ### Emergency Removal of a Node
+
+ If your cluster has completely seized up, due to more than half of
+ the nodes in the cluster configuration being offline, things are somewhat trickier.
+ In this situation, you need to do the following:
+
+ 1. Make 110% sure that the node (or nodes) you're removing aren't coming back any
+    time soon. If the nodes you're removing spontaneously reappear, you can end
+    up with split-brain.
+
+ 2. Locate the current cluster leader node. The {Evinrude#leader?} method is your
+    friend here. If no node is the leader, then find a node which is a candidate
+    instead (with {Evinrude#candidate?}) and use that.
+
+ 3. Request the removal of a node with {Evinrude#remove_node}, but this time pass the
+    keyword argument `unsafe: true`. This bypasses the consensus checks.
+
+ You need to do this on the leader because the new config that
+ doesn't have the removed node needs to propagate from the leader to the rest of
+ the cluster. When the cluster doesn't have a leader, removing the node from a
+ candidate allows that candidate to gather enough votes to consider itself a
+ leader, at which point it can propagate its configuration to the other nodes.
+
+ In almost all cases, you'll need to remove several nodes in order to get the
+ cluster working again. Just keep removing nodes until everything comes back.
+
+ Bear in mind that if your cluster split-brains as a result of passing `unsafe:
+ true`, you get to keep both pieces -- that's why the keyword's called `unsafe`!
+
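+ Putting that together, an emergency removal from the leader (or a candidate)
+ might look something like this sketch (the node details are illustrative):
+
+ ```
+ if c.leader? || c.candidate?
+   c.remove_node(Evinrude::NodeInfo.new(address: "192.0.2.42", port: 31337, name: "dead-node-1"), unsafe: true)
+ end
+ ```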
+
+
+ # Contributing
+
+ Please see [CONTRIBUTING.md](CONTRIBUTING.md).
+
+
+ # Licence
+
+ Unless otherwise stated, everything in this repo is covered by the following
+ copyright notice:
+
+     Copyright (C) 2020 Matt Palmer <matt@hezmatt.org>
+
+     This program is free software: you can redistribute it and/or modify it
+     under the terms of the GNU General Public License version 3, as
+     published by the Free Software Foundation.
+
+     This program is distributed in the hope that it will be useful,
+     but WITHOUT ANY WARRANTY; without even the implied warranty of
+     MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+     GNU General Public License for more details.
+
+     You should have received a copy of the GNU General Public License
+     along with this program. If not, see <http://www.gnu.org/licenses/>.
data/evinrude.gemspec
@@ -0,0 +1,42 @@
+ begin
+   require 'git-version-bump'
+ rescue LoadError
+   nil
+ end
+
+ Gem::Specification.new do |s|
+   s.name = "evinrude"
+
+   s.version = GVB.version rescue "0.0.0.1.NOGVB"
+   s.date = GVB.date rescue Time.now.strftime("%Y-%m-%d")
+
+   s.platform = Gem::Platform::RUBY
+
+   s.summary = "The Raft engine"
+
+   s.authors = ["Matt Palmer"]
+   s.email = ["theshed+evinrude@hezmatt.org"]
+   s.homepage = "https://github.com/mpalmer/evinrude"
+
+   s.files = `git ls-files -z`.split("\0").reject { |f| f =~ /^(G|spec|Rakefile)/ }
+
+   s.required_ruby_version = ">= 2.5.0"
+
+   s.add_runtime_dependency "async"
+   s.add_runtime_dependency "async-dns"
+   s.add_runtime_dependency "async-io"
+   s.add_runtime_dependency "frankenstein", "~> 2.1"
+   s.add_runtime_dependency "prometheus-client", "~> 2.0"
+   s.add_runtime_dependency "rbnacl"
+
+   s.add_development_dependency 'bundler'
+   s.add_development_dependency 'github-release'
+   s.add_development_dependency 'guard-rspec'
+   s.add_development_dependency 'rake', '~> 10.4', '>= 10.4.2'
+   # Needed for guard
+   s.add_development_dependency 'rb-inotify', '~> 0.9'
+   s.add_development_dependency 'redcarpet'
+   s.add_development_dependency 'rspec'
+   s.add_development_dependency 'simplecov'
+   s.add_development_dependency 'yard'
+ end