related 0.5.0 → 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/CHANGELOG CHANGED
@@ -1,4 +1,9 @@
 
+ *0.6*
+
+ * Made it possible to use Related with another database for storing node data
+ * Made sure all redis-server processes get killed when running tests in distributed mode
+
  *0.5*
 
  * Real-time stream processing
data/README.md CHANGED
@@ -1,18 +1,20 @@
  Related
  =======
 
- Related is a Redis-backed high performance graph database.
+ Related is a Redis-backed high performance distributed graph database.
 
  Raison d'être
  -------------
 
  Related is meant to be a simple graph database that is fun, free and easy to
- use. The intention is not to compete with industrial grade graph databases
- like Neo4j, but rather to be a replacement for a relational database when your
- data is better described as a graph. For example when building social
- software. Related is meant to be web scale, but ultimately relies on the
- ability of Redis to scale (using Redis cluster for example). Read more about
- the philosophy behind Related in the
+ use. The intention is not to compete with "real" graph databases like Neo4j,
+ but rather to be a replacement for a relational database when your data is
+ better described as a graph. For example when building social software.
+ Related is very similar in scope and functionality to Twitter's FlockDB, but
+ is among other things designed to be easier to set up and use. Related also
+ has better documentation and is easier to hack on. The intention is to be web
+ scale, but we ultimately rely on the ability of Redis to scale (using Redis
+ Cluster for example). Read more about the philosophy behind Related in the
  [Wiki](http://github.com/sutajio/related/wiki).
 
  Setup
@@ -43,6 +45,10 @@ node.attributes
  node.has_attribute?(:popularity)
  node.read_attribute(:popularity)
  node.write_attribute(:popularity, 50)
+ node.increment!(:popularity, 10)
+ node.decrement!(:popularity, 10)
+ Related::Node.increment!(node, :popularity, 10)
+ Related::Node.decrement!(node, :popularity, 10)
  node.save
  node.persisted?
  node = Related::Node.find(node.id)
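The `increment!` / `decrement!` calls added above are backed by Redis's atomic `HINCRBY` on the node's attribute hash. As a rough sketch of that semantics, using a hypothetical in-memory stand-in instead of a live Redis server:

```ruby
# FakeNodeStore is a hypothetical stand-in that mimics only the HINCRBY
# behaviour increment!/decrement! rely on: a missing field starts at 0,
# and each call adds (or subtracts) an integer amount.
class FakeNodeStore
  def initialize
    @hashes = Hash.new { |h, k| h[k] = {} }
  end

  def hincrby(key, field, by)
    @hashes[key][field] = @hashes[key].fetch(field, 0) + by.to_i
  end

  def hget(key, field)
    @hashes[key][field]
  end
end

store = FakeNodeStore.new
store.hincrby('node:1', 'popularity', 10)  # node.increment!(:popularity, 10)
store.hincrby('node:1', 'popularity', -4)  # node.decrement!(:popularity, 4)
store.hget('node:1', 'popularity')         # => 6
```

Because the real command executes inside Redis, concurrent increments from many clients cannot lose updates.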
@@ -206,12 +212,21 @@ class Comment < Related::Relationship
  end
  ```
 
- The weight is always an integer and is sorted in descending order.
+ The weight is always a double precision floating point number and is sorted in
+ descending order.
 
- The weight for the links get updated every time the relationship is saved. So
- if you update the points for a Comment in the example above, the weight is
- automatically updated. You can access the weight and rank (0 based position)
- of a relationship like this:
+ To change the weight of an existing relationship you can use the
+ `increment_weight!` and `decrement_weight!` methods. They are atomic, which
+ means that you can have any number of clients updating the weight
+ simultaneously without conflict.
+
+ ```ruby
+ comment.increment_weight!(:out, 4.2)
+ comment.decrement_weight!(:in, 4.2)
+ ```
+
+ You can access the current weight and rank (0 based position) of a
+ relationship like this:
 
  ```ruby
  comment.weight(:out)
@@ -332,6 +347,72 @@ To start a stream processor:
  You can start as many stream processors as you may need to scale
  up.
 
+ Distributed cluster setup
+ -------------------------
+
+ It is easy to use Related in a distributed cluster setup. As of writing this
+ (November 2011) Redis Cluster is not yet ready for production use, but is
+ expected for Redis 3.0 sometime in 2012. Redis Cluster will then be the
+ preferred solution as it will allow you to set up a dynamic cluster that can
+ re-configure on the fly. If you don't need to add or remove machines from the
+ cluster you can still use Related in a distributed setup right now using the
+ consistent hashing client Redis::Distributed which is included in the "redis"
+ gem.
+
+ ```ruby
+ Related.redis = Redis::Distributed.new %w[
+ redis://redis-1.example.com
+ redis://redis-2.example.com
+ redis://redis-3.example.com
+ redis://redis-4.example.com],
+ :tag => /^related:([^:]+)/
+ ```
+
+ The regular expression supplied in the `:tag` option tells Redis::Distributed
+ how to distribute keys between the different machines. The regexp in the
+ example is the recommended way of setting it up as it will partition the key
+ space based on the Related ID part of the key, in effect localizing all data
+ directly related to a specific node on a single machine. This is generally
+ good both for reliability (if a machine goes down, it only takes down a part
+ of the graph) and speed (for example, set operations on relationships
+ originating from the same node can be done on the server side, which is a
+ lot faster).
+
+ You could also specify a regexp like `/:(n|r):/` that will locate all
+ relationships on the same machine, making set operations on relationships
+ a lot faster overall, but with the obvious drawback that the total size of
+ your graph will be limited by that single machine.
+
+ Using Related with another database
+ -----------------------------------
+
+ Related can easily be used together with databases other than Redis to store
+ node data. Relationships are always stored in Redis, but node data can often
+ have characteristics that make Redis unsuitable (like large size).
+
+ You can for example use Related together with the Ripple gem to store nodes
+ in Riak:
+
+ ```ruby
+ class CustomNode
+   include Ripple::Document
+   include Related::Node::QueryMethods
+
+   def query
+     Related::Node::Query.new(self)
+   end
+ end
+ ```
+
+ You can then use the `CustomNode` class as an ordinary Related graph Node and
+ query the graph like usual:
+
+ ```ruby
+ node1 = CustomNode.create
+ node2 = CustomNode.create
+ Related::Relationship.create(:friend, node1, node2)
+ node1.shortest_path_to(node2).outgoing(:friend)
+ ```
+
  Development
  -----------
 
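To make the `:tag` partitioning concrete: Redis::Distributed hashes only the part of the key captured by the regexp when choosing a server, so every key sharing a Related ID co-locates. A small illustration (the key layouts below are made up for the example):

```ruby
# With :tag => /^related:([^:]+)/, the capture group is what gets hashed
# when picking a server; keys with the same captured ID land together.
TAG = /^related:([^:]+)/

def hash_tag(key)
  match = TAG.match(key)
  match ? match[1] : key # no match: the whole key is hashed
end

hash_tag('related:abc123')        # => "abc123"
hash_tag('related:abc123:n:out')  # => "abc123" (same server as above)
hash_tag('unrelated-key')         # => "unrelated-key"
```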
@@ -7,6 +7,7 @@ module Related
  include ActiveModel::Serializers::JSON
  include ActiveModel::Serializers::Xml
  include ActiveModel::Translation
+ include ActiveModel::AttributeMethods
 
  self.include_root_in_json = false
 
@@ -116,6 +117,24 @@ module Related
  @properties ? @properties.keys : []
  end
 
+ def self.increment!(id, attribute, by = 1)
+   raise Related::NotFound if id.blank?
+   Related.redis.hincrby(id.to_s, attribute.to_s, by.to_i)
+ end
+
+ def self.decrement!(id, attribute, by = 1)
+   raise Related::NotFound if id.blank?
+   Related.redis.hincrby(id.to_s, attribute.to_s, -by.to_i)
+ end
+
+ def increment!(attribute, by = 1)
+   self.class.increment!(@id, attribute, by)
+ end
+
+ def decrement!(attribute, by = 1)
+   self.class.decrement!(@id, attribute, by)
+ end
+
  private
 
  def load_attributes(id, attributes)
@@ -184,14 +203,12 @@ module Related
  end
 
  def self.find_many(ids, options = {})
- res = Related.redis.pipelined do
- ids.each {|id|
- if options[:fields]
- Related.redis.hmget(id.to_s, *options[:fields])
- else
- Related.redis.hgetall(id.to_s)
- end
- }
+ res = pipelined_fetch(ids) do |id|
+   if options[:fields]
+     Related.redis.hmget(id.to_s, *options[:fields])
+   else
+     Related.redis.hgetall(id.to_s)
+   end
  end
  objects = []
  ids.each_with_index do |id,i|
@@ -203,7 +220,7 @@ module Related
  klass = options[:model] ? options[:model].call(attributes) : self
  objects << klass.new.send(:load_attributes, id, attributes)
  else
- attributes = Hash[*res[i]]
+ attributes = res[i].is_a?(Array) ? Hash[*res[i]] : res[i]
  klass = options[:model] ? options[:model].call(attributes) : self
  objects << klass.new.send(:load_attributes, id, attributes)
  end
@@ -211,6 +228,18 @@ module Related
  objects
  end
 
+ def self.pipelined_fetch(ids, &block)
+   Related.redis.pipelined do
+     ids.each do |id|
+       block.call(id)
+     end
+   end
+ rescue Redis::Distributed::CannotDistribute
+   ids.map do |id|
+     block.call(id)
+   end
+ end
+
  def self.property_serializer(property)
  @properties ||= {}
  @properties[property.to_sym]
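The shape of `pipelined_fetch` above is: try the batched pipeline first, and if the distributed client refuses because the keys span servers, redo the fetches one at a time. A self-contained sketch of that control flow, with hypothetical stand-ins for the error class and client:

```ruby
# Illustrative stand-in for Redis::Distributed::CannotDistribute.
class CannotDistribute < StandardError; end

# Hypothetical client: pipelining works only when every key maps to the
# same server; otherwise it raises and the caller must degrade gracefully.
class FakeDistributedClient
  def initialize(keys_span_servers)
    @spanning = keys_span_servers
  end

  def pipelined
    raise CannotDistribute, 'keys on different servers' if @spanning
    yield
  end
end

def pipelined_fetch(client, ids)
  client.pipelined { ids.map { |id| "data-#{id}" } } # one round trip
rescue CannotDistribute
  ids.map { |id| "data-#{id}" }                      # N round trips instead
end

pipelined_fetch(FakeDistributedClient.new(false), [1, 2]) # => ["data-1", "data-2"]
pipelined_fetch(FakeDistributedClient.new(true), [1, 2])  # => ["data-1", "data-2"]
```

Both paths return the same result; the fallback only costs extra round trips, so `find_many` keeps working under Redis::Distributed.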
data/lib/related/node.rb CHANGED
@@ -113,7 +113,7 @@ module Related
  def to_a
  perform_query unless @result
  if @result_type == :nodes
- Related::Node.find(@result, @options)
+ node_class.find(@result, @options)
  else
  Related::Relationship.find(@result, @options)
  end
@@ -134,7 +134,7 @@ module Related
  if @destination
  self.to_a.include?(entity)
  else
- if entity.is_a?(Related::Node)
+ if entity.is_a?(node_class)
  @result_type = :nodes
  Related.redis.sismember(key, entity.to_s)
  elsif entity.is_a?(Related::Relationship)
@@ -147,7 +147,7 @@ module Related
  def find(node)
  if @result_type == :nodes
  if Related.redis.sismember(key, node.to_s)
- Related::Node.find(node.to_s, @options)
+ node_class.find(node.to_s, @options)
  end
  else
  if id = Related.redis.get(dir_key(node))
@@ -162,18 +162,51 @@ module Related
  self
  end
 
+ def union_with_distributed_fallback(query)
+   union_without_distributed_fallback(query)
+ rescue Redis::Distributed::CannotDistribute
+   s1 = Related.redis.smembers(key)
+   s2 = Related.redis.smembers(query.key)
+   @result = s1 | s2
+   self
+ end
+
+ alias_method_chain :union, :distributed_fallback
+
  def diff(query)
  @result_type = :nodes
  @result = Related.redis.sdiff(key, query.key)
  self
  end
 
+ def diff_with_distributed_fallback(query)
+   diff_without_distributed_fallback(query)
+ rescue Redis::Distributed::CannotDistribute
+   s1 = Related.redis.smembers(key)
+   s2 = Related.redis.smembers(query.key)
+   @result = s1 - s2
+   self
+ end
+
+ alias_method_chain :diff, :distributed_fallback
+
  def intersect(query)
  @result_type = :nodes
  @result = Related.redis.sinter(key, query.key)
  self
  end
 
+ def intersect_with_distributed_fallback(query)
+   intersect_without_distributed_fallback(query)
+ rescue Redis::Distributed::CannotDistribute
+   s1 = Related.redis.smembers(key)
+   s2 = Related.redis.smembers(query.key)
+   @result = s1 & s2
+   self
+ end
+
+ alias_method_chain :intersect, :distributed_fallback
+
  def as_json(options = {})
  self.to_a
  end
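The `*_with_distributed_fallback` methods above recompute each set operation on the client with plain Ruby array operators when the two keys live on different servers. The correspondence is one-to-one:

```ruby
# Server-side command -> client-side fallback used above:
#   SUNION -> s1 | s2,  SDIFF -> s1 - s2,  SINTER -> s1 & s2
s1 = %w[a b c] # e.g. SMEMBERS of the first query's key
s2 = %w[b c d] # e.g. SMEMBERS of the second query's key

s1 | s2 # => ["a", "b", "c", "d"]  (union)
s1 - s2 # => ["a"]                 (diff)
s1 & s2 # => ["b", "c"]            (intersect)
```

The trade-off is two `SMEMBERS` round trips plus transferring both sets to the client, so the server-side path stays preferable whenever the keys co-locate.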
@@ -184,6 +217,10 @@ module Related
 
  protected
 
+ def node_class
+   @node.class
+ end
+
  def page_start
  if @page.nil? || @page.to_i.to_s == @page.to_s
  @page && @page.to_i != 1 ? (@page.to_i * @limit.to_i) - @limit.to_i : 0
@@ -23,7 +23,15 @@ module Related
  end
 
  def weight(direction)
- Related.redis.zscore(r_key(direction), self.id).to_i
+ Related.redis.zscore(r_key(direction), self.id).to_f
+ end
+
+ def increment_weight!(direction, by = 1)
+   Related.redis.zincrby(r_key(direction), by.to_f, self.id)
+ end
+
+ def decrement_weight!(direction, by = 1)
+   Related.redis.zincrby(r_key(direction), -by.to_f, self.id)
  end
 
  def self.weight(&block)
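Relationship weights live in a Redis sorted set, so `increment_weight!` / `decrement_weight!` reduce to `ZINCRBY` and `weight` to `ZSCORE`. A rough in-memory sketch of that contract (`FakeSortedSet` is illustrative, not part of the gem):

```ruby
# Mimics just the score arithmetic of ZINCRBY/ZSCORE: scores are floats,
# missing members start at 0.0. In real Redis the increment is applied
# atomically on the server, which is what makes concurrent updates safe.
class FakeSortedSet
  def initialize
    @scores = Hash.new(0.0)
  end

  def zincrby(member, by)
    @scores[member] += by.to_f
  end

  def zscore(member)
    @scores[member]
  end
end

zset = FakeSortedSet.new
zset.zincrby('comment:1', 4.2)   # increment_weight!(:out, 4.2)
zset.zincrby('comment:1', -2.2)  # decrement_weight!(:out, 2.2)
zset.zscore('comment:1')         # approximately 2.0 (floating point)
```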
@@ -64,42 +72,38 @@ module Related
  if @weight
  relationship.instance_exec(direction, &@weight).to_i
  else
- Time.parse(relationship.created_at).to_i
+ Time.now.to_f
  end
  end
 
  def create
- Related.redis.multi do
+ #Related.redis.multi do
  super
  Related.redis.zadd(r_key(:out), self.class.weight_for(self, :out), self.id)
  Related.redis.zadd(r_key(:in), self.class.weight_for(self, :in), self.id)
  Related.redis.sadd(n_key(:out), self.end_node_id)
  Related.redis.sadd(n_key(:in), self.start_node_id)
  Related.redis.set(dir_key, self.id)
- end
+ #end
  Related.execute_data_flow(self.label, self)
  self
  end
 
  def update
- Related.redis.multi do
- super
- Related.redis.zadd(r_key(:out), self.class.weight_for(self, :out), self.id)
- Related.redis.zadd(r_key(:in), self.class.weight_for(self, :in), self.id)
- end
+ super
  Related.execute_data_flow(self.label, self)
  self
  end
 
  def delete
- Related.redis.multi do
+ #Related.redis.multi do
  Related.redis.zrem(r_key(:out), self.id)
  Related.redis.zrem(r_key(:in), self.id)
  Related.redis.srem(n_key(:out), self.end_node_id)
  Related.redis.srem(n_key(:in), self.start_node_id)
  Related.redis.del(dir_key)
  super
- end
+ #end
  Related.execute_data_flow(self.label, self)
  self
  end
@@ -1,3 +1,3 @@
  module Related
- Version = VERSION = '0.5.0'
+ Version = VERSION = '0.6.0'
  end
data/lib/related.rb CHANGED
@@ -21,7 +21,7 @@ module Related
  # 2. A 'hostname:port:db' string (to select the Redis db)
  # 3. A 'hostname:port/namespace' string (to set the Redis namespace)
  # 4. A redis URL string 'redis://host:port'
- # 5. An instance of `Redis`, `Redis::Client`, `Redis::DistRedis`,
+ # 5. An instance of `Redis`, `Redis::Client`, `Redis::Distributed`,
  # or `Redis::Namespace`.
  def redis=(server)
  if server.respond_to? :split
@@ -0,0 +1,46 @@
+ require File.expand_path('test/test_helper')
+ require 'pp'
+
+ class CustomNodeTest < ActiveModel::TestCase
+
+   class CustomNode
+     include Related::Node::QueryMethods
+     attr_accessor :id
+     def self.flush
+       @database = {}
+     end
+     def self.create
+       n = self.new
+       n.id = Related.generate_id
+       @database ||= {}
+       @database[n.id] = n
+       n
+     end
+     def self.find(*ids)
+       ids.pop if ids.size > 1 && ids.last.is_a?(Hash)
+       ids.flatten.map do |id|
+         @database[id]
+       end
+     end
+     def to_s
+       @id
+     end
+     protected
+     def query
+       Related::Node::Query.new(self)
+     end
+   end
+
+   def setup
+     Related.redis.flushall
+     CustomNode.flush
+   end
+
+   def test_property_conversion
+     node1 = CustomNode.create
+     node2 = CustomNode.create
+     Related::Relationship.create(:friend, node1, node2)
+     assert_equal [node2], node1.shortest_path_to(node2).outgoing(:friend).nodes.to_a
+   end
+
+ end
data/test/dump-1.rdb ADDED
Binary file
data/test/dump-2.rdb ADDED
Binary file
data/test/dump-3.rdb ADDED
Binary file
data/test/dump-4.rdb ADDED
Binary file
data/test/model_test.rb CHANGED
@@ -58,10 +58,6 @@ class ModelTest < ActiveModel::TestCase
  like = Like.create(:like, node1, node2, :in_score => 42, :out_score => 10)
  assert_equal 42, like.weight(:in)
  assert_equal 10, like.weight(:out)
- like.in_score = 50
- like.save
- assert_equal 50, like.weight(:in)
- assert_equal 10, like.weight(:out)
  end
 
  def test_weight_sorting
@@ -6,10 +6,10 @@ daemonize yes
 
  # When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.
  # You can specify a custom pid file location here.
- pidfile ./test/redis-test.pid
+ pidfile ./test/redis-test-1.pid
 
  # Accept connections on the specified port, default is 6379
- port 9736
+ port 6379
 
  # If you want you can bind a single interface, if the bind option is not
  # specified all the interfaces will listen for connections.
@@ -35,7 +35,7 @@ save 300 10
  save 60 10000
 
  # The filename where to dump the DB
- dbfilename dump.rdb
+ dbfilename dump-1.rdb
 
  # For default save/load DB in/from the working directory
  # Note that you must specify a directory not a file name.
@@ -0,0 +1,115 @@
+ # Redis configuration file example
+
+ # By default Redis does not run as a daemon. Use 'yes' if you need it.
+ # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
+ daemonize yes
+
+ # When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.
+ # You can specify a custom pid file location here.
+ pidfile ./test/redis-test-2.pid
+
+ # Accept connections on the specified port, default is 6379
+ port 6380
+
+ # If you want you can bind a single interface, if the bind option is not
+ # specified all the interfaces will listen for connections.
+ #
+ # bind 127.0.0.1
+
+ # Close the connection after a client is idle for N seconds (0 to disable)
+ timeout 300
+
+ # Save the DB on disk:
+ #
+ # save <seconds> <changes>
+ #
+ # Will save the DB if both the given number of seconds and the given
+ # number of write operations against the DB occurred.
+ #
+ # In the example below the behaviour will be to save:
+ # after 900 sec (15 min) if at least 1 key changed
+ # after 300 sec (5 min) if at least 10 keys changed
+ # after 60 sec if at least 10000 keys changed
+ save 900 1
+ save 300 10
+ save 60 10000
+
+ # The filename where to dump the DB
+ dbfilename dump-2.rdb
+
+ # For default save/load DB in/from the working directory
+ # Note that you must specify a directory not a file name.
+ dir ./test/
+
+ # Set server verbosity to 'debug'
+ # it can be one of:
+ # debug (a lot of information, useful for development/testing)
+ # notice (moderately verbose, what you want in production probably)
+ # warning (only very important / critical messages are logged)
+ loglevel debug
+
+ # Specify the log file name. Also 'stdout' can be used to force
+ # the demon to log on the standard output. Note that if you use standard
+ # output for logging but daemonize, logs will be sent to /dev/null
+ logfile stdout
+
+ # Set the number of databases. The default database is DB 0, you can select
+ # a different one on a per-connection basis using SELECT <dbid> where
+ # dbid is a number between 0 and 'databases'-1
+ databases 16
+
+ ################################# REPLICATION #################################
+
+ # Master-Slave replication. Use slaveof to make a Redis instance a copy of
+ # another Redis server. Note that the configuration is local to the slave
+ # so for example it is possible to configure the slave to save the DB with a
+ # different interval, or to listen to another port, and so on.
+
+ # slaveof <masterip> <masterport>
+
+ ################################## SECURITY ###################################
+
+ # Require clients to issue AUTH <PASSWORD> before processing any other
+ # commands. This might be useful in environments in which you do not trust
+ # others with access to the host running redis-server.
+ #
+ # This should stay commented out for backward compatibility and because most
+ # people do not need auth (e.g. they run their own servers).
+
+ # requirepass foobared
+
+ ################################### LIMITS ####################################
+
+ # Set the max number of connected clients at the same time. By default there
+ # is no limit, and it's up to the number of file descriptors the Redis process
+ # is able to open. The special value '0' means no limts.
+ # Once the limit is reached Redis will close all the new connections sending
+ # an error 'max number of clients reached'.
+
+ # maxclients 128
+
+ # Don't use more memory than the specified amount of bytes.
+ # When the memory limit is reached Redis will try to remove keys with an
+ # EXPIRE set. It will try to start freeing keys that are going to expire
+ # in little time and preserve keys with a longer time to live.
+ # Redis will also try to remove objects from free lists if possible.
+ #
+ # If all this fails, Redis will start to reply with errors to commands
+ # that will use more memory, like SET, LPUSH, and so on, and will continue
+ # to reply to most read-only commands like GET.
+ #
+ # WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
+ # 'state' server or cache, not as a real DB. When Redis is used as a real
+ # database the memory usage will grow over the weeks, it will be obvious if
+ # it is going to use too much memory in the long run, and you'll have the time
+ # to upgrade. With maxmemory after the limit is reached you'll start to get
+ # errors for write operations, and this may even lead to DB inconsistency.
+
+ # maxmemory <bytes>
+
+ ############################### ADVANCED CONFIG ###############################
+
+ # Glue small output buffers together in order to send small replies in a
+ # single TCP packet. Uses a bit more CPU but most of the times it is a win
+ # in terms of number of queries per second. Use 'yes' if unsure.
+ glueoutputbuf yes
@@ -0,0 +1,115 @@
+ # Redis configuration file example
+
+ # By default Redis does not run as a daemon. Use 'yes' if you need it.
+ # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
+ daemonize yes
+
+ # When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.
+ # You can specify a custom pid file location here.
+ pidfile ./test/redis-test-3.pid
+
+ # Accept connections on the specified port, default is 6379
+ port 6381
+
+ # If you want you can bind a single interface, if the bind option is not
+ # specified all the interfaces will listen for connections.
+ #
+ # bind 127.0.0.1
+
+ # Close the connection after a client is idle for N seconds (0 to disable)
+ timeout 300
+
+ # Save the DB on disk:
+ #
+ # save <seconds> <changes>
+ #
+ # Will save the DB if both the given number of seconds and the given
+ # number of write operations against the DB occurred.
+ #
+ # In the example below the behaviour will be to save:
+ # after 900 sec (15 min) if at least 1 key changed
+ # after 300 sec (5 min) if at least 10 keys changed
+ # after 60 sec if at least 10000 keys changed
+ save 900 1
+ save 300 10
+ save 60 10000
+
+ # The filename where to dump the DB
+ dbfilename dump-3.rdb
+
+ # For default save/load DB in/from the working directory
+ # Note that you must specify a directory not a file name.
+ dir ./test/
+
+ # Set server verbosity to 'debug'
+ # it can be one of:
+ # debug (a lot of information, useful for development/testing)
+ # notice (moderately verbose, what you want in production probably)
+ # warning (only very important / critical messages are logged)
+ loglevel debug
+
+ # Specify the log file name. Also 'stdout' can be used to force
+ # the demon to log on the standard output. Note that if you use standard
+ # output for logging but daemonize, logs will be sent to /dev/null
+ logfile stdout
+
+ # Set the number of databases. The default database is DB 0, you can select
+ # a different one on a per-connection basis using SELECT <dbid> where
+ # dbid is a number between 0 and 'databases'-1
+ databases 16
+
+ ################################# REPLICATION #################################
+
+ # Master-Slave replication. Use slaveof to make a Redis instance a copy of
+ # another Redis server. Note that the configuration is local to the slave
+ # so for example it is possible to configure the slave to save the DB with a
+ # different interval, or to listen to another port, and so on.
+
+ # slaveof <masterip> <masterport>
+
+ ################################## SECURITY ###################################
+
+ # Require clients to issue AUTH <PASSWORD> before processing any other
+ # commands. This might be useful in environments in which you do not trust
+ # others with access to the host running redis-server.
+ #
+ # This should stay commented out for backward compatibility and because most
+ # people do not need auth (e.g. they run their own servers).
+
+ # requirepass foobared
+
+ ################################### LIMITS ####################################
+
+ # Set the max number of connected clients at the same time. By default there
+ # is no limit, and it's up to the number of file descriptors the Redis process
+ # is able to open. The special value '0' means no limts.
+ # Once the limit is reached Redis will close all the new connections sending
+ # an error 'max number of clients reached'.
+
+ # maxclients 128
+
+ # Don't use more memory than the specified amount of bytes.
+ # When the memory limit is reached Redis will try to remove keys with an
+ # EXPIRE set. It will try to start freeing keys that are going to expire
+ # in little time and preserve keys with a longer time to live.
+ # Redis will also try to remove objects from free lists if possible.
+ #
+ # If all this fails, Redis will start to reply with errors to commands
+ # that will use more memory, like SET, LPUSH, and so on, and will continue
+ # to reply to most read-only commands like GET.
+ #
+ # WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
+ # 'state' server or cache, not as a real DB. When Redis is used as a real
+ # database the memory usage will grow over the weeks, it will be obvious if
+ # it is going to use too much memory in the long run, and you'll have the time
+ # to upgrade. With maxmemory after the limit is reached you'll start to get
+ # errors for write operations, and this may even lead to DB inconsistency.
+
+ # maxmemory <bytes>
+
+ ############################### ADVANCED CONFIG ###############################
+
+ # Glue small output buffers together in order to send small replies in a
+ # single TCP packet. Uses a bit more CPU but most of the times it is a win
+ # in terms of number of queries per second. Use 'yes' if unsure.
+ glueoutputbuf yes
@@ -0,0 +1,115 @@
1
+ # Redis configuration file example
2
+
3
+ # By default Redis does not run as a daemon. Use 'yes' if you need it.
4
+ # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
5
+ daemonize yes
6
+
7
+ # When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.
8
+ # You can specify a custom pid file location here.
9
+ pidfile ./test/redis-test-4.pid
10
+
11
+ # Accept connections on the specified port, default is 6379
12
+ port 6382
13
+
14
+ # If you want you can bind a single interface, if the bind option is not
15
+ # specified all the interfaces will listen for connections.
16
+ #
17
+ # bind 127.0.0.1
18
+
19
+ # Close the connection after a client is idle for N seconds (0 to disable)
20
+ timeout 300
21
+
22
+ # Save the DB on disk:
23
+ #
24
+ # save <seconds> <changes>
25
+ #
26
+ # Will save the DB if both the given number of seconds and the given
27
+ # number of write operations against the DB occurred.
28
+ #
29
+ # In the example below the behaviour will be to save:
30
+ # after 900 sec (15 min) if at least 1 key changed
31
+ # after 300 sec (5 min) if at least 10 keys changed
32
+ # after 60 sec if at least 10000 keys changed
33
+ save 900 1
34
+ save 300 10
35
+ save 60 10000
36
+
37
+ # The filename where to dump the DB
38
+ dbfilename dump-4.rdb
39
+
40
+ # For default save/load DB in/from the working directory
41
+ # Note that you must specify a directory not a file name.
42
+ dir ./test/
43
+
44
+ # Set server verbosity to 'debug'
45
+ # it can be one of:
46
+ # debug (a lot of information, useful for development/testing)
47
+ # notice (moderately verbose, what you want in production probably)
48
+ # warning (only very important / critical messages are logged)
49
+ loglevel debug
50
+
51
+ # Specify the log file name. Also 'stdout' can be used to force
52
+ # the demon to log on the standard output. Note that if you use standard
53
+ # output for logging but daemonize, logs will be sent to /dev/null
54
+ logfile stdout
55
+
56
+ # Set the number of databases. The default database is DB 0, you can select
57
+ # a different one on a per-connection basis using SELECT <dbid> where
58
+ # dbid is a number between 0 and 'databases'-1
59
+ databases 16
60
+
61
+ ################################# REPLICATION #################################
62
+
63
+ # Master-Slave replication. Use slaveof to make a Redis instance a copy of
64
+ # another Redis server. Note that the configuration is local to the slave
65
+ # so for example it is possible to configure the slave to save the DB with a
66
+ # different interval, or to listen to another port, and so on.
67
+
68
+ # slaveof <masterip> <masterport>
69
+
70
+ ################################## SECURITY ###################################
71
+
72
+ # Require clients to issue AUTH <PASSWORD> before processing any other
73
+ # commands. This might be useful in environments in which you do not trust
74
+ # others with access to the host running redis-server.
75
+ #
76
+ # This should stay commented out for backward compatibility and because most
77
+ # people do not need auth (e.g. they run their own servers).
78
+
79
+ # requirepass foobared
80
+
81
+ ################################### LIMITS ####################################
82
+
83
+ # Set the max number of connected clients at the same time. By default there
84
+ # is no limit, and it's up to the number of file descriptors the Redis process
85
+ # is able to open. The special value '0' means no limts.
86
+ # Once the limit is reached Redis will close all the new connections sending
87
+ # an error 'max number of clients reached'.
88
+
89
+ # maxclients 128
90
+
91
+ # Don't use more memory than the specified amount of bytes.
92
+ # When the memory limit is reached Redis will try to remove keys with an
93
+ # EXPIRE set. It will try to start freeing keys that are going to expire
94
+ # in little time and preserve keys with a longer time to live.
95
+ # Redis will also try to remove objects from free lists if possible.
96
+ #
97
+ # If all this fails, Redis will start to reply with errors to commands
98
+ # that will use more memory, like SET, LPUSH, and so on, and will continue
99
+ # to reply to most read-only commands like GET.
100
+ #
101
+ # WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
102
+ # 'state' server or cache, not as a real DB. When Redis is used as a real
103
+ # database the memory usage will grow over the weeks, it will be obvious if
104
+ # it is going to use too much memory in the long run, and you'll have the time
105
+ # to upgrade. With maxmemory after the limit is reached you'll start to get
106
+ # errors for write operations, and this may even lead to DB inconsistency.
107
+
108
+ # maxmemory <bytes>
109
+
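The maxmemory comments above describe an eviction preference: when over the limit, Redis frees keys whose EXPIRE is closest first. A toy in-memory sketch of that preference (an illustration only, not Redis's actual eviction algorithm):

```ruby
# Toy model of the eviction order described in the config comments:
# among expiring keys, the one closest to its EXPIRE is freed first.
def next_eviction(ttls)
  # ttls maps key => seconds until expiry; pick the soonest-to-expire key
  ttls.min_by { |_key, ttl| ttl }.first
end

puts next_eviction('session' => 30, 'cache' => 5, 'profile' => 3600)  # => cache
```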
110
+ ############################### ADVANCED CONFIG ###############################
111
+
112
+ # Glue small output buffers together in order to send small replies in a
113
+ # single TCP packet. Uses a bit more CPU but most of the time it is a win
114
+ # in terms of number of queries per second. Use 'yes' if unsure.
115
+ glueoutputbuf yes
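The directives above follow redis.conf's plain `key value` format, with commented-out lines documenting defaults. A minimal sketch of reading the active directives back in Ruby (a hypothetical helper, not part of Related or the redis gem):

```ruby
# Hypothetical helper: collect the active (non-commented) directives
# from a redis-test.conf-style file. Commented defaults like
# "# maxclients 128" are skipped.
def parse_redis_conf(text)
  text.each_line.with_object({}) do |line, conf|
    line = line.strip
    next if line.empty? || line.start_with?('#')
    key, value = line.split(/\s+/, 2)
    conf[key] = value
  end
end

sample = <<~CONF
  # maxclients 128
  glueoutputbuf yes
CONF
p parse_redis_conf(sample)  # => {"glueoutputbuf"=>"yes"}
```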
data/test/related_test.rb CHANGED
@@ -9,8 +9,10 @@ class RelatedTest < Test::Unit::TestCase
9
9
  def test_can_set_a_namespace_through_a_url_like_string
10
10
  assert Related.redis
11
11
  assert_equal :related, Related.redis.namespace
12
+ old_redis = Related.redis
12
13
  Related.redis = 'localhost:9736/namespace'
13
14
  assert_equal 'namespace', Related.redis.namespace
15
+ Related.redis = old_redis
14
16
  end
15
17
 
16
18
  def test_can_create_node
@@ -312,4 +314,29 @@ class RelatedTest < Test::Unit::TestCase
312
314
  assert_equal nil, node2.incoming(:friend).relationships.find(node2)
313
315
  end
314
316
 
317
+ def test_can_increment_and_decrement
318
+ node = Related::Node.create(:test => 1)
319
+ assert_equal 1, Related::Node.find(node.id).test.to_i
320
+ node.increment!(:test, 5)
321
+ assert_equal 6, Related::Node.find(node.id).test.to_i
322
+ node.decrement!(:test, 4)
323
+ assert_equal 2, Related::Node.find(node.id).test.to_i
324
+ end
325
+
326
+ def test_can_increment_and_decrement_relationship_weights
327
+ node1 = Related::Node.create
328
+ node2 = Related::Node.create
329
+ rel = Related::Relationship.create(:friend, node1, node2)
330
+ original_in_weight = Related::Relationship.find(rel.id).weight(:in)
331
+ rel.increment_weight!(:in, 4.2)
332
+ assert_in_delta original_in_weight + 4.2, Related::Relationship.find(rel.id).weight(:in), 0.000001
333
+ rel.decrement_weight!(:in, 2.2)
334
+ assert_in_delta original_in_weight + 2.0, Related::Relationship.find(rel.id).weight(:in), 0.000001
335
+ original_out_weight = Related::Relationship.find(rel.id).weight(:out)
336
+ rel.increment_weight!(:out, 5.2)
337
+ assert_in_delta original_out_weight + 5.2, Related::Relationship.find(rel.id).weight(:out), 0.000001
338
+ rel.decrement_weight!(:out, 4.2)
339
+ assert_equal original_out_weight + 1.0, Related::Relationship.find(rel.id).weight(:out)
340
+ end
341
+
315
342
  end
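The new increment/decrement tests exercise plain counter semantics. An assumption, since the implementation is not part of this diff: `increment!`/`decrement!` presumably map to a Redis HINCRBY-style update on the node's attribute hash. An in-memory stand-in for that primitive:

```ruby
# In-memory stand-in for a Redis HINCRBY-style update (assumption: the
# real increment!/decrement! delegate to something like this; the
# implementation is not shown in this diff).
def hincrby(store, key, field, amount)
  hash = (store[key] ||= {})
  hash[field] = (hash[field] || 0) + amount
end

store = { 'related:node1' => { 'test' => 1 } }
hincrby(store, 'related:node1', 'test', 5)   # 1 + 5 => 6
hincrby(store, 'related:node1', 'test', -4)  # 6 - 4 => 2
puts store['related:node1']['test']          # => 2
```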
data/test/test_helper.rb CHANGED
@@ -5,6 +5,7 @@ $LOAD_PATH.unshift dir + '/../lib'
5
5
  require 'rubygems'
6
6
  require 'test/unit'
7
7
  require 'related'
8
+ require 'redis/distributed'
8
9
 
9
10
  #
10
11
  # make sure we can run redis
@@ -31,13 +32,29 @@ at_exit do
31
32
  exit_code = Test::Unit::AutoRunner.run
32
33
  end
33
34
 
34
- pid = `ps -A -o pid,command | grep [r]edis-test`.split(" ")[0]
35
35
  puts "Killing test redis server..."
36
- `rm -f #{dir}/dump.rdb`
37
- Process.kill("KILL", pid.to_i)
36
+ loop do
37
+ pid = `ps -A -o pid,command | grep [r]edis-test`.split(" ")[0]
38
+ break if pid.nil?
39
+ Process.kill("KILL", pid.to_i)
40
+ end
41
+ `rm -f #{dir}/*.rdb`
38
42
  exit exit_code
39
43
  end
40
44
 
41
- puts "Starting redis for testing at localhost:9736..."
42
- `redis-server #{dir}/redis-test.conf`
43
- Related.redis = 'localhost:9736'
45
+ puts "Starting redis for testing..."
46
+
47
+ # `redis-server #{dir}/redis-test-1.conf`
48
+ # Related.redis = 'localhost:6379'
49
+
50
+ `redis-server #{dir}/redis-test-1.conf`
51
+ `redis-server #{dir}/redis-test-2.conf`
52
+ `redis-server #{dir}/redis-test-3.conf`
53
+ `redis-server #{dir}/redis-test-4.conf`
54
+
55
+ Related.redis = Redis::Distributed.new %w[
56
+ redis://localhost:6379
57
+ redis://localhost:6380
58
+ redis://localhost:6381
59
+ redis://localhost:6382],
60
+ :tag => /^related:([^:]+)/
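The `:tag` option tells `Redis::Distributed` to hash only the part of the key captured by the regex when choosing a server, so every key belonging to one node lands on the same shard. A simplified sketch of that routing (redis-rb actually uses a consistent hash ring; the modulo here only illustrates the tag's effect):

```ruby
require 'zlib'

# Simplified shard routing: hash the tag captured by the regex instead
# of the whole key, falling back to the whole key when nothing matches.
TAG = /^related:([^:]+)/

def shard_for(key, server_count)
  tag = key[TAG, 1] || key
  Zlib.crc32(tag) % server_count
end

# All keys for node "abc123" share one tag, hence one server:
keys = ['related:abc123', 'related:abc123:rel:out:friend']
puts keys.map { |k| shard_for(k, 4) }.uniq.size  # => 1
```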
metadata CHANGED
@@ -1,75 +1,57 @@
1
- --- !ruby/object:Gem::Specification
1
+ --- !ruby/object:Gem::Specification
2
2
  name: related
3
- version: !ruby/object:Gem::Version
4
- prerelease: false
5
- segments:
6
- - 0
7
- - 5
8
- - 0
9
- version: 0.5.0
3
+ version: !ruby/object:Gem::Version
4
+ version: 0.6.0
5
+ prerelease:
10
6
  platform: ruby
11
- authors:
7
+ authors:
12
8
  - Niklas Holmgren
13
9
  autorequire:
14
10
  bindir: bin
15
11
  cert_chain: []
16
-
17
- date: 2011-11-02 00:00:00 +01:00
18
- default_executable:
19
- dependencies:
20
- - !ruby/object:Gem::Dependency
12
+ date: 2012-02-10 00:00:00.000000000Z
13
+ dependencies:
14
+ - !ruby/object:Gem::Dependency
21
15
  name: redis
22
- prerelease: false
23
- requirement: &id001 !ruby/object:Gem::Requirement
16
+ requirement: &70275674186620 !ruby/object:Gem::Requirement
24
17
  none: false
25
- requirements:
26
- - - ">"
27
- - !ruby/object:Gem::Version
28
- segments:
29
- - 2
30
- - 0
31
- - 0
18
+ requirements:
19
+ - - ! '>'
20
+ - !ruby/object:Gem::Version
32
21
  version: 2.0.0
33
22
  type: :runtime
34
- version_requirements: *id001
35
- - !ruby/object:Gem::Dependency
36
- name: redis-namespace
37
23
  prerelease: false
38
- requirement: &id002 !ruby/object:Gem::Requirement
24
+ version_requirements: *70275674186620
25
+ - !ruby/object:Gem::Dependency
26
+ name: redis-namespace
27
+ requirement: &70275674186120 !ruby/object:Gem::Requirement
39
28
  none: false
40
- requirements:
41
- - - ">"
42
- - !ruby/object:Gem::Version
43
- segments:
44
- - 0
45
- - 8
46
- - 0
29
+ requirements:
30
+ - - ! '>'
31
+ - !ruby/object:Gem::Version
47
32
  version: 0.8.0
48
33
  type: :runtime
49
- version_requirements: *id002
50
- - !ruby/object:Gem::Dependency
51
- name: activemodel
52
34
  prerelease: false
53
- requirement: &id003 !ruby/object:Gem::Requirement
35
+ version_requirements: *70275674186120
36
+ - !ruby/object:Gem::Dependency
37
+ name: activemodel
38
+ requirement: &70275674185740 !ruby/object:Gem::Requirement
54
39
  none: false
55
- requirements:
56
- - - ">="
57
- - !ruby/object:Gem::Version
58
- segments:
59
- - 0
60
- version: "0"
40
+ requirements:
41
+ - - ! '>='
42
+ - !ruby/object:Gem::Version
43
+ version: '0'
61
44
  type: :runtime
62
- version_requirements: *id003
63
- description: Related is a Redis-backed high performance graph database.
45
+ prerelease: false
46
+ version_requirements: *70275674185740
47
+ description: Related is a Redis-backed high performance distributed graph database.
64
48
  email: niklas@sutajio.se
65
49
  executables: []
66
-
67
50
  extensions: []
68
-
69
- extra_rdoc_files:
51
+ extra_rdoc_files:
70
52
  - LICENSE
71
53
  - README.md
72
- files:
54
+ files:
73
55
  - README.md
74
56
  - Rakefile
75
57
  - LICENSE
@@ -85,44 +67,44 @@ files:
85
67
  - lib/related/version.rb
86
68
  - lib/related.rb
87
69
  - test/active_model_test.rb
70
+ - test/custom_node_test.rb
88
71
  - test/data_flow_test.rb
72
+ - test/dump-1.rdb
73
+ - test/dump-2.rdb
74
+ - test/dump-3.rdb
75
+ - test/dump-4.rdb
89
76
  - test/follower_test.rb
90
77
  - test/model_test.rb
91
78
  - test/performance_test.rb
92
- - test/redis-test.conf
79
+ - test/redis-test-1.conf
80
+ - test/redis-test-2.conf
81
+ - test/redis-test-3.conf
82
+ - test/redis-test-4.conf
93
83
  - test/related_test.rb
94
84
  - test/test_helper.rb
95
- has_rdoc: true
96
85
  homepage: http://github.com/sutajio/related/
97
86
  licenses: []
98
-
99
87
  post_install_message:
100
- rdoc_options:
88
+ rdoc_options:
101
89
  - --charset=UTF-8
102
- require_paths:
90
+ require_paths:
103
91
  - lib
104
- required_ruby_version: !ruby/object:Gem::Requirement
92
+ required_ruby_version: !ruby/object:Gem::Requirement
105
93
  none: false
106
- requirements:
107
- - - ">="
108
- - !ruby/object:Gem::Version
109
- segments:
110
- - 0
111
- version: "0"
112
- required_rubygems_version: !ruby/object:Gem::Requirement
94
+ requirements:
95
+ - - ! '>='
96
+ - !ruby/object:Gem::Version
97
+ version: '0'
98
+ required_rubygems_version: !ruby/object:Gem::Requirement
113
99
  none: false
114
- requirements:
115
- - - ">="
116
- - !ruby/object:Gem::Version
117
- segments:
118
- - 0
119
- version: "0"
100
+ requirements:
101
+ - - ! '>='
102
+ - !ruby/object:Gem::Version
103
+ version: '0'
120
104
  requirements: []
121
-
122
105
  rubyforge_project:
123
- rubygems_version: 1.3.7
106
+ rubygems_version: 1.8.15
124
107
  signing_key:
125
108
  specification_version: 3
126
- summary: Related is a Redis-backed high performance graph database.
109
+ summary: Related is a Redis-backed high performance distributed graph database.
127
110
  test_files: []
128
-