redis-store 1.1.3 → 1.1.4

Potentially problematic release: this version of redis-store might be problematic.

@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+ metadata.gz: 0d5a0077a321672a08d78929533f54fccdc25730
+ data.tar.gz: d1b3cb05a453eb8f607509db5cf24f6035275aad
+ SHA512:
+ metadata.gz: 5142fb4438b825bdc84dbd410adfa6661ad2356a43a925f210a77ac5a346bc95dbe9de20164bfe316e1600d414e6167cf86587480c9b477611de46dfaee39fb8
+ data.tar.gz: 0528095121316e86d85259cab836fc47da23229140e3bdce118abc4186ff2112c4c0ed51a997b0347d9da383f7247cfe82d441cdd6d0bd49c96468e27152e80d
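These are the digests recorded for the packaged gem. If you want to double-check a downloaded copy, a minimal sketch (assuming the gem was fetched with `gem fetch redis-store -v 1.1.4` and unpacked with `tar -xf redis-store-1.1.4.gem`, which yields `metadata.gz` and `data.tar.gz`):

```ruby
require 'digest'

# Compare local digests against the checksum entries above.
puts Digest::SHA1.file('metadata.gz').hexdigest     # expect 0d5a0077a321...
puts Digest::SHA512.file('data.tar.gz').hexdigest   # expect 0528095121...
```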
data/Gemfile CHANGED
@@ -1,4 +1,2 @@
- source 'http://rubygems.org'
+ source 'https://rubygems.org'
  gemspec
-
- gem 'SystemTimer', :platform => :mri_18
data/README.md CHANGED
@@ -2,32 +2,16 @@
 
  __Redis Store__ provides a full set of stores (*Cache*, *I18n*, *Session*, *HTTP Cache*) for all the modern Ruby frameworks like: __Ruby on Rails__, __Sinatra__, __Rack__, __Rack::Cache__ and __I18n__. It natively supports object marshalling, timeouts, single or multiple nodes and namespaces.
 
- This is the core for all the other gems, please check the *READMEs* to be informed about the usage.
+ See the main [redis-store readme](https://github.com/jodosha/redis-store) for general guidelines.
 
- ## Redis Installation
-
- ### Option 1: Homebrew
-
- MacOS X users should use [Homebrew](https://github.com/mxcl/homebrew) to install Redis:
-
- brew install redis
-
- ### Option 2: From Source
-
- Download and install Redis from [http://redis.io](http://redis.io/)
-
- wget http://redis.googlecode.com/files/redis-2.4.15.tar.gz
- tar -zxf redis-2.4.15.tar.gz
- mv redis-2.4.15 redis
- cd redis
- make
+ If you are using redis-store with Rails, consider using the [redis-rails gem](https://github.com/jodosha/redis-store/tree/master/redis-rails) instead.
 
  ## Running tests
 
+ gem install bundler
  git clone git://github.com/jodosha/redis-store.git
- cd redis-store/redis-store
- gem install bundler
- ruby ci/run.rb
+ cd redis-store/redis-store
+ ruby ci/run.rb
 
  If you are on **Snow Leopard** you have to run `env ARCHFLAGS="-arch x86_64" ruby ci/run.rb`
 
@@ -1,6 +1,6 @@
  require 'redis'
  require 'redis/store'
- require 'redis/factory'
+ require 'redis/store/factory'
  require 'redis/distributed_store'
  require 'redis/store/namespace'
  require 'redis/store/marshalling'
@@ -9,4 +9,4 @@ require 'redis/store/version'
  class Redis
    class Store < self
    end
- end
+ end
@@ -9,6 +9,7 @@ class Redis
      nodes = addresses.map do |address|
        ::Redis::Store.new _merge_options(address, options)
      end
+
      _extend_namespace options
      @ring = Redis::HashRing.new nodes
    end
@@ -40,7 +41,10 @@ class Redis
    end
 
    def _merge_options(address, options)
-     address.merge(:timeout => options[:timeout] || @@timeout)
+     address.merge({
+       :timeout => options[:timeout] || @@timeout,
+       :namespace => options[:namespace]
+     })
    end
  end
  end
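The `_merge_options` change forwards `:namespace` to every node, so a namespace given to a distributed store is no longer dropped. A minimal sketch of the effect (hypothetical local hosts and ports, not taken from the diff):

```ruby
require 'redis-store'

# Two hypothetical local nodes; with this release the :namespace option is
# merged into each node's options, so every key gets the same prefix
# regardless of which node the hash ring selects.
store = Redis::DistributedStore.new(
  [{ :host => 'localhost', :port => 6380 },
   { :host => 'localhost', :port => 6381 }],
  :namespace => 'cache'
)
store.set 'hello', 'world' # stored as "cache:hello" on the chosen node
```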
@@ -0,0 +1,95 @@
+ require 'uri'
+
+ class Redis
+   class Store < self
+     class Factory
+
+       DEFAULT_PORT = 6379
+
+       def self.create(*options)
+         new(options).create
+       end
+
+       def initialize(*options)
+         @addresses = []
+         @options = {}
+         extract_addresses_and_options(options)
+       end
+
+       def create
+         if @addresses.empty?
+           @addresses << {}
+         end
+
+         if @addresses.size > 1
+           ::Redis::DistributedStore.new @addresses, @options
+         else
+           ::Redis::Store.new @addresses.first.merge(@options)
+         end
+       end
+
+       def self.resolve(uri) #:api: private
+         if uri.is_a?(Hash)
+           extract_host_options_from_hash(uri)
+         else
+           extract_host_options_from_uri(uri)
+         end
+       end
+
+       def self.extract_host_options_from_hash(options)
+         options = normalize_key_names(options)
+         if host_options?(options)
+           options
+         else
+           nil
+         end
+       end
+
+       def self.normalize_key_names(options)
+         options = options.dup
+         options[:namespace] ||= options.delete(:key_prefix) # RailsSessionStore
+         options
+       end
+
+       def self.host_options?(options)
+         if options.keys.any? {|n| [:host, :db, :port].include?(n) }
+           options
+         else
+           nil # just to be clear
+         end
+       end
+
+       def self.extract_host_options_from_uri(uri)
+         uri = URI.parse(uri)
+         _, db, namespace = if uri.path
+           uri.path.split(/\//)
+         end
+
+         options = {
+           :host => uri.host,
+           :port => uri.port || DEFAULT_PORT,
+           :password => uri.password
+         }
+
+         options[:db] = db.to_i if db
+         options[:namespace] = namespace if namespace
+
+         options
+       end
+
+       private
+
+       def extract_addresses_and_options(*options)
+         options.flatten.compact.each do |token|
+           resolved = self.class.resolve(token)
+           if resolved
+             @addresses << resolved
+           else
+             @options.merge!(self.class.normalize_key_names(token))
+           end
+         end
+       end
+
+     end
+   end
+ end
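For orientation, a short usage sketch of the new factory (hypothetical URL and addresses, not taken from the diff). A string is parsed as a URI, with the path segments read as db and namespace; hashes are split into per-node addresses and shared options:

```ruby
require 'redis-store'

# URI form: host/port/password come from the URL; the first path segment is
# the db, the second the namespace.
store = Redis::Store::Factory.create 'redis://:secret@localhost:6380/1/theplaylist'

# Hash form: hashes carrying :host/:port/:db become node addresses; a hash
# with none of those (like :namespace here) is merged into the shared
# options. Two or more addresses yield a Redis::DistributedStore.
store = Redis::Store::Factory.create(
  { :host => 'localhost', :port => 6380 },
  { :host => 'localhost', :port => 6381 },
  { :namespace => 'cache' }
)
```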
@@ -43,7 +43,8 @@ class Redis
 
    if defined?(Encoding)
      def encode(string)
-       string.to_s.force_encoding(Encoding::BINARY)
+       key = string.to_s.dup
+       key.force_encoding(Encoding::BINARY)
      end
    else
      def encode(string)
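The extra `dup` matters because `String#force_encoding` mutates its receiver, and `to_s` on a String returns `self`: the old code re-encoded the caller's own object and raised on frozen input. An illustration (Ruby 1.9+):

```ruby
key = 'hello'.freeze

key.force_encoding(Encoding::BINARY)     # raises: can't modify frozen String
key.dup.force_encoding(Encoding::BINARY) # => binary copy; caller's string untouched
```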
@@ -1,5 +1,5 @@
  class Redis
    class Store < self
-     VERSION = '1.1.3'
+     VERSION = '1.1.4'
    end
  end
@@ -5,9 +5,9 @@ require 'redis/store/version'
  Gem::Specification.new do |s|
    s.name = 'redis-store'
    s.version = Redis::Store::VERSION
-   s.authors = ['Luca Guidi', 'Matt Horan']
+   s.authors = ['Luca Guidi']
    s.email = ['me@lucaguidi.com']
-   s.homepage = 'http://jodosha.github.com/redis-store'
+   s.homepage = 'http://redis-store.org/redis-store'
    s.summary = %q{Redis stores for Ruby frameworks}
    s.description = %q{Namespaced Rack::Session, Rack::Cache, I18n and cache Redis stores for Ruby web frameworks.}
 
@@ -18,13 +18,12 @@ Gem::Specification.new do |s|
    s.executables = `git ls-files -- bin/*`.split("\n").map{ |f| File.basename(f) }
    s.require_paths = ["lib"]
 
-   s.add_dependency 'redis', '>= 2.2.0'
+   s.add_dependency 'redis', '>= 2.2'
 
-   s.add_development_dependency 'rake', '~> 0.9.2'
-   s.add_development_dependency 'bundler', '~> 1.1'
-   s.add_development_dependency 'mocha', '~> 0.10.0'
-   s.add_development_dependency 'minitest', '~> 2.8.0'
-   s.add_development_dependency 'purdytest', '~> 1.0.0'
-   s.add_development_dependency 'git', '~> 1.2.5'
+   s.add_development_dependency 'rake', '~> 10'
+   s.add_development_dependency 'bundler', '~> 1.3'
+   s.add_development_dependency 'mocha', '~> 0.14.0'
+   s.add_development_dependency 'minitest', '~> 5'
+   s.add_development_dependency 'git', '~> 1.2'
  end
 
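The runtime constraint is now the looser `redis >= 2.2`, and the development dependencies move to then-current versions (rake 10, minitest 5), with purdytest dropped. Applications that need a specific client version can still pin it themselves; a hypothetical consumer Gemfile:

```ruby
# Gemfile (hypothetical): redis-store only requires redis >= 2.2, so pin the
# client yourself if you depend on a particular version.
source 'https://rubygems.org'

gem 'redis-store', '1.1.4'
gem 'redis', '~> 3.0'
```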
@@ -1,417 +1,46 @@
- # Redis configuration file example
-
- # Note on units: when memory size is needed, it is possible to specifiy
- # it in the usual form of 1k 5GB 4M and so forth:
- #
- # 1k => 1000 bytes
- # 1kb => 1024 bytes
- # 1m => 1000000 bytes
- # 1mb => 1024*1024 bytes
- # 1g => 1000000000 bytes
- # 1gb => 1024*1024*1024 bytes
- #
- # units are case insensitive so 1GB 1Gb 1gB are all the same.
-
- # By default Redis does not run as a daemon. Use 'yes' if you need it.
- # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
  daemonize yes
-
- # When running daemonized, Redis writes a pid file in /var/run/redis.pid by
- # default. You can specify a custom pid file location here.
  pidfile ./tmp/pids/node-one.pid
-
- # Accept connections on the specified port, default is 6379.
- # If port 0 is specified Redis will not listen on a TCP socket.
  port 6380
-
- # If you want you can bind a single interface, if the bind option is not
- # specified all the interfaces will listen for incoming connections.
- #
- # bind 127.0.0.1
-
- # Specify the path for the unix socket that will be used to listen for
- # incoming connections. There is no default, so Redis will not listen
- # on a unix socket when not specified.
- #
- # unixsocket /tmp/redis.sock
-
- # Close the connection after a client is idle for N seconds (0 to disable)
- timeout 300
-
- # Set server verbosity to 'debug'
- # it can be one of:
- # debug (a lot of information, useful for development/testing)
- # verbose (many rarely useful info, but not a mess like the debug level)
- # notice (moderately verbose, what you want in production probably)
- # warning (only very important / critical messages are logged)
+ timeout 0
  loglevel verbose
-
- # Specify the log file name. Also 'stdout' can be used to force
- # Redis to log on the standard output. Note that if you use standard
- # output for logging but daemonize, logs will be sent to /dev/null
  logfile stdout
-
- # To enable logging to the system logger, just set 'syslog-enabled' to yes,
- # and optionally update the other syslog parameters to suit your needs.
- # syslog-enabled no
-
- # Specify the syslog identity.
- # syslog-ident redis
-
- # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
- # syslog-facility local0
-
- # Set the number of databases. The default database is DB 0, you can select
- # a different one on a per-connection basis using SELECT <dbid> where
- # dbid is a number between 0 and 'databases'-1
  databases 16
 
- ################################ SNAPSHOTTING #################################
- #
- # Save the DB on disk:
- #
- # save <seconds> <changes>
- #
- # Will save the DB if both the given number of seconds and the given
- # number of write operations against the DB occurred.
- #
- # In the example below the behaviour will be to save:
- # after 900 sec (15 min) if at least 1 key changed
- # after 300 sec (5 min) if at least 10 keys changed
- # after 60 sec if at least 10000 keys changed
- #
- # Note: you can disable saving at all commenting all the "save" lines.
-
  save 900 1
  save 300 10
  save 60 10000
 
- # Compress string objects using LZF when dump .rdb databases?
- # For default that's set to 'yes' as it's almost always a win.
- # If you want to save some CPU in the saving child set it to 'no' but
- # the dataset will likely be bigger if you have compressible values or keys.
+ # stop-writes-on-bgsave-error yes
  rdbcompression yes
-
- # The filename where to dump the DB
+ # rdbchecksum yes
  dbfilename tmp/node-one-dump.rdb
-
- # The working directory.
- #
- # The DB will be written inside this directory, with the filename specified
- # above using the 'dbfilename' configuration directive.
- #
- # Also the Append Only File will be created inside this directory.
- #
- # Note that you must specify a directory here, not a file name.
  dir ./
 
- ################################# REPLICATION #################################
-
- # Master-Slave replication. Use slaveof to make a Redis instance a copy of
- # another Redis server. Note that the configuration is local to the slave
- # so for example it is possible to configure the slave to save the DB with a
- # different interval, or to listen to another port, and so on.
- #
- # slaveof <masterip> <masterport>
-
- # If the master is password protected (using the "requirepass" configuration
- # directive below) it is possible to tell the slave to authenticate before
- # starting the replication synchronization process, otherwise the master will
- # refuse the slave request.
- #
- # masterauth <master-password>
-
- # When a slave lost the connection with the master, or when the replication
- # is still in progress, the slave can act in two different ways:
- #
- # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
- # still reply to client requests, possibly with out of data data, or the
- # data set may just be empty if this is the first synchronization.
- #
- # 2) if slave-serve-stale data is set to 'no' the slave will reply with
- # an error "SYNC with master in progress" to all the kind of commands
- # but to INFO and SLAVEOF.
- #
  slave-serve-stale-data yes
-
- ################################## SECURITY ###################################
-
- # Require clients to issue AUTH <PASSWORD> before processing any other
- # commands. This might be useful in environments in which you do not trust
- # others with access to the host running redis-server.
- #
- # This should stay commented out for backward compatibility and because most
- # people do not need auth (e.g. they run their own servers).
- #
- # Warning: since Redis is pretty fast an outside user can try up to
- # 150k passwords per second against a good box. This means that you should
- # use a very strong password otherwise it will be very easy to break.
- #
- # requirepass foobared
-
- # Command renaming.
- #
- # It is possilbe to change the name of dangerous commands in a shared
- # environment. For instance the CONFIG command may be renamed into something
- # of hard to guess so that it will be still available for internal-use
- # tools but not available for general clients.
- #
- # Example:
- #
- # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
- #
- # It is also possilbe to completely kill a command renaming it into
- # an empty string:
- #
- # rename-command CONFIG ""
-
- ################################### LIMITS ####################################
-
- # Set the max number of connected clients at the same time. By default there
- # is no limit, and it's up to the number of file descriptors the Redis process
- # is able to open. The special value '0' means no limits.
- # Once the limit is reached Redis will close all the new connections sending
- # an error 'max number of clients reached'.
- #
- # maxclients 128
-
- # Don't use more memory than the specified amount of bytes.
- # When the memory limit is reached Redis will try to remove keys with an
- # EXPIRE set. It will try to start freeing keys that are going to expire
- # in little time and preserve keys with a longer time to live.
- # Redis will also try to remove objects from free lists if possible.
- #
- # If all this fails, Redis will start to reply with errors to commands
- # that will use more memory, like SET, LPUSH, and so on, and will continue
- # to reply to most read-only commands like GET.
- #
- # WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
- # 'state' server or cache, not as a real DB. When Redis is used as a real
- # database the memory usage will grow over the weeks, it will be obvious if
- # it is going to use too much memory in the long run, and you'll have the time
- # to upgrade. With maxmemory after the limit is reached you'll start to get
- # errors for write operations, and this may even lead to DB inconsistency.
- #
- # maxmemory <bytes>
-
- # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
- # is reached? You can select among five behavior:
- #
- # volatile-lru -> remove the key with an expire set using an LRU algorithm
- # allkeys-lru -> remove any key accordingly to the LRU algorithm
- # volatile-random -> remove a random key with an expire set
- # allkeys->random -> remove a random key, any key
- # volatile-ttl -> remove the key with the nearest expire time (minor TTL)
- # noeviction -> don't expire at all, just return an error on write operations
- #
- # Note: with all the kind of policies, Redis will return an error on write
- # operations, when there are not suitable keys for eviction.
- #
- # At the date of writing this commands are: set setnx setex append
- # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
- # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
- # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
- # getset mset msetnx exec sort
- #
- # The default is:
- #
- # maxmemory-policy volatile-lru
-
- # LRU and minimal TTL algorithms are not precise algorithms but approximated
- # algorithms (in order to save memory), so you can select as well the sample
- # size to check. For instance for default Redis will check three keys and
- # pick the one that was used less recently, you can change the sample size
- # using the following configuration directive.
- #
- # maxmemory-samples 3
-
- ############################## APPEND ONLY MODE ###############################
-
- # By default Redis asynchronously dumps the dataset on disk. If you can live
- # with the idea that the latest records will be lost if something like a crash
- # happens this is the preferred way to run Redis. If instead you care a lot
- # about your data and don't want to that a single record can get lost you should
- # enable the append only mode: when this mode is enabled Redis will append
- # every write operation received in the file appendonly.aof. This file will
- # be read on startup in order to rebuild the full dataset in memory.
- #
- # Note that you can have both the async dumps and the append only file if you
- # like (you have to comment the "save" statements above to disable the dumps).
- # Still if append only mode is enabled Redis will load the data from the
- # log file at startup ignoring the dump.rdb file.
- #
- # IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
- # log file in background when it gets too big.
+ # slave-read-only yes
+ # slave-priority 100
 
  appendonly no
-
- # The name of the append only file (default: "appendonly.aof")
- # appendfilename appendonly.aof
-
- # The fsync() call tells the Operating System to actually write data on disk
- # instead to wait for more data in the output buffer. Some OS will really flush
- # data on disk, some other OS will just try to do it ASAP.
- #
- # Redis supports three different modes:
- #
- # no: don't fsync, just let the OS flush the data when it wants. Faster.
- # always: fsync after every write to the append only log . Slow, Safest.
- # everysec: fsync only if one second passed since the last fsync. Compromise.
- #
- # The default is "everysec" that's usually the right compromise between
- # speed and data safety. It's up to you to understand if you can relax this to
- # "no" that will will let the operating system flush the output buffer when
- # it wants, for better performances (but if you can live with the idea of
- # some data loss consider the default persistence mode that's snapshotting),
- # or on the contrary, use "always" that's very slow but a bit safer than
- # everysec.
- #
- # If unsure, use "everysec".
-
- # appendfsync always
  appendfsync everysec
- # appendfsync no
-
- # When the AOF fsync policy is set to always or everysec, and a background
- # saving process (a background save or AOF log background rewriting) is
- # performing a lot of I/O against the disk, in some Linux configurations
- # Redis may block too long on the fsync() call. Note that there is no fix for
- # this currently, as even performing fsync in a different thread will block
- # our synchronous write(2) call.
- #
- # In order to mitigate this problem it's possible to use the following option
- # that will prevent fsync() from being called in the main process while a
- # BGSAVE or BGREWRITEAOF is in progress.
- #
- # This means that while another child is saving the durability of Redis is
- # the same as "appendfsync none", that in pratical terms means that it is
- # possible to lost up to 30 seconds of log in the worst scenario (with the
- # default Linux settings).
- #
- # If you have latency problems turn this to "yes". Otherwise leave it as
- # "no" that is the safest pick from the point of view of durability.
  no-appendfsync-on-rewrite no
+ # auto-aof-rewrite-percentage 100
+ # auto-aof-rewrite-min-size 64mb
 
- ################################ VIRTUAL MEMORY ###############################
+ # lua-time-limit 5000
 
- # Virtual Memory allows Redis to work with datasets bigger than the actual
- # amount of RAM needed to hold the whole dataset in memory.
- # In order to do so very used keys are taken in memory while the other keys
- # are swapped into a swap file, similarly to what operating systems do
- # with memory pages.
- #
- # To enable VM just set 'vm-enabled' to yes, and set the following three
- # VM parameters accordingly to your needs.
+ # slowlog-log-slower-than 10000
+ # slowlog-max-len 128
 
- vm-enabled no
- # vm-enabled yes
-
- # This is the path of the Redis swap file. As you can guess, swap files
- # can't be shared by different Redis instances, so make sure to use a swap
- # file for every redis process you are running. Redis will complain if the
- # swap file is already in use.
- #
- # The best kind of storage for the Redis swap file (that's accessed at random)
- # is a Solid State Disk (SSD).
- #
- # *** WARNING *** if you are using a shared hosting the default of putting
- # the swap file under /tmp is not secure. Create a dir with access granted
- # only to Redis user and configure Redis to create the swap file there.
- vm-swap-file /tmp/redis.swap
-
- # vm-max-memory configures the VM to use at max the specified amount of
- # RAM. Everything that deos not fit will be swapped on disk *if* possible, that
- # is, if there is still enough contiguous space in the swap file.
- #
- # With vm-max-memory 0 the system will swap everything it can. Not a good
- # default, just specify the max amount of RAM you can in bytes, but it's
- # better to leave some margin. For instance specify an amount of RAM
- # that's more or less between 60 and 80% of your free RAM.
- vm-max-memory 0
-
- # Redis swap files is split into pages. An object can be saved using multiple
- # contiguous pages, but pages can't be shared between different objects.
- # So if your page is too big, small objects swapped out on disk will waste
- # a lot of space. If you page is too small, there is less space in the swap
- # file (assuming you configured the same number of total swap file pages).
- #
- # If you use a lot of small objects, use a page size of 64 or 32 bytes.
- # If you use a lot of big objects, use a bigger page size.
- # If unsure, use the default :)
- vm-page-size 32
-
- # Number of total memory pages in the swap file.
- # Given that the page table (a bitmap of free/used pages) is taken in memory,
- # every 8 pages on disk will consume 1 byte of RAM.
- #
- # The total swap size is vm-page-size * vm-pages
- #
- # With the default of 32-bytes memory pages and 134217728 pages Redis will
- # use a 4 GB swap file, that will use 16 MB of RAM for the page table.
- #
- # It's better to use the smallest acceptable value for your application,
- # but the default is large in order to work in most conditions.
- vm-pages 134217728
-
- # Max number of VM I/O threads running at the same time.
- # This threads are used to read/write data from/to swap file, since they
- # also encode and decode objects from disk to memory or the reverse, a bigger
- # number of threads can help with big objects even if they can't help with
- # I/O itself as the physical device may not be able to couple with many
- # reads/writes operations at the same time.
- #
- # The special value of 0 turn off threaded I/O and enables the blocking
- # Virtual Memory implementation.
- vm-max-threads 4
-
- ############################### ADVANCED CONFIG ###############################
-
- # Hashes are encoded in a special way (much more memory efficient) when they
- # have at max a given numer of elements, and the biggest element does not
- # exceed a given threshold. You can configure this limits with the following
- # configuration directives.
- hash-max-zipmap-entries 512
- hash-max-zipmap-value 64
-
- # Similarly to hashes, small lists are also encoded in a special way in order
- # to save a lot of space. The special representation is only used when
- # you are under the following limits:
+ # hash-max-ziplist-entries 512
+ # hash-max-ziplist-value 64
  list-max-ziplist-entries 512
  list-max-ziplist-value 64
-
- # Sets have a special encoding in just one case: when a set is composed
- # of just strings that happens to be integers in radix 10 in the range
- # of 64 bit signed integers.
- # The following configuration setting sets the limit in the size of the
- # set in order to use this special memory saving encoding.
  set-max-intset-entries 512
+ # zset-max-ziplist-entries 128
+ # zset-max-ziplist-value 64
 
- # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
- # order to help rehashing the main Redis hash table (the one mapping top-level
- # keys to values). The hash table implementation redis uses (see dict.c)
- # performs a lazy rehashing: the more operation you run into an hash table
- # that is rhashing, the more rehashing "steps" are performed, so if the
- # server is idle the rehashing is never complete and some more memory is used
- # by the hash table.
- #
- # The default is to use this millisecond 10 times every second in order to
- # active rehashing the main dictionaries, freeing memory when possible.
- #
- # If unsure:
- # use "activerehashing no" if you have hard latency requirements and it is
- # not a good thing in your environment that Redis can reply form time to time
- # to queries with 2 milliseconds delay.
- #
- # use "activerehashing yes" if you don't have such hard requirements but
- # want to free memory asap when possible.
  activerehashing yes
 
- ################################## INCLUDES ###################################
-
- # Include one or more other config files here. This is useful if you
- # have a standard template that goes to all redis server but also need
- # to customize a few per-server settings. Include files can include
- # other files, so use this wisely.
- #
- # include /path/to/local.conf
- # include /path/to/other.conf
+ # client-output-buffer-limit normal 0 0 0
+ # client-output-buffer-limit slave 256mb 64mb 60
+ # client-output-buffer-limit pubsub 32mb 8mb 60