capistrano-resque_monit 0.0.1 → 0.1.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
-   metadata.gz: aee995561346647eec7ac2c16742b4090722454c
-   data.tar.gz: 182950bdc7039715846ad45c76eec6e00a54b5e5
+   metadata.gz: ca0730e0d46a496c024b0bf002a3175dafbbf485
+   data.tar.gz: 8312b1c2de92a957ebf06e8fb5683643e442d438
  SHA512:
-   metadata.gz: 6afb8e1be581124abe2101161c221f95406ac13eed56504ad30886fec040fd36396120d34b57344af1e037e5c94fe51cf4c5e9e4dd3b51bfa6aca11716a6f33f
-   data.tar.gz: 9745f079e2aa734ead85beba8aebd3a82b4019cf4e55ef93d90ddc1bccc2eca718e1ab59ff7cf437bf59a6a4ef8008e97ab45a3df264ae440a70a0b411528b44
+   metadata.gz: 75f389b5bca47f69c314472202050acc534da7a068a268e11eea0fdef4812a528f65a8478ba0fee135eb54d2ed463f868f166b87b4b79dbb764ca2c1bc3aa75f
+   data.tar.gz: cbed3a0119725c2ae88fb28b5aef68044b15e0de9b851f35238f060ac5c3b98eaa98e04d059511e138bc8fcf5fae3e1a1f3faeadbab0f50bc9aad4cc1f7e60d4
data/README.md CHANGED
@@ -2,15 +2,21 @@
 
  A set of Capistrano scripts for configuring resque workers to be monitored by monit
 
+ > This is compatible with [Capistrano 3](https://github.com/capistrano/capistrano).
+
+ > This is compatible with [Resque 1.x](https://github.com/resque/resque/tree/1-x-stable) as the master (2.0 release)
+ is still under development and has not been released.
+
  ## Installation
 
- ### Note
- This gem requires Capistrano to deploy using `sudo`. This is because scripts are generated and copied
- to `/usr/local/bin`, `/etc/init.d/` and `/etc/monit.d`.
+ > **Note** This gem requires Capistrano to deploy using `sudo`. This is because generated scripts are copied
+ to `/usr/local/bin`, `/etc/init.d/` and `/etc/monit.d`.
 
  Add this line to your application's Gemfile:
 
- gem 'capistrano-resque_monit'
+ ```ruby
+ gem 'capistrano-resque_monit'
+ ```
 
  And then execute:
 
@@ -24,18 +30,179 @@ Or install it yourself as:
 
  Add to your `Capfile`:
 
- require 'capistrano/resque_monit/tasks'
+ ```ruby
+ require 'capistrano/resque_monit/tasks'
+ ```
+
+ Set up values for monit in `deploy.rb`:
+
+ Username and password to access the monit httpd on each server.
+ If not provided, the username defaults to `monit-#{application}` and
+ a random password is generated for each deployment.
+
+ ```ruby
+ set :monit_user, ENV['MONIT_USER']
+ set :monit_password, ENV['MONIT_PASSWORD']
+ ```
+
+ If you are using M/Monit, add the URL of the collector. You should include
+ the username and password in this URL.
+
+ ```ruby
+ set :mmonit_url, ENV['MMONIT_URL']
+ ```
+
+ If you want monit on individual servers to send you email, set an address
+ to send those alerts to. You will also need to configure the username,
+ password, and SMTP server used to send them.
+
+ ```ruby
+ set :monit_email, ENV['MONIT_EMAIL_TO']
+ set :monit_email_user, ENV['MONIT_EMAIL_USER']
+ set :monit_email_password, ENV['MONIT_EMAIL_PASSWORD']
+ set :monit_email_smtp, ENV['MONIT_EMAIL_SMTP']
+ ```
+
+ You can configure the host and port of the Redis instance that holds the resque queues. The host
+ defaults to the first app server and the port to 6379, but you may want to change these.
+
+ ```ruby
+ set :resque_redis_host, -> { 'localhost' }
+ set :resque_redis_port, -> { 6379 }
+ ```
+
+ You can set a namespace for resque jobs. This defaults to your `application` name
+ (from Capistrano). *This must not contain spaces.*
+
+ ```ruby
+ set :resque_application, 'APP_NAME'
+ ```
+
+ Define a `:worker` role for each environment where the workers will be installed. For example,
+ you might have separate worker servers in production and run all workers on the app server in staging.
+
+ ```ruby
+ # config/deploy/production.rb
+ server 'app.example.com', user: 'deploy', roles: %w(app web db)
+ server 'worker1.example.com', user: 'deploy', roles: %w(worker)
+ server 'worker2.example.com', user: 'deploy', roles: %w(worker)
+ ```
+
+ ```ruby
+ # config/deploy/staging.rb
+ server 'staging.example.com', user: 'deploy', roles: %w(app web db worker)
+ ```
+
+ Finally, add a task called `resque:config_workers` to your `deploy.rb` to define the resque queues:
+
+ ```ruby
+ namespace :resque do
+   task :config_workers do
+     unless fetch(:no_release)
+       on roles :worker do |host|
+         resque_worker_initd 'import', host
+         resque_worker_monitd 'import', host
+
+         resque_worker_initd 'process', host
+         resque_worker_monitd 'process', host
+
+         if fetch(:rails_env) == 'production'
+           resque_worker_initd 'import2', host, queue: 'import'
+           resque_worker_monitd 'import2', host
+
+           resque_worker_initd 'process2', host, queue: 'process'
+           resque_worker_monitd 'process2', host
+         end
+       end
+     end
+   end
+ end
+ ```
+
+ ## Commands
+
+ The two commands that you use to define the worker configuration are `resque_worker_initd` and
+ `resque_worker_monitd`.
+
+ ### `resque_worker_initd`
+
+ Creates a file in `/etc/init.d` to start and stop the resque worker.
+
+ Each call to this command must use a unique name. You can change the queue(s) that the worker works from with the
+ `queue` option.
+
+ `queue`:
+ The name of the queue from which the worker pulls jobs.
+
+ This sets the [queue list](https://github.com/resque/resque/tree/1-x-stable#priorities-and-queue-lists) sent to the
+ worker task, so you can give it a single queue, a comma-separated list of queues, or "*" to process all queues.
+
+ If not provided, the queue is assumed to match the worker name.
+
+ ```ruby
+ resque_worker_initd 'import'
+ resque_worker_initd 'import2', queue: 'import'
+ resque_worker_initd 'priorities', queue: 'critical,high,low'
+ resque_worker_initd 'everything', queue: '*'
+ ```
+
+ ### `resque_worker_monitd`
+
+ Creates a file in `/etc/monit.d` to monitor the resque worker.
+
+ There are a number of options you can use to tweak the monit rules:
+
+ `totalmem`:
+ Number of MB of memory that monit will allow this worker to consume before recycling it. Default is 675.
+
+ `depends`:
+ Other monit processes that this worker depends on. Includes `redis` by default.
+
+ You might want this, for example, if you have a worker that uses `resque-scheduler`, in which case you would
+ include it in the options:
+
+ ```ruby
+ resque_worker_monitd 'resque_worker_vacuum', depends: 'resque_scheduler'
+ ```
+
+
+ ## Tasks
+
+ The following tasks are defined for managing your `monit` and `resque` processes.
+
+ ### monit:config
+
+ Rebuild the monit configurations and reload monit on each server.
+
+ ### monit:status
+
+ Get verbose status of monitored processes from monit.
+
+ ### monit:log
+
+ Get a streaming log of monit activity from all servers.
+
+ ### monit:start
+
+ Start all monit processes on all servers. This will start all monitored processes,
+ not just the resque jobs managed by this gem.
+
+ ### monit:stop
+
+ Stop all monit processes on all servers. This will stop all monitored processes,
+ not just the resque jobs managed by this gem.
+
+ ### monit:reload
 
- Set resque prefix for app in `deploy.rb`
+ Reload monit configuration and display the summary.
 
- set :resque_prefix 'APP_NAME'
+ ### resque:restart
 
- Setup values for monit in `deploy.rb`
+ Restart all workers for this application using monit. This only restarts the
+ resque workers configured by this gem.
 
- set :monit_user
- set :monit_password
- set :monit_url
- set :monit_email
 
  ## Contributing
 
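The `resque:restart` task described in the README restarts only this application's workers by addressing a monit group. A minimal sketch of the commands involved, assuming the group-name scheme shown elsewhere in this diff (`'myapp'` is a hypothetical example value for `:resque_application`):

```ruby
# Sketch of the monit commands behind resque:restart (illustration only).
def restart_commands(resque_application)
  group = "resque_workers_#{resque_application}"
  [
    'monit reload',               # pick up regenerated /etc/monit.d files
    "monit -g #{group} restart",  # restart only this application's workers
    "monit -g #{group} summary"   # then show their status
  ]
end

restart_commands('myapp')
```

Grouping the workers per application is what lets `monit -g` restart them without touching other monitored processes on the same host.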
data/lib/capistrano/resque_monit/tasks/monit.rake ADDED
@@ -0,0 +1,127 @@
+ namespace :monit do
+   desc 'Rebuild the monit configurations and reload monit on each server.'
+   task :config do
+     on roles [:app, :worker] do
+       execute :sudo, 'chkconfig monit on'
+     end
+
+     unless fetch(:no_release, false)
+       on roles [:app, :worker] do |host|
+         %w(
+           etc/init.d/monit
+           etc/monit.d/logging
+         ).each do |template|
+           content = Capistrano::ResqueMonit.template(template)
+           Capistrano::ResqueMonit.put_as_root(content, "/#{template}", host)
+         end
+
+         if fetch(:monit_email)
+           %w(
+             etc/monit.d/alert
+             etc/monit.d/mailserver
+           ).each do |template|
+             content = Capistrano::ResqueMonit.template(
+               template,
+               EMAIL: fetch(:monit_email),
+               MAIL_SERVER: fetch(:monit_email_smtp),
+               MAIL_USER: fetch(:monit_email_user),
+               MAIL_PASSWORD: fetch(:monit_email_password)
+             )
+             Capistrano::ResqueMonit.put_as_root(content, "/#{template}", host)
+           end
+         end
+       end
+
+       app_hostname = nil
+       on roles :app do |host|
+         app_hostname ||= host.hostname
+
+         content = Capistrano::ResqueMonit.template('etc/monit.d/redis')
+         Capistrano::ResqueMonit.put_as_root(content, '/etc/monit.d/redis', host)
+
+         content = Capistrano::ResqueMonit.template(
+           'etc/monitrc',
+           USER: fetch(:monit_user),
+           PASSWORD: fetch(:monit_password),
+           URL: fetch(:mmonit_url),
+         )
+         Capistrano::ResqueMonit.put_as_root(content, '/etc/monitrc', host, :mode => 0600)
+       end
+
+       on roles :worker do |host|
+         file = Capistrano::ResqueMonit.file_name('resque_scheduler')
+         script = Capistrano::ResqueMonit.template(
+           'etc/init.d/resque_scheduler',
+           gem_home: fetch(:gem_home),
+           current_path: fetch(:current_path),
+           rails_env: fetch(:rails_env),
+           file: file
+         )
+         Capistrano::ResqueMonit.put_as_root(script, "/etc/init.d/#{file}", host, :mode => 0755)
+         resque_worker_monitd 'resque_scheduler', host
+
+         content = Capistrano::ResqueMonit.template(
+           'usr/local/bin/redis-check-queue',
+           RESQUE_HOST: fetch(:resque_redis_host, app_hostname),
+           RESQUE_PORT: fetch(:resque_redis_port)
+         )
+         Capistrano::ResqueMonit.put_as_root(content, '/usr/local/bin/redis-check-queue', host, :mode => 0755)
+       end
+     end
+   end
+
+   desc 'Get verbose status of monitored processes from monit.'
+   task :status do
+     on roles [:app, :worker] do
+       execute :sudo, 'monit status'
+     end
+   end
+
+   desc 'Get a streaming log of monit activity from all servers.'
+   task :log do
+     on roles [:app, :worker] do
+       execute :sudo, 'tail -f /var/log/monit'
+     end
+   end
+
+   desc 'Start all monit processes on all servers.'
+   task :start do
+     on roles [:app, :worker] do
+       execute :sudo, 'monit start all'
+     end
+   end
+
+   desc 'Stop all monit processes on all servers.'
+   task :stop do
+     on roles [:app, :worker] do
+       execute :sudo, 'monit stop all'
+     end
+   end
+
+   desc 'Reload monit configuration.'
+   task :reload do
+     on roles [:app, :worker] do
+       execute :sudo, 'monit reload'
+       execute :sudo, 'monit summary all'
+     end
+   end
+ end
+
+ after 'monit:config', 'monit:reload'
+
+ namespace :load do
+   task :defaults do
+     set :monit_user, ->{ "monit-#{fetch(:application)}" }  # Username for connecting to monit on individual servers.
+     set :monit_password, ->{ SecureRandom.hex(8) }         # Password for connecting to monit on individual servers.
+
+     set :monit_email, ->{ nil }           # Email address that notifications are sent to by monit from individual servers.
+     set :monit_email_user, ->{ nil }      # Username to send email notifications from monit.
+     set :monit_email_password, ->{ nil }  # Password to send email notifications from monit.
+     set :monit_email_smtp, ->{ nil }      # Hostname of the SMTP server to send notifications through.
+
+     set :mmonit_url, ->{ nil }            # URL of the M/Monit instance to report up to. Should contain username and password.
+
+     set :resque_redis_host, -> { nil }    # Host on which redis is running for the resque queues.
+     set :resque_redis_port, -> { 6379 }   # Port redis is running on for the resque queues.
+   end
+ end
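All scripts generated by `monit:config` share one file-naming scheme, so the init.d script, the monit.d stanza, and the pid file for a worker line up by name. A standalone sketch of that scheme (a rewrite for illustration, not the gem's `file_name` method itself):

```ruby
# Sketch of the naming scheme for generated /etc/init.d and /etc/monit.d
# scripts: the resque application namespace plus the worker name.
def worker_file_name(resque_application, worker)
  "resque_worker_#{resque_application}_#{worker}"
end

worker_file_name('myapp', 'import')
```

Because the application name is embedded in every file name, several applications can install workers on the same host without their scripts colliding.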
data/lib/capistrano/resque_monit/tasks/resque.rake ADDED
@@ -0,0 +1,72 @@
+ namespace :resque do
+
+   desc 'Restart all workers for this application using monit'
+   task :restart do
+     unless fetch(:no_release, false)
+       on roles :worker do
+         execute :sudo, 'monit reload'
+         sleep 2
+         execute :sudo, "monit -g resque_workers_#{fetch(:resque_application)} restart"
+         execute :sudo, "monit -g resque_workers_#{fetch(:resque_application)} summary"
+       end
+     end
+   end
+
+   desc <<-EOS
+     Set up init.d and monit.d files for all resque workers.
+
+     This task does nothing by default. You should define it in your `deploy.rb` and
+     configure your workers with `resque_worker_monitd` and `resque_worker_initd`.
+   EOS
+   task :config_workers do
+   end
+ end
+
+ after 'deploy', 'resque:restart'
+ before 'monit:config', 'resque:config_workers'
+
+
+ namespace :load do
+   task :defaults do
+     set :resque_application, ->{ fetch(:application) } # Used to namespace the workers; should not contain spaces.
+   end
+ end
+
+
+ def resque_worker_monitd(name, host, options = {})
+   file = Capistrano::ResqueMonit.file_name(name)
+
+   mem = options[:totalmem] || '675'
+
+   depends = []
+   depends << 'redis'
+   depends << options[:depends]
+   depends.flatten!
+   depends.compact!
+   depends = depends.empty? ? '' : "depends on #{depends.join(', ')}"
+
+   script = Capistrano::ResqueMonit.template(
+     'resque_monitd',
+     depends: depends,
+     file: file,
+     current_path: fetch(:current_path),
+     mem: mem,
+     resque_application: fetch(:resque_application)
+   )
+   Capistrano::ResqueMonit.put_as_root(script, "/etc/monit.d/#{file}", host, :mode => 0644)
+ end
+
+
+ def resque_worker_initd(worker, host, options = {})
+   queue = options[:queue] || worker
+   file = Capistrano::ResqueMonit.file_name(worker)
+   script = Capistrano::ResqueMonit.template(
+     'resque_initd',
+     gem_home: fetch(:gem_home),
+     current_path: fetch(:current_path),
+     rails_env: fetch(:rails_env),
+     queue: queue,
+     file: file
+   )
+   Capistrano::ResqueMonit.put_as_root(script, "/etc/init.d/#{file}", host, :mode => 0755)
+ end
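The `depends` handling in `resque_worker_initd`/`resque_worker_monitd` above boils down to a small helper; this standalone sketch (a rewrite for illustration, not the gem's code) shows how the monit `depends on` clause is assembled:

```ruby
# Sketch of how resque_worker_monitd builds the monit "depends on" clause:
# 'redis' is always included, plus whatever the :depends option supplies
# (a string, an array, or nothing).
def depends_clause(extra = nil)
  deps = (['redis'] << extra).flatten.compact
  deps.empty? ? '' : "depends on #{deps.join(', ')}"
end

depends_clause('resque_scheduler')
```

In the generated monit stanza this line makes monit start `redis` (and any listed processes) before the worker, and stop the worker first on the way down.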
data/lib/capistrano/resque_monit/tasks.rb CHANGED
@@ -1,25 +1,3 @@
- load 'capistrano/resque_monit/monit'
- load 'capistrano/resque_monit/resque'
-
- # TODO: Update templates path to Gem root path
-
- namespace :monit do
-   desc 'Set up base files'
-   task :setup do
-     sed_monitrc
-     run "cd #{deploy_to}/current && sudo cp templates/*.conf /etc"
-   end
-
-   desc 'Set up init.d and monit.d files for monit'
-   task :config_app, :roles => :app, :except => { :no_release => true } do
-     sed_monitd 'nginx', :app
-   end
- end
-
- namespace :resque_monit do
-   desc 'Set up init.d and monit.d files for all resque_monit workers'
-   task :config_worker, :roles => :worker, :except => { :no_release => true } do
-     sed_initd 'resque_scheduler', :worker
-     sed_bin 'redis-check-queue', :worker
-   end
- end
+ require 'capistrano/resque_monit'
+ load File.expand_path('../tasks/monit.rake', __FILE__)
+ load File.expand_path('../tasks/resque.rake', __FILE__)
data/lib/capistrano/resque_monit/version.rb CHANGED
@@ -1,5 +1,5 @@
  module Capistrano
    module ResqueMonit
-     VERSION = "0.0.1"
+     VERSION = "0.1.0"
    end
  end
data/lib/capistrano/resque_monit.rb CHANGED
@@ -2,5 +2,31 @@ require "capistrano/resque_monit/version"
 
  module Capistrano
    module ResqueMonit
+
+     def self.root
+       @root ||= Gem::Specification.find_by_name('capistrano-resque_monit').gem_dir
+     end
+
+     def self.file_name(name)
+       "resque_worker_#{fetch(:resque_application)}_#{name}"
+     end
+
+     def self.template(filename, values = {})
+       template = File.open(File.join(Capistrano::ResqueMonit.root, 'templates', filename)).read
+       unless values.empty?
+         template.gsub!(/#\{([^}]+)\}/) { values[$1.to_sym] }
+       end
+       template
+     end
+
+     def self.put_as_root(content, destination, host, options = {})
+       SSHKit::Coordinator.new(host).each do
+         basename ||= File.basename(destination)
+         tmp_path = "#{current_path}/tmp/#{basename}"
+         upload! StringIO.new(content), tmp_path, options
+         execute :sudo, "mv #{tmp_path} #{destination}"
+         execute :sudo, "chown root:root #{destination}"
+       end
+     end
    end
  end
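The `template` helper in the hunk above substitutes literal `#{...}` markers from a values hash, so the template files themselves are never run through Ruby string interpolation. A minimal standalone sketch of that mechanism:

```ruby
# Sketch of the placeholder substitution in ResqueMonit.template:
# literal "#{NAME}" markers in the template text are looked up as
# symbols in a values hash; with no values, the text passes through.
def fill_template(text, values = {})
  return text if values.empty?
  text.gsub(/#\{([^}]+)\}/) { values[$1.to_sym] }
end

fill_template('allow #{USER}:#{PASSWORD}', USER: 'monit-myapp', PASSWORD: 'secret')
```

Note that a placeholder missing from the hash is replaced with an empty string, since the block returns `nil` for it.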
data/templates/etc/init.d/resque_scheduler CHANGED
@@ -1,17 +1,17 @@
  #! /bin/sh
 
- cd_path="%CURRENT%"
- export_gem_home="export GEM_HOME=%GEMHOME%"
+ cd_path="#{current_path}"
+ export_gem_home="export GEM_HOME=#{gem_home}"
 
  case "$1" in
    start)
      echo -n "Starting resque_scheduler: "
-     su - deploy -c "cd $cd_path && $export_gem_home && bundle exec rake RAILS_ENV=%RAILSENV% PIDFILE=%PIDFILE% resque:scheduler >> log/resque_scheduler.log 2>&1 &"
+     su - deploy -c "cd $cd_path && $export_gem_home && bundle exec rake RAILS_ENV=#{rails_env} PIDFILE=tmp/pids/#{file}.pid resque:scheduler >> log/resque_scheduler.log 2>&1 &"
      echo "OK."
      ;;
    stop)
      echo -n "Stopping resque_scheduler: "
-     su - deploy -c "kill -QUIT `cat $cd_path/%PIDFILE%` && rm -f $cd_path/%PIDFILE% && exit 0"
+     su - deploy -c "kill -QUIT `cat $cd_path/tmp/pids/#{file}.pid` && rm -f $cd_path/tmp/pids/#{file}.pid && exit 0"
      echo "done."
      ;;
    *)
data/templates/etc/monit.d/alert CHANGED
@@ -1 +1 @@
- set alert %EMAIL% but not on { action }
+ set alert #{EMAIL} but not on { action }
data/templates/etc/monit.d/mailserver ADDED
@@ -0,0 +1,2 @@
+ set mailserver #{MAIL_SERVER} PORT 587
+   USERNAME #{MAIL_USER} PASSWORD '#{MAIL_PASSWORD}' using TLSV1
data/templates/etc/monitrc CHANGED
@@ -6,7 +6,7 @@ set logfile syslog facility log_daemon
  set mail-format { subject: monit alert [$HOST]: $EVENT $SERVICE }
  include /etc/monit.d/*
  set httpd port 2812 and
-     allow %USER%:%PASSWORD%
+     allow #{USER}:#{PASSWORD}
 
  set eventqueue basedir /var/monit slots 1000
 
@@ -14,4 +14,4 @@ set eventqueue basedir /var/monit slots 1000
  # the URL, that is, monit:monit, specify a username and password
  # registered in M/Monit. If you change the password for the monit
  # user in M/Monit it must be changed here as well.
- set mmonit %URL%
+ set mmonit #{URL}
data/templates/resque_initd CHANGED
@@ -1,7 +1,7 @@
  #! /bin/sh
 
  cd_path="#{current_path}"
- export_gem_home="export GEM_HOME=#{rvm_path}/gems/#{rvm_ruby_string}"
+ export_gem_home="export GEM_HOME=#{rvm_path}/gems/#{rvm_ruby_version}"
 
  case "$1" in
    start)
data/templates/resque_monitd CHANGED
@@ -4,4 +4,4 @@ check process #{file}
    stop program = "/etc/init.d/#{file} stop"
    #{depends}
    if totalmem is greater than #{mem} MB for 10 cycles then restart # eating up memory?
-   group #{resque_prefix}_resque_workers
+   group resque_workers_#{resque_application}
data/templates/usr/local/bin/redis-check-queue CHANGED
@@ -1,2 +1,2 @@
  #!/bin/bash
- /usr/local/bin/redis-cli -h %RESQUE_HOST% -p %RESQUE_PORT% ping
+ /usr/local/bin/redis-cli -h #{RESQUE_HOST} -p #{RESQUE_PORT} ping
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: capistrano-resque_monit
  version: !ruby/object:Gem::Version
-   version: 0.0.1
+   version: 0.1.0
  platform: ruby
  authors:
  - Gino Clement
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2015-07-15 00:00:00.000000000 Z
+ date: 2015-07-17 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: capistrano
@@ -69,18 +69,17 @@ files:
  - Rakefile
  - capistrano-resque_monit.gemspec
  - lib/capistrano/resque_monit.rb
- - lib/capistrano/resque_monit/monit.rb
- - lib/capistrano/resque_monit/resque.rb
  - lib/capistrano/resque_monit/tasks.rb
+ - lib/capistrano/resque_monit/tasks/monit.rake
+ - lib/capistrano/resque_monit/tasks/resque.rake
  - lib/capistrano/resque_monit/version.rb
  - templates/etc/init.d/monit
  - templates/etc/init.d/resque_scheduler
  - templates/etc/monit.d/alert
  - templates/etc/monit.d/logging
- - templates/etc/monit.d/nginx
+ - templates/etc/monit.d/mailserver
  - templates/etc/monit.d/redis
- - templates/monitrc
- - templates/redis.conf
+ - templates/etc/monitrc
  - templates/resque_initd
  - templates/resque_monitd
  - templates/usr/local/bin/redis-check-queue
data/lib/capistrano/resque_monit/monit.rb DELETED
@@ -1,70 +0,0 @@
- after 'monit:config', 'monit:restart'
-
- namespace :monit do
-
-   task :config, :except => { :no_release => true } do
-     run 'sudo chkconfig monit on'
-   end
-
-   task :status do
-     run 'sudo monit status'
-   end
-
-   task :log do
-     run 'sudo tail -f /var/log/monit'
-   end
-
-   task :start do
-     run 'sudo monit start all'
-   end
-
-   task :stop do
-     run 'sudo monit stop all'
-   end
-
-   task :restart, :except => { :no_release => true } do
-     run 'sudo monit reload'
-     run 'sudo monit summary all'
-   end
- end
-
- def sed_initd(file, role)
-   sed_template "templates/etc/init.d/#{file}", {
-     CURRENT: current_path,
-     PIDFILE: "tmp/pids/#{file}.pid",
-     RAILSENV: rails_env,
-     GEMHOME: "#{rvm_path}/gems/#{rvm_ruby_string}",
-   }, '/etc/init.d/#{file}'
- end
-
- def sed_monitd(file, role)
-   sed_template "templates/etc/monit.d/#{file}", {
-     HOST: server_name,
-     EMAIL: monit_email
-   }, '/etc/monit.d/#{file}'
- end
-
- def sed_monitrc
-   sed_template "templates/monitrc", {
-     USER: monit_user,
-     PASSWORD: monit_password,
-     URL: monit_url
-   }, '/etc/monitrc'
-   run 'sudo chmod 600 /etc/monitrc'
- end
-
- def sed_bin(file, role)
-   resque_config = YAML.load_file('config/resque_monit.yml')
-   (host, port) = resque_config[rails_env].split ':'
-   sed_template "templates/usr/local/bin/#{file}", {
-     RESQUE_HOST: host,
-     RESQUE_PORT: port
-   }, '/usr/local/bin/#{file}'
-   run "sudo chmod 755 /usr/local/bin/#{file}"
- end
-
- def sed_template file, values, dest
-   cmds = values.map { |k, v| "-e 's/%#{k}%/#{v.gsub(%r(/), '\\/')}/g'" }.join ' '
-   run "cd #{deploy_to}/current && sudo sed #{cmds} #{file} > #{dest}"
- end
-
data/lib/capistrano/resque_monit/resque.rb DELETED
@@ -1,58 +0,0 @@
- # Requires that a :worker role is defined in your configuration
-
- # TODO: Update template paths
-
- after 'deploy', 'resque_monit:restart'
- after 'deploy:migrations', 'resque_monit:restart'
-
- namespace :resque_monit do
-
-   task :restart, roles: :worker, :except => { :no_release => true } do
-     run 'sudo monit reload'
-     sleep 2
-     run "sudo monit -g #{resque_prefix}_resque_workers restart"
-     run "sudo monit -g #{resque_prefix}_resque_workers summary"
-   end
-
- end
-
- def resque_template(filename, values)
-   template = File.open(File.join('templates', filename)).read
-   template.gsub(/#\{([^}]+)\}/) { |m| values[$1.to_sym] }
- end
-
-
- def resque_worker_monitd(file, options = {})
-   file = "#{file}_#{resque_prefix}"
-
-   mem = options[:totalmem] || '675'
-
-   depends = []
-   depends << 'redis'
-   depends << options[:depends]
-   depends.flatten!
-   depends.compact!
-   depends = depends.empty? ? '' : "depends on #{depends.join(', ')}"
-
-   script = resque_template('resque_monitd', depends: depends, file: file, current_path: current_path, mem: mem, resque_prefix: resque_prefix)
-
-   put script, "#{current_path}/tmp/#{file}", :mode => 0644
-   run "sudo mv #{current_path}/tmp/#{file} /etc/monit.d/#{file}"
-   run "sudo chown root:root /etc/monit.d/#{file}"
- end
-
-
- def resque_worker_initd(worker, options = {})
-
-   queue = options[:queue] || worker
-
-   file = "resque_worker"
-   file += "_#{resque_prefix}"
-   file += "_#{worker}"
-
-   script = resque_template('resque_initd', rvm_path: rvm_path, rvm_ruby_string: rvm_ruby_string, current_path: current_path, rails_env: rails_env, queue: queue, file: file)
-
-   put script, "#{current_path}/tmp/#{file}", :mode => 0755
-   run "sudo mv #{current_path}/tmp/#{file} /etc/init.d/#{file}"
-   run "sudo chown root:root /etc/init.d/#{file}"
- end
data/templates/etc/monit.d/nginx DELETED
@@ -1,4 +0,0 @@
- check host nginx address %HOST%
-   if failed host %HOST% port 80 then alert
-   start = "/sbin/service nginx start" with timeout 60 seconds
-   stop = "/sbin/service nginx stop"
data/templates/redis.conf DELETED
@@ -1,540 +0,0 @@
- # Redis configuration file example
-
- # Note on units: when memory size is needed, it is possible to specify
- # it in the usual form of 1k 5GB 4M and so forth:
- #
- # 1k => 1000 bytes
- # 1kb => 1024 bytes
- # 1m => 1000000 bytes
- # 1mb => 1024*1024 bytes
- # 1g => 1000000000 bytes
- # 1gb => 1024*1024*1024 bytes
- #
- # units are case insensitive so 1GB 1Gb 1gB are all the same.
-
- # By default Redis does not run as a daemon. Use 'yes' if you need it.
- # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
- daemonize yes
-
- # When running daemonized, Redis writes a pid file in /var/run/redis.pid by
- # default. You can specify a custom pid file location here.
- pidfile /var/run/redis/redis.pid
-
- # Accept connections on the specified port, default is 6379.
- # If port 0 is specified Redis will not listen on a TCP socket.
- port 6379
-
- # If you want you can bind a single interface, if the bind option is not
- # specified all the interfaces will listen for incoming connections.
- #
- #bind 127.0.0.1
-
- # Specify the path for the unix socket that will be used to listen for
- # incoming connections. There is no default, so Redis will not listen
- # on a unix socket when not specified.
- #
- # unixsocket /tmp/redis.sock
- # unixsocketperm 755
-
- # Close the connection after a client is idle for N seconds (0 to disable)
- timeout 300
-
- # Set server verbosity to 'debug'
- # it can be one of:
- # debug (a lot of information, useful for development/testing)
- # verbose (many rarely useful info, but not a mess like the debug level)
- # notice (moderately verbose, what you want in production probably)
- # warning (only very important / critical messages are logged)
- loglevel notice
-
- # Specify the log file name. Also 'stdout' can be used to force
- # Redis to log on the standard output. Note that if you use standard
- # output for logging but daemonize, logs will be sent to /dev/null
- logfile /var/log/redis/redis.log
-
- # To enable logging to the system logger, just set 'syslog-enabled' to yes,
- # and optionally update the other syslog parameters to suit your needs.
- # syslog-enabled no
-
- # Specify the syslog identity.
- # syslog-ident redis
-
- # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
- # syslog-facility local0
-
- # Set the number of databases. The default database is DB 0, you can select
- # a different one on a per-connection basis using SELECT <dbid> where
- # dbid is a number between 0 and 'databases'-1
- databases 16
-
- ################################ SNAPSHOTTING #################################
- #
- # Save the DB on disk:
- #
- # save <seconds> <changes>
- #
- # Will save the DB if both the given number of seconds and the given
- # number of write operations against the DB occurred.
- #
- # In the example below the behaviour will be to save:
- # after 900 sec (15 min) if at least 1 key changed
- # after 300 sec (5 min) if at least 10 keys changed
- # after 60 sec if at least 10000 keys changed
- #
- # Note: you can disable saving at all commenting all the "save" lines.
- #
- # It is also possible to remove all the previously configured save
- # points by adding a save directive with a single empty string argument
- # like in the following example:
- #
- # save ""
-
- save 900 1
- save 300 10
- save 60 10000
-
- # By default Redis will stop accepting writes if RDB snapshots are enabled
- # (at least one save point) and the latest background save failed.
- # This will make the user aware (in an hard way) that data is not persisting
- # on disk properly, otherwise chances are that no one will notice and some
- # distater will happen.
- #
- # If the background saving process will start working again Redis will
- # automatically allow writes again.
- #
- # However if you have setup your proper monitoring of the Redis server
- # and persistence, you may want to disable this feature so that Redis will
- # continue to work as usually even if there are problems with disk,
- # permissions, and so forth.
- stop-writes-on-bgsave-error yes
-
- # Compress string objects using LZF when dump .rdb databases?
- # For default that's set to 'yes' as it's almost always a win.
- # If you want to save some CPU in the saving child set it to 'no' but
- # the dataset will likely be bigger if you have compressible values or keys.
- rdbcompression yes
-
- # Since verison 5 of RDB a CRC64 checksum is placed at the end of the file.
- # This makes the format more resistant to corruption but there is a performance
- # hit to pay (around 10%) when saving and loading RDB files, so you can disable it
- # for maximum performances.
- #
- # RDB files created with checksum disabled have a checksum of zero that will
- # tell the loading code to skip the check.
- rdbchecksum yes
-
- # The filename where to dump the DB
- dbfilename dump.rdb
-
- # The working directory.
- #
- # The DB will be written inside this directory, with the filename specified
- # above using the 'dbfilename' configuration directive.
- #
- # Also the Append Only File will be created inside this directory.
- #
- # Note that you must specify a directory here, not a file name.
- dir /var/lib/redis/
-
- ################################# REPLICATION #################################
-
- # Master-Slave replication. Use slaveof to make a Redis instance a copy of
- # another Redis server. Note that the configuration is local to the slave
- # so for example it is possible to configure the slave to save the DB with a
- # different interval, or to listen to another port, and so on.
- #
- # slaveof <masterip> <masterport>
-
- # If the master is password protected (using the "requirepass" configuration
- # directive below) it is possible to tell the slave to authenticate before
- # starting the replication synchronization process, otherwise the master will
- # refuse the slave request.
- #
- # masterauth <master-password>
-
- # When a slave lost the connection with the master, or when the replication
- # is still in progress, the slave can act in two different ways:
- #
- # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
- #    still reply to client requests, possibly with out of date data, or the
- #    data set may just be empty if this is the first synchronization.
- #
- # 2) if slave-serve-stale data is set to 'no' the slave will reply with
- #    an error "SYNC with master in progress" to all the kind of commands
- #    but to INFO and SLAVEOF.
- #
- slave-serve-stale-data yes
-
- # You can configure a slave instance to accept writes or not. Writing against
- # a slave instance may be useful to store some ephemeral data (because data
- # written on a slave will be easily deleted after resync with the master) but
- # may also cause problems if clients are writing to it because of a
- # misconfiguration.
- #
- # Since Redis 2.6 by default slaves are read-only.
- #
- # Note: read only slaves are not designed to be exposed to untrusted clients
- # on the internet. It's just a protection layer against misuse of the instance.
- # Still a read only slave exports by default all the administrative commands
- # such as CONFIG, DEBUG, and so forth. To a limited extend you can improve
- # security of read only slaves using 'rename-command' to shadow all the
- # administrative / dangerous commands.
- slave-read-only yes
-
- # Slaves send PINGs to server in a predefined interval. It's possible to change
- # this interval with the repl_ping_slave_period option. The default value is 10
- # seconds.
- #
- # repl-ping-slave-period 10
-
- # The following option sets a timeout for both Bulk transfer I/O timeout and
- # master data or ping response timeout. The default value is 60 seconds.
- #
- # It is important to make sure that this value is greater than the value
- # specified for repl-ping-slave-period otherwise a timeout will be detected
- # every time there is low traffic between the master and the slave.
- #
- # repl-timeout 60
-
- # The slave priority is an integer number published by Redis in the INFO output.
- # It is used by Redis Sentinel in order to select a slave to promote into a
- # master if the master is no longer working correctly.
- #
- # A slave with a low priority number is considered better for promotion, so
- # for instance if there are three slaves with priority 10, 100, 25 Sentinel will
- # pick the one wtih priority 10, that is the lowest.
- #
- # However a special priority of 0 marks the slave as not able to perform the
- # role of master, so a slave with priority of 0 will never be selected by
209
- # Redis Sentinel for promotion.
210
- #
211
- # By default the priority is 100.
212
- slave-priority 100
213
-
214
- ################################## SECURITY ###################################
215
-
216
- # Require clients to issue AUTH <PASSWORD> before processing any other
217
- # commands. This might be useful in environments in which you do not trust
218
- # others with access to the host running redis-server.
219
- #
220
- # This should stay commented out for backward compatibility and because most
221
- # people do not need auth (e.g. they run their own servers).
222
- #
223
- # Warning: since Redis is pretty fast an outside user can try up to
224
- # 150k passwords per second against a good box. This means that you should
225
- # use a very strong password otherwise it will be very easy to break.
226
- #
227
- # requirepass foobared
228
-
229
- # Command renaming.
230
- #
231
- # It is possible to change the name of dangerous commands in a shared
232
- # environment. For instance the CONFIG command may be renamed into something
233
- # of hard to guess so that it will be still available for internal-use
234
- # tools but not available for general clients.
235
- #
236
- # Example:
237
- #
238
- # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
239
- #
240
- # It is also possible to completely kill a command renaming it into
241
- # an empty string:
242
- #
243
- # rename-command CONFIG ""
244
-
245
- ################################### LIMITS ####################################
246
-
247
- # Set the max number of connected clients at the same time. By default
248
- # this limit is set to 10000 clients, however if the Redis server is not
249
- # able ot configure the process file limit to allow for the specified limit
250
- # the max number of allowed clients is set to the current file limit
251
- # minus 32 (as Redis reserves a few file descriptors for internal uses).
252
- #
253
- # Once the limit is reached Redis will close all the new connections sending
254
- # an error 'max number of clients reached'.
255
- #
256
- # maxclients 10000
257
-
258
- # Don't use more memory than the specified amount of bytes.
259
- # When the memory limit is reached Redis will try to remove keys
260
- # accordingly to the eviction policy selected (see maxmemmory-policy).
261
- #
262
- # If Redis can't remove keys according to the policy, or if the policy is
263
- # set to 'noeviction', Redis will start to reply with errors to commands
264
- # that would use more memory, like SET, LPUSH, and so on, and will continue
265
- # to reply to read-only commands like GET.
266
- #
267
- # This option is usually useful when using Redis as an LRU cache, or to set
268
- # an hard memory limit for an instance (using the 'noeviction' policy).
269
- #
270
- # WARNING: If you have slaves attached to an instance with maxmemory on,
271
- # the size of the output buffers needed to feed the slaves are subtracted
272
- # from the used memory count, so that network problems / resyncs will
273
- # not trigger a loop where keys are evicted, and in turn the output
274
- # buffer of slaves is full with DELs of keys evicted triggering the deletion
275
- # of more keys, and so forth until the database is completely emptied.
276
- #
277
- # In short... if you have slaves attached it is suggested that you set a lower
278
- # limit for maxmemory so that there is some free RAM on the system for slave
279
- # output buffers (but this is not needed if the policy is 'noeviction').
280
- #
281
- # maxmemory <bytes>
282
-
283
- # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
284
- # is reached? You can select among five behavior:
285
- #
286
- # volatile-lru -> remove the key with an expire set using an LRU algorithm
287
- # allkeys-lru -> remove any key accordingly to the LRU algorithm
288
- # volatile-random -> remove a random key with an expire set
289
- # allkeys-random -> remove a random key, any key
290
- # volatile-ttl -> remove the key with the nearest expire time (minor TTL)
291
- # noeviction -> don't expire at all, just return an error on write operations
292
- #
293
- # Note: with all the kind of policies, Redis will return an error on write
294
- # operations, when there are not suitable keys for eviction.
295
- #
296
- # At the date of writing this commands are: set setnx setex append
297
- # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
298
- # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
299
- # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
300
- # getset mset msetnx exec sort
301
- #
302
- # The default is:
303
- #
304
- # maxmemory-policy volatile-lru
305
-
306
- # LRU and minimal TTL algorithms are not precise algorithms but approximated
307
- # algorithms (in order to save memory), so you can select as well the sample
308
- # size to check. For instance for default Redis will check three keys and
309
- # pick the one that was used less recently, you can change the sample size
310
- # using the following configuration directive.
311
- #
312
- # maxmemory-samples 3
313
-
314
- ############################## APPEND ONLY MODE ###############################
315
-
316
- # By default Redis asynchronously dumps the dataset on disk. This mode is
317
- # good enough in many applications, but an issue with the Redis process or
318
- # a power outage may result into a few minutes of writes lost (depending on
319
- # the configured save points).
320
- #
321
- # The Append Only File is an alternative persistence mode that provides
322
- # much better durability. For instance using the default data fsync policy
323
- # (see later in the config file) Redis can lose just one second of writes in a
324
- # dramatic event like a server power outage, or a single write if something
325
- # wrong with the Redis process itself happens, but the operating system is
326
- # still running correctly.
327
- #
328
- # AOF and RDB persistence can be enabled at the same time without problems.
329
- # If the AOF is enabled on startup Redis will load the AOF, that is the file
330
- # with the better durability guarantees.
331
- #
332
- # Please check http://redis.io/topics/persistence for more information.
333
-
334
- appendonly no
335
-
336
- # The name of the append only file (default: "appendonly.aof")
337
- # appendfilename appendonly.aof
338
-
339
- # The fsync() call tells the Operating System to actually write data on disk
340
- # instead to wait for more data in the output buffer. Some OS will really flush
341
- # data on disk, some other OS will just try to do it ASAP.
342
- #
343
- # Redis supports three different modes:
344
- #
345
- # no: don't fsync, just let the OS flush the data when it wants. Faster.
346
- # always: fsync after every write to the append only log . Slow, Safest.
347
- # everysec: fsync only one time every second. Compromise.
348
- #
349
- # The default is "everysec" that's usually the right compromise between
350
- # speed and data safety. It's up to you to understand if you can relax this to
351
- # "no" that will let the operating system flush the output buffer when
352
- # it wants, for better performances (but if you can live with the idea of
353
- # some data loss consider the default persistence mode that's snapshotting),
354
- # or on the contrary, use "always" that's very slow but a bit safer than
355
- # everysec.
356
- #
357
- # More details please check the following article:
358
- # http://antirez.com/post/redis-persistence-demystified.html
359
- #
360
- # If unsure, use "everysec".
361
-
362
- # appendfsync always
363
- appendfsync everysec
364
- # appendfsync no
365
-
366
- # When the AOF fsync policy is set to always or everysec, and a background
367
- # saving process (a background save or AOF log background rewriting) is
368
- # performing a lot of I/O against the disk, in some Linux configurations
369
- # Redis may block too long on the fsync() call. Note that there is no fix for
370
- # this currently, as even performing fsync in a different thread will block
371
- # our synchronous write(2) call.
372
- #
373
- # In order to mitigate this problem it's possible to use the following option
374
- # that will prevent fsync() from being called in the main process while a
375
- # BGSAVE or BGREWRITEAOF is in progress.
376
- #
377
- # This means that while another child is saving the durability of Redis is
378
- # the same as "appendfsync none", that in practical terms means that it is
379
- # possible to lost up to 30 seconds of log in the worst scenario (with the
380
- # default Linux settings).
381
- #
382
- # If you have latency problems turn this to "yes". Otherwise leave it as
383
- # "no" that is the safest pick from the point of view of durability.
384
- no-appendfsync-on-rewrite no
385
-
386
- # Automatic rewrite of the append only file.
387
- # Redis is able to automatically rewrite the log file implicitly calling
388
- # BGREWRITEAOF when the AOF log size will growth by the specified percentage.
389
- #
390
- # This is how it works: Redis remembers the size of the AOF file after the
391
- # latest rewrite (or if no rewrite happened since the restart, the size of
392
- # the AOF at startup is used).
393
- #
394
- # This base size is compared to the current size. If the current size is
395
- # bigger than the specified percentage, the rewrite is triggered. Also
396
- # you need to specify a minimal size for the AOF file to be rewritten, this
397
- # is useful to avoid rewriting the AOF file even if the percentage increase
398
- # is reached but it is still pretty small.
399
- #
400
- # Specify a percentage of zero in order to disable the automatic AOF
401
- # rewrite feature.
402
-
403
- auto-aof-rewrite-percentage 100
404
- auto-aof-rewrite-min-size 64mb
405
-
406
- ################################ LUA SCRIPTING ###############################
407
-
408
- # Max execution time of a Lua script in milliseconds.
409
- #
410
- # If the maximum execution time is reached Redis will log that a script is
411
- # still in execution after the maximum allowed time and will start to
412
- # reply to queries with an error.
413
- #
414
- # When a long running script exceed the maximum execution time only the
415
- # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
416
- # used to stop a script that did not yet called write commands. The second
417
- # is the only way to shut down the server in the case a write commands was
418
- # already issue by the script but the user don't want to wait for the natural
419
- # termination of the script.
420
- #
421
- # Set it to 0 or a negative value for unlimited execution without warnings.
422
- lua-time-limit 5000
423
-
424
- ################################## SLOW LOG ###################################
425
-
426
- # The Redis Slow Log is a system to log queries that exceeded a specified
427
- # execution time. The execution time does not include the I/O operations
428
- # like talking with the client, sending the reply and so forth,
429
- # but just the time needed to actually execute the command (this is the only
430
- # stage of command execution where the thread is blocked and can not serve
431
- # other requests in the meantime).
432
- #
433
- # You can configure the slow log with two parameters: one tells Redis
434
- # what is the execution time, in microseconds, to exceed in order for the
435
- # command to get logged, and the other parameter is the length of the
436
- # slow log. When a new command is logged the oldest one is removed from the
437
- # queue of logged commands.
438
-
439
- # The following time is expressed in microseconds, so 1000000 is equivalent
440
- # to one second. Note that a negative number disables the slow log, while
441
- # a value of zero forces the logging of every command.
442
- slowlog-log-slower-than 10000
443
-
444
- # There is no limit to this length. Just be aware that it will consume memory.
445
- # You can reclaim memory used by the slow log with SLOWLOG RESET.
446
- slowlog-max-len 128
447
-
448
- ############################### ADVANCED CONFIG ###############################
449
-
450
- # Hashes are encoded using a memory efficient data structure when they have a
451
- # small number of entries, and the biggest entry does not exceed a given
452
- # threshold. These thresholds can be configured using the following directives.
453
- hash-max-ziplist-entries 512
454
- hash-max-ziplist-value 64
455
-
456
- # Similarly to hashes, small lists are also encoded in a special way in order
457
- # to save a lot of space. The special representation is only used when
458
- # you are under the following limits:
459
- list-max-ziplist-entries 512
460
- list-max-ziplist-value 64
461
-
462
- # Sets have a special encoding in just one case: when a set is composed
463
- # of just strings that happens to be integers in radix 10 in the range
464
- # of 64 bit signed integers.
465
- # The following configuration setting sets the limit in the size of the
466
- # set in order to use this special memory saving encoding.
467
- set-max-intset-entries 512
468
-
469
- # Similarly to hashes and lists, sorted sets are also specially encoded in
470
- # order to save a lot of space. This encoding is only used when the length and
471
- # elements of a sorted set are below the following limits:
472
- zset-max-ziplist-entries 128
473
- zset-max-ziplist-value 64
474
-
475
- # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
476
- # order to help rehashing the main Redis hash table (the one mapping top-level
477
- # keys to values). The hash table implementation Redis uses (see dict.c)
478
- # performs a lazy rehashing: the more operation you run into an hash table
479
- # that is rehashing, the more rehashing "steps" are performed, so if the
480
- # server is idle the rehashing is never complete and some more memory is used
481
- # by the hash table.
482
- #
483
- # The default is to use this millisecond 10 times every second in order to
484
- # active rehashing the main dictionaries, freeing memory when possible.
485
- #
486
- # If unsure:
487
- # use "activerehashing no" if you have hard latency requirements and it is
488
- # not a good thing in your environment that Redis can reply form time to time
489
- # to queries with 2 milliseconds delay.
490
- #
491
- # use "activerehashing yes" if you don't have such hard requirements but
492
- # want to free memory asap when possible.
493
- activerehashing yes
494
-
495
- # The client output buffer limits can be used to force disconnection of clients
496
- # that are not reading data from the server fast enough for some reason (a
497
- # common reason is that a Pub/Sub client can't consume messages as fast as the
498
- # publisher can produce them).
499
- #
500
- # The limit can be set differently for the three different classes of clients:
501
- #
502
- # normal -> normal clients
503
- # slave -> slave clients and MONITOR clients
504
- # pubsub -> clients subcribed to at least one pubsub channel or pattern
505
- #
506
- # The syntax of every client-output-buffer-limit directive is the following:
507
- #
508
- # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
509
- #
510
- # A client is immediately disconnected once the hard limit is reached, or if
511
- # the soft limit is reached and remains reached for the specified number of
512
- # seconds (continuously).
513
- # So for instance if the hard limit is 32 megabytes and the soft limit is
514
- # 16 megabytes / 10 seconds, the client will get disconnected immediately
515
- # if the size of the output buffers reach 32 megabytes, but will also get
516
- # disconnected if the client reaches 16 megabytes and continuously overcomes
517
- # the limit for 10 seconds.
518
- #
519
- # By default normal clients are not limited because they don't receive data
520
- # without asking (in a push way), but just after a request, so only
521
- # asynchronous clients may create a scenario where data is requested faster
522
- # than it can read.
523
- #
524
- # Instead there is a default limit for pubsub and slave clients, since
525
- # subscribers and slaves receive data in a push fashion.
526
- #
527
- # Both the hard or the soft limit can be disabled just setting it to zero.
528
- client-output-buffer-limit normal 0 0 0
529
- client-output-buffer-limit slave 256mb 64mb 60
530
- client-output-buffer-limit pubsub 32mb 8mb 60
531
-
532
- ################################## INCLUDES ###################################
533
-
534
- # Include one or more other config files here. This is useful if you
535
- # have a standard template that goes to all Redis server but also need
536
- # to customize a few per-server settings. Include files can include
537
- # other files, so use this wisely.
538
- #
539
- # include /path/to/local.conf
540
- # include /path/to/other.conf