resque-data-warehouse 0.1.0

data/HISTORY.md ADDED
@@ -0,0 +1,3 @@
+ ## 0.1.0 (2011-01-20)
+
+ * Initial version.
data/LICENSE ADDED
@@ -0,0 +1,20 @@
+ Copyright (c) 2011 Monica McArthur (mechaferret@gmail.com)
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,69 @@
+ Resque Data Warehouse
+ =====================
+
+ A [Resque][rq] plugin. Requires Resque 1.9.10.
+
+ resque-data-warehouse lets you use Redis to queue up, and Resque to process, transactions
+ on transaction-heavy tables that need to be replicated to other tables optimized for
+ reporting.
+
+ Transactions for a given object (class name + ID) are queued up behind a Redis key
+ and then processed by Resque jobs. If load is low, each transaction is processed
+ almost immediately after it occurs; at higher loads, multiple transactions queue up
+ before the Resque job gets to them, and only the last transaction is applied to the
+ data warehouse table. This minimizes database load and dynamically adjusts the copy
+ delay to match the current load.
+
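+ For illustration, here is roughly what Redis holds for a busy record just before a worker on the
+ `transaction` queue gets to it (key names follow transaction_record.rb; the exact JSON encoding
+ of the timestamp depends on your Rails serialization settings):
+
+     # each pending transaction for Transactional id 42 is one JSON-encoded
+     # [updated_at, attributes, action] entry on a per-record list
+     Resque.redis.lrange("Transactional_42", 0, -1)
+     # => ["[<updated_at>, {<attributes>}, \"save\"]",
+     #     "[<updated_at>, {<attributes>}, \"save\"]"]
+
+     # a worker takes the "Transactional_42_lock" key, drains the list, and applies
+     # only the newest entry to TransactionalFact
+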
+ This only works with Rails. It has only been tested with Rails 2.3.4, which also requires
+ the after_commit gem.
+
+ Usage / Examples
+ ----------------
+
+ Suppose you have a model class Transactional that gets a lot of traffic, and you want a counterpart
+ to this class, TransactionalFact, in a data warehouse. All you need to do is the following:
+
+ * Create an ActiveRecord model named TransactionalFact, with all the attributes (columns) you want in it.
+ * Add a method to TransactionalFact named execute_transaction. It can assume that all the fields in the fact
+ that match the original Transactional are already set; it should save them and then perform any additional
+ logic (denormalization, etc.) to update the remaining fields.
+ * Require 'resque-data-warehouse' in Transactional and add a `warehoused` line.
+
+ Very simple examples of both classes:
+
+     class Transactional < ActiveRecord::Base
+       require 'resque-data-warehouse'
+       warehoused
+     end
+
+     class TransactionalFact < ActiveRecord::Base
+       def execute_transaction
+         self.save
+         # Any additional logic here
+       end
+     end
+
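+ Once this is in place, saving a Transactional record queues the copy, and any Resque worker
+ listening on the `transaction` queue applies it. A minimal sketch, mirroring what the test suite
+ does (the rake invocation assumes resque's standard rake tasks are loaded):
+
+     t = Transactional.create(:name => 'Test 1', :description => 'First transaction')
+
+     # in another process: QUEUE=transaction rake resque:work
+     # or inline, as the tests do:
+     Resque::Worker.new(:transaction).process
+
+     TransactionalFact.find(t.id)   # => the warehoused copy
+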
+ Customize & Extend
+ ==================
+
+ No customizations are available right now, but some obvious ones (such as how the name of the
+ warehoused class is derived; currently it is simply the original class name with "Fact" appended)
+ are likely to arrive soon.
+
+ Install
+ =======
+
+ ### As a gem
+
+     $ gem install resque-data-warehouse
+
+ ### In a Rails app, as a plugin
+
+     $ ./script/plugin install git://github.com/mechaferret/resque-data-warehouse
+
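+ If you use the gem in a Rails 2.3 application, you can also declare it (an untested sketch;
+ adjust to your setup) inside the existing Rails::Initializer.run block in config/environment.rb
+ so that it is loaded on boot:
+
+     Rails::Initializer.run do |config|
+       # ...
+       config.gem 'resque', :version => '>= 1.9.10'
+       config.gem 'after_commit'
+       config.gem 'resque-data-warehouse'
+     end
+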
+ Acknowledgements
+ ================
+
+ Thanks to Resque and Redis for making this work possible.
+
+ [rq]: http://github.com/defunkt/resque
data/Rakefile ADDED
@@ -0,0 +1,10 @@
+ require 'rake/testtask'
+
+ task :default => :test
+
+ desc 'Run tests.'
+ Rake::TestTask.new(:test) do |task|
+   task.test_files = FileList['test/*_test.rb']
+   task.verbose = true
+ end
+
data/lib/resque-data-warehouse.rb ADDED
@@ -0,0 +1,2 @@
+ require 'resque/plugins/data_warehouse'
+ # Mixing into the top-level object makes the `warehoused` macro available in any class body.
+ self.send(:include, Resque::Plugins::DataWarehouse)
data/lib/resque/plugins/data_warehouse.rb ADDED
@@ -0,0 +1,32 @@
+ module Resque
+   module Plugins
+     #
+     # DataWarehouse: adds the `warehoused` macro, which queues a copy of each
+     # committed save (and each destroy) for later processing by a Resque worker.
+     #
+     module DataWarehouse
+       Dir[File.dirname(__FILE__) + '/data_warehouse/*.rb'].each{|g| require g}
+
+       def self.included(base)
+         base.extend ClassMethods
+       end
+
+       module ClassMethods
+         def warehoused
+           include InstanceMethods
+           after_commit_on_save :record_to_fact
+           after_destroy :destroy_fact
+         end
+       end
+
+       module InstanceMethods
+         def record_to_fact
+           DataWarehouse::Transaction.new.enqueue(self)
+         end
+
+         def destroy_fact
+           DataWarehouse::Transaction.new.enqueue(self, 'delete')
+         end
+       end
+
+     end
+   end
+ end
data/lib/resque/plugins/data_warehouse/fact.rb ADDED
@@ -0,0 +1,20 @@
+ module Resque
+   module Plugins
+     module DataWarehouse
+
+       module Fact
+         # Finds (or builds) the corresponding "#{type}Fact" record and copies over
+         # the attributes it shares with the source model.
+         def self.find(type, values)
+           klass = "#{type}Fact".constantize
+           fact = klass.send(:find, values["id"]) rescue nil
+           fact = klass.new if fact.nil?
+           fact.id = values["id"]
+           values.delete("id")
+           values.delete_if{|key,value| !fact.attribute_names.include?(key)}
+           fact.attributes = values
+           fact
+         end
+       end
+
+     end
+   end
+ end
data/lib/resque/plugins/data_warehouse/transaction.rb ADDED
@@ -0,0 +1,48 @@
+ module Resque
+   module Plugins
+     module DataWarehouse
+       class Transaction
+         @queue = :transaction
+
+         # Resque job: drain every pending transaction for the record, merge them so
+         # only the newest one survives, and apply it to the fact table.
+         def self.perform(transaction_id, transaction_type, transaction_date)
+           puts "Whee-hee, we're gonna work on a transaction for #{transaction_type} ID #{transaction_id} on #{transaction_date}\n"
+           redis = Resque.redis
+           record = TransactionRecord.new(transaction_id, transaction_type)
+           got_lock = false
+           retries = 0
+           while (!got_lock && retries<2)
+             if (got_lock=record.get_lock)
+               num_trans = redis.llen(record.transaction_key)
+               puts "we'll be processing #{num_trans} transactions\n"
+               num_trans_actual = 0
+               while (data = redis.lpop(record.transaction_key))
+                 num_trans_actual = num_trans_actual+1
+                 puts "transaction\n"
+                 next_record = TransactionRecord.new(transaction_id, transaction_type).from_json(data)
+                 puts "next_record is #{next_record.inspect}\n"
+                 record = record.merge(next_record)
+                 puts "merged is #{record.inspect}\n"
+               end
+               puts "Read #{num_trans_actual} transactions"
+               puts "Final trans #{record.inspect}\n"
+               record.execute unless num_trans_actual==0
+               puts "done!"
+               record.release_lock
+             else
+               retries = retries+1
+             end
+           end
+         end
+
+         # Push the model's current state onto its per-record Redis list and enqueue
+         # a job to process it.
+         def enqueue(model, action = 'save')
+           record = TransactionRecord.new(model.id, model.class.to_s, model.updated_at, model.attributes, action)
+           Resque.redis.rpush(record.transaction_key, record.transaction_data.to_json)
+           Resque.enqueue(self.class, model.id, model.class.to_s, model.updated_at)
+         rescue Exception => ex
+           puts "transaction failing due to exception #{ex.inspect} #{ex.backtrace.join("\n")}"
+         end
+
+       end
+     end
+   end
+ end
data/lib/resque/plugins/data_warehouse/transaction_record.rb ADDED
@@ -0,0 +1,79 @@
+ module Resque
+   module Plugins
+     module DataWarehouse
+       class TransactionRecord
+         attr_accessor :id, :type, :date, :values, :action
+
+         def initialize(id, type, date = nil, values = nil, action = 'save')
+           self.id = id
+           self.type = type
+           self.date = date
+           self.values = values
+           self.action = action
+           self
+         end
+
+         def from_json(data)
+           data_array = JSON.parse(data)
+           self.date = Time.parse(data_array[0])
+           self.values = data_array[1]
+           self.action = data_array[2]
+           self
+         end
+
+         # Redis list holding the pending transactions for this record.
+         def transaction_key
+           "#{self.type}_#{self.id}"
+         end
+
+         def transaction_data
+           [self.date, self.values, self.action]
+         end
+
+         def fact
+           @fact ||= Fact.find(self.type, self.values)
+         end
+
+         def get_lock
+           redis = Resque.redis
+           if redis.setnx("#{self.transaction_key}_lock", 1)
+             puts "got lock"
+             return true
+           else
+             puts "no lock"
+             return false
+           end
+         end
+
+         def release_lock
+           redis = Resque.redis
+           redis.del("#{self.transaction_key}_lock")
+           puts "released lock"
+         end
+
+         def execute
+           if self.action=='delete'
+             self.fact.send("destroy")
+           else
+             self.fact.send("execute_transaction")
+           end
+         end
+
+         def empty?
+           self.id.blank? || self.type.blank? || self.date.blank?
+         end
+
+         # Returns a record carrying whichever of the two transactions is newer.
+         def merge(t2)
+           if t2.empty?
+             return self
+           elsif self.empty?
+             return t2
+           end
+           d = (self.date <= t2.date) ? t2.date : self.date
+           data = (self.date <= t2.date) ? t2.values : self.values
+           action = (self.date <= t2.date) ? t2.action : self.action
+           TransactionRecord.new(self.id, self.type, d, data, action)
+         end
+       end
+     end
+   end
+ end
data/test/database.yml ADDED
@@ -0,0 +1,6 @@
+ mysql:
+   :adapter: mysql
+   :host: localhost
+   :username: root
+   :password: root
+   :database: dw_test
data/test/debug.log ADDED
@@ -0,0 +1,16 @@
+ # Logfile created on Thu Jan 20 18:20:02 -0800 2011 by logger.rb/22285
+ SQL (0.2ms) SET SQL_AUTO_IS_NULL=0
+ SQL (0.3ms) SHOW TABLES
+ SQL (430.6ms) CREATE TABLE `transactionals` (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY, `name` varchar(255), `description` varchar(255), `other_id` int(11)) ENGINE=InnoDB
+ SQL (0.4ms) SHOW TABLES
+ SQL (114.4ms) CREATE TABLE `transactional_facts` (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY, `name` varchar(255), `description` varchar(255), `other_id` int(11)) ENGINE=InnoDB
+ SQL (0.4ms) SHOW TABLES
+ SQL (102.1ms) CREATE TABLE `schema_migrations` (`version` varchar(255) NOT NULL) ENGINE=InnoDB
+ SQL (172.5ms) CREATE UNIQUE INDEX `unique_schema_migrations` ON `schema_migrations` (`version`)
+ SQL (0.4ms) SHOW TABLES
+ SQL (0.1ms) SELECT version FROM `schema_migrations`
+ SQL (0.4ms) INSERT INTO `schema_migrations` (version) VALUES ('3')
+ Transactional Columns (38.0ms) SHOW FIELDS FROM `transactionals`
+ SQL (0.3ms) BEGIN
+ Transactional Create (0.3ms) INSERT INTO `transactionals` (`name`, `other_id`, `description`) VALUES(NULL, NULL, NULL)
+ SQL (0.7ms) COMMIT
data/test/redis-test.conf ADDED
@@ -0,0 +1,132 @@
+ # Redis configuration file example
+
+ # By default Redis does not run as a daemon. Use 'yes' if you need it.
+ # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
+ daemonize yes
+
+ # When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.
+ # You can specify a custom pid file location here.
+ pidfile ./test/redis-test.pid
+
+ # Accept connections on the specified port, default is 6379
+ port 9736
+
+ # If you want you can bind a single interface, if the bind option is not
+ # specified all the interfaces will listen for connections.
+ #
+ # bind 127.0.0.1
+
+ # Close the connection after a client is idle for N seconds (0 to disable)
+ timeout 300
+
+ # Save the DB on disk:
+ #
+ # save <seconds> <changes>
+ #
+ # Will save the DB if both the given number of seconds and the given
+ # number of write operations against the DB occurred.
+ #
+ # In the example below the behaviour will be to save:
+ # after 900 sec (15 min) if at least 1 key changed
+ # after 300 sec (5 min) if at least 10 keys changed
+ # after 60 sec if at least 10000 keys changed
+ save 900 1
+ save 300 10
+ save 60 10000
+
+ # The filename where to dump the DB
+ dbfilename dump.rdb
+
+ # For default save/load DB in/from the working directory
+ # Note that you must specify a directory not a file name.
+ dir ./test/
+
+ # Set server verbosity to 'debug'
+ # it can be one of:
+ # debug (a lot of information, useful for development/testing)
+ # notice (moderately verbose, what you want in production probably)
+ # warning (only very important / critical messages are logged)
+ loglevel debug
+
+ # Specify the log file name. Also 'stdout' can be used to force
+ # the demon to log on the standard output. Note that if you use standard
+ # output for logging but daemonize, logs will be sent to /dev/null
+ logfile stdout
+
+ # Set the number of databases. The default database is DB 0, you can select
+ # a different one on a per-connection basis using SELECT <dbid> where
+ # dbid is a number between 0 and 'databases'-1
+ databases 16
+
+ ################################# REPLICATION #################################
+
+ # Master-Slave replication. Use slaveof to make a Redis instance a copy of
+ # another Redis server. Note that the configuration is local to the slave
+ # so for example it is possible to configure the slave to save the DB with a
+ # different interval, or to listen to another port, and so on.
+
+ # slaveof <masterip> <masterport>
+
+ ################################## SECURITY ###################################
+
+ # Require clients to issue AUTH <PASSWORD> before processing any other
+ # commands. This might be useful in environments in which you do not trust
+ # others with access to the host running redis-server.
+ #
+ # This should stay commented out for backward compatibility and because most
+ # people do not need auth (e.g. they run their own servers).
+
+ # requirepass foobared
+
+ ################################### LIMITS ####################################
+
+ # Set the max number of connected clients at the same time. By default there
+ # is no limit, and it's up to the number of file descriptors the Redis process
+ # is able to open. The special value '0' means no limts.
+ # Once the limit is reached Redis will close all the new connections sending
+ # an error 'max number of clients reached'.
+
+ # maxclients 128
+
+ # Don't use more memory than the specified amount of bytes.
+ # When the memory limit is reached Redis will try to remove keys with an
+ # EXPIRE set. It will try to start freeing keys that are going to expire
+ # in little time and preserve keys with a longer time to live.
+ # Redis will also try to remove objects from free lists if possible.
+ #
+ # If all this fails, Redis will start to reply with errors to commands
+ # that will use more memory, like SET, LPUSH, and so on, and will continue
+ # to reply to most read-only commands like GET.
+ #
+ # WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
+ # 'state' server or cache, not as a real DB. When Redis is used as a real
+ # database the memory usage will grow over the weeks, it will be obvious if
+ # it is going to use too much memory in the long run, and you'll have the time
+ # to upgrade. With maxmemory after the limit is reached you'll start to get
+ # errors for write operations, and this may even lead to DB inconsistency.
+
+ # maxmemory <bytes>
+
+ ############################### ADVANCED CONFIG ###############################
+
+ # Glue small output buffers together in order to send small replies in a
+ # single TCP packet. Uses a bit more CPU but most of the times it is a win
+ # in terms of number of queries per second. Use 'yes' if unsure.
+ glueoutputbuf yes
+
+ # Use object sharing. Can save a lot of memory if you have many common
+ # string in your dataset, but performs lookups against the shared objects
+ # pool so it uses more CPU and can be a bit slower. Usually it's a good
+ # idea.
+ #
+ # When object sharing is enabled (shareobjects yes) you can use
+ # shareobjectspoolsize to control the size of the pool used in order to try
+ # object sharing. A bigger pool size will lead to better sharing capabilities.
+ # In general you want this value to be at least the double of the number of
+ # very common strings you have in your dataset.
+ #
+ # WARNING: object sharing is experimental, don't enable this feature
+ # in production before of Redis 1.0-stable. Still please try this feature in
+ # your development environment so that we can test it better.
+ #shareobjects no
+ #shareobjectspoolsize 1024
data/test/schema.rb ADDED
@@ -0,0 +1,16 @@
+ ActiveRecord::Schema.define(:version => 3) do
+   create_table :transactionals, :force => true do |t|
+     t.column :name, :string
+     t.column :description, :string
+     t.column :other_id, :integer
+     t.timestamps
+   end
+
+   create_table :transactional_facts, :force => true do |t|
+     t.column :name, :string
+     t.column :description, :string
+     t.column :other_id, :integer
+     t.timestamps
+   end
+
+ end
data/test/test_helper.rb ADDED
@@ -0,0 +1,51 @@
+ dir = File.dirname(File.expand_path(__FILE__))
+ $LOAD_PATH.unshift dir + '/../lib'
+ $TESTING = true
+
+ require 'rubygems'
+ require 'test/unit'
+ require 'resque'
+ require 'active_record'
+ require 'active_record/fixtures'
+ require 'active_support'
+ require 'active_support/test_case'
+ require 'after_commit'
+
+ require 'resque-data-warehouse'
+ require dir + '/test_models'
+
+ config = YAML::load(IO.read(File.dirname(__FILE__) + '/database.yml'))
+ ActiveRecord::Base.configurations = {'test' => config[ENV['DB'] || 'mysql']}
+ ActiveRecord::Base.establish_connection(ActiveRecord::Base.configurations['test'])
+
+ load(File.dirname(__FILE__) + "/schema.rb")
+
+ ##
+ # make sure we can run redis
+ if !system("which redis-server")
+   puts '', "** can't find `redis-server` in your path"
+   puts "** try running `sudo rake install`"
+   abort ''
+ end
+
+ ##
+ # start our own redis when the tests start,
+ # kill it when they end
+ at_exit do
+   next if $!
+
+   if defined?(MiniTest)
+     exit_code = MiniTest::Unit.new.run(ARGV)
+   else
+     exit_code = Test::Unit::AutoRunner.run
+   end
+
+   pid = `ps -e -o pid,command | grep [r]edis-test`.split(" ")[0]
+   puts "Killing test redis server..."
+   `rm -f #{dir}/dump.rdb`
+   Process.kill("KILL", pid.to_i)
+   exit exit_code
+ end
+
+ puts "Starting redis for testing at localhost:9736..."
+ `redis-server #{dir}/redis-test.conf`
+ Resque.redis = '127.0.0.1:9736'
data/test/test_models.rb ADDED
@@ -0,0 +1,10 @@
+ class Transactional < ActiveRecord::Base
+   warehoused
+ end
+
+ class TransactionalFact < ActiveRecord::Base
+   def execute_transaction
+     puts "executing transaction on transactional fact"
+     self.save
+   end
+ end
data/test/warehouse_test.rb ADDED
@@ -0,0 +1,78 @@
+ require File.dirname(__FILE__) + '/test_helper'
+
+ class LockTest < Test::Unit::TestCase
+   def setup
+     $success = $lock_failed = $lock_expired = 0
+     Resque.redis.flushall
+     @worker = Resque::Worker.new(:transaction)
+   end
+
+   def test_lint
+     assert_nothing_raised do
+       Resque::Plugin.lint(Resque::Plugins::DataWarehouse)
+     end
+   end
+
+   def test_create
+     t = Transactional.new(:name=>'Test 1', :description=>'First transaction', :other_id=>2)
+     t.save
+     @worker.process
+     tf = TransactionalFact.find(t.id)
+     assert !tf.nil?
+     assert tf.name==t.name
+     assert tf.description==t.description
+     assert tf.other_id==t.other_id
+   end
+
+   def test_update
+     t = Transactional.new(:name=>'Test 1', :description=>'First transaction', :other_id=>2)
+     t.save
+     @worker.process
+     tf = TransactionalFact.find(t.id)
+     assert !tf.nil?
+     assert tf.name==t.name
+     assert tf.description==t.description
+     assert tf.other_id==t.other_id
+     t.description = 'Change me'
+     t.save
+     @worker.process
+     tf = TransactionalFact.find(t.id)
+     assert !tf.nil?
+     assert tf.name==t.name
+     assert tf.description==t.description
+     assert tf.other_id==t.other_id
+   end
+
+   def test_delete
+     t = Transactional.new(:name=>'Test 1', :description=>'First transaction', :other_id=>2)
+     t.save
+     @worker.process
+     tf = TransactionalFact.find(t.id)
+     assert !tf.nil?
+     assert tf.name==t.name
+     assert tf.description==t.description
+     assert tf.other_id==t.other_id
+     t.destroy
+     @worker.process
+     assert_raise(ActiveRecord::RecordNotFound) do
+       tf = TransactionalFact.find(t.id)
+     end
+   end
+
+   def test_multiple_transactions
+     t = Transactional.new(:name=>'Test 1', :description=>'First transaction', :other_id=>2)
+     t.save
+     assert_raise(ActiveRecord::RecordNotFound) do
+       tf = TransactionalFact.find(t.id)
+     end
+     t.description = 'Update me'
+     t.save
+     @worker.process
+     tf = TransactionalFact.find(t.id)
+     assert !tf.nil?
+     assert tf.name==t.name
+     assert tf.description==t.description
+     assert tf.other_id==t.other_id
+   end
+
+ end
metadata ADDED
@@ -0,0 +1,129 @@
+ --- !ruby/object:Gem::Specification
+ name: resque-data-warehouse
+ version: !ruby/object:Gem::Version
+   hash: 27
+   prerelease: false
+   segments:
+   - 0
+   - 1
+   - 0
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Monica McArthur
+ autorequire:
+ bindir: bin
+ cert_chain: []
+
+ date: 2011-01-21 00:00:00 -08:00
+ default_executable:
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: resque
+   prerelease: false
+   requirement: &id001 !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         hash: 39
+         segments:
+         - 1
+         - 9
+         - 10
+         version: 1.9.10
+   type: :runtime
+   version_requirements: *id001
+ - !ruby/object:Gem::Dependency
+   name: rails
+   prerelease: false
+   requirement: &id002 !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         hash: 11
+         segments:
+         - 2
+         - 3
+         - 4
+         version: 2.3.4
+   type: :runtime
+   version_requirements: *id002
+ - !ruby/object:Gem::Dependency
+   name: after_commit
+   prerelease: false
+   requirement: &id003 !ruby/object:Gem::Requirement
+     none: false
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         hash: 7
+         segments:
+         - 1
+         - 0
+         - 8
+         version: 1.0.8
+   type: :runtime
+   version_requirements: *id003
+ description: " A Resque plugin. Allows you to use Redis to queue up and then Resque to process transactions \n on transaction-heavy tables that need to be replicated on other tables optimized for \n reporting.\n"
+ email: mechaferret@gmail.com
+ executables: []
+
+ extensions: []
+
+ extra_rdoc_files: []
+
+ files:
+ - README.md
+ - Rakefile
+ - LICENSE
+ - HISTORY.md
+ - lib/resque/plugins/data_warehouse/fact.rb
+ - lib/resque/plugins/data_warehouse/transaction.rb
+ - lib/resque/plugins/data_warehouse/transaction_record.rb
+ - lib/resque/plugins/data_warehouse.rb
+ - lib/resque-data-warehouse.rb
+ - test/database.yml
+ - test/debug.log
+ - test/redis-test.conf
+ - test/schema.rb
+ - test/test_helper.rb
+ - test/test_models.rb
+ - test/warehouse_test.rb
+ has_rdoc: true
+ homepage: http://github.com/mechaferret/resque-data-warehouse
+ licenses: []
+
+ post_install_message:
+ rdoc_options: []
+
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   none: false
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       hash: 3
+       segments:
+       - 0
+       version: "0"
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   none: false
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       hash: 3
+       segments:
+       - 0
+       version: "0"
+ requirements: []
+
+ rubyforge_project:
+ rubygems_version: 1.3.7
+ signing_key:
+ specification_version: 3
+ summary: A Resque plugin for using Resque and Redis to store and process transactions between transactional and data warehouse tables.
+ test_files: []
+