slubydoo 0.0.4.1
- data/README +111 -0
- data/lib/slubydoo.rb +4 -0
- data/lib/slubydoo/slony_cluster.rb +206 -0
- data/lib/slubydoo/slony_node.rb +60 -0
- data/lib/slubydoo/slony_preset.rb +116 -0
- data/lib/slubydoo/slony_set.rb +328 -0
- data/lib/slubydoo/slony_tasks.rake +57 -0
- data/lib/slubydoo/tasks.rb +1 -0
- data/test/slubydoo_spec.rb +5 -0
- data/test/spec.opts +2 -0
- data/test/spec_helper.rb +3 -0
- metadata +89 -0
data/README
ADDED
@@ -0,0 +1,111 @@
Copyright (c) August 2008 EFishman, released under the MIT license


-----
Notes
-----
Slubydoo is a feature set that lets you work with Slony, Ruby, and PostgreSQL together. Slubydoo is not just an API: with it you can use migrations to manipulate your database and still use Slony for replication. You may be able to get by with minimal knowledge of Slony; even so, I advise you to read up on Slony to see whether it truly is a fit for your database schema. Slony is trigger based, so Slubydoo is intended to work with Slony's strengths while insulating you from the finer details. I currently use Slubydoo with Ruby on Rails ActiveRecord, but it is easy to change a couple of lines and be off and running with another ORM.


-------------------
Yaml Configurations
-------------------
The first principle to recognize is that Slony is highly configurable. I have attempted to create an equally configurable feature set that adapts to your configurations and choices. There are two configuration files, slony_nodes.yml and slony_paths.yml. This is where you put your database connection information, remote machine connection information, and file system path information. Slubydoo runs off of your configurations.

The application_configuration gem is required; it allows you to tailor these configs to your specific systems.

In your application_configuration.yml file, add a db_slony_replication_status param set to true or false. In your app you can then refer to this value as app_config.db_slony_replication_status. This lets you switch replication on or off, which is useful in development and test modes.
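For instance, a rake task or initializer can consult this flag before doing any replication work. A minimal sketch; the OpenStruct here stands in for the real app_config object that the application_configuration gem provides:

```ruby
require 'ostruct'

# Stand-in for the app_config object provided by the application_configuration
# gem; in a real app this value comes from application_configuration.yml.
app_config = OpenStruct.new(:db_slony_replication_status => false)

if app_config.db_slony_replication_status
  puts "replication on: slony sets will be maintained"
else
  puts "Slony replication disabled; running against a single database."
end
```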
Example app_config/slubydoo/slony_nodes.yml file:

---
nodes:
  -
    node_name: master1
    node_id: 1
    node_schema: public
    node_cluster_name: slony_cluster
    node_host: 127.0.0.1
    node_database_name: master
    node_username: foo
    node_password: password
    node_port: 5432
    node_adapter: postgresql
    node_encoding: unicode
    node_type: master
    node_system_username: foo
    node_system_password: password
    node_slonik_cmd_path: "/usr/local/pgsql/bin/slonik"
    node_slon_cmd_path: "/usr/local/pgsql/bin/slon"
  -
    node_name: slave2
    node_id: 2
    node_schema: public
    node_cluster_name: slony_cluster
    node_host: 127.0.0.0
    node_database_name: slave
    node_username: foo
    node_password: password
    node_port: 5432
    node_adapter: postgresql
    node_encoding: unicode
    node_type: slave
    node_system_username: foo
    node_system_password: password
    node_slonik_cmd_path: "/usr/bin/slonik"
    node_slon_cmd_path: "/usr/bin/slon"

Example app_config/slubydoo/slony_paths.yml file:

slubydoo_initialize_yaml_dir_path: "/lib/slubydoo/sluby_snacks/"
slubydoo_config_dir_path: "/config/app_config/slubydoo/"
slubydoo_node_file_name: "slony_nodes.yml"
slubydoo_tables_file_name: "slon_tables.yml"
slubydoo_sequences_file_name: "slon_sequences.yml"
slubydoo_sets_file_name: "slon_sets.yml"
slubydoo_execute_dir_path: "/tmp/"
slubydoo_log_file_path: "/tmp/foo.log"
slubydoo_slonik_dir_path: "/tmp/"
slubydoo_delete_files_on_remote_hosts: false
slubydoo_refine_table_search_terms: ""
slubydoo_sleep_time: 40
slubydoo_master_slave_process_conditions: "-d 1 -s 20000 -g 50 -t 60000"

Make sure all of these paths exist, in the same locations, on both your master and slave database machines.

------------------
Other Useful Intel
------------------
For legacy tables, you have several choices with Slubydoo. You can create yaml files according to my specifications (see SlonySet), and Slubydoo will create Slony sets and tables based on these yamls of your design. This allows you to design your Slony sets around your database's foreign keys and relationships. The other choice is to automatically create one set for each table that has a primary key (the default); Slubydoo will search your database catalog and determine which tables have primary keys and which ones have sequences. It's your choice, and there is nothing stopping you from altering it further if you prefer a different approach. If you want the magic push-one-button approach where you don't have to configure anything, Slony is NOT for you, and neither is Slubydoo.
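For illustration, here is a slon_sets.yml in the shape that SlonySet's create_all_sets_from_yaml reads (the key names come from that method; the values are hypothetical):

```yaml
---
- setid: 1
  set_origin: 1
  set_comment: "users and related tables"
- setid: 2
  set_origin: 1
  set_comment: "billing tables"
```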

Make sure that you can ssh from the application servers directly to the database machines with just a username, password, and host address. Also make sure you can ssh from the master db to the slave db and vice versa.

I have put SQL and connection-specific information in the SlonyPreset class. That way you only have to change this one file if, for example, you want to switch away from PostgreSQL-specific SQL or from ActiveRecord methods.


----------
Rake Tasks
----------
rake slubydoo:replication_start
This is where to begin. It will create Slony sets based on either your yaml configs or your current schema (the default). It will include tables with primary keys only; if you need to replicate tables without primary keys, you can't use Slony. I put these rake tasks in my app, but you could uncomment them and use them from the gem if you want.

----------
Migrations
----------
See the RDocumentation on how to use Slubydoo with migrations. This works nicely and is pretty much the entire reason I built this functionality.
#replicate_nodes(rep_action, options = {}) {|if block_given?| ...}
#basically runs the same migration code on more than one database; you wrap your migrations like this:
#self.up
#  replicate_nodes(:alter_table, :rep_table => :table1) do
#    add_column :table1, :column2, :integer
#  end
#end

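The control flow behind that wrapper can be sketched self-contained: the migration block runs once on the master, then once per slave node. The ACTIONS array and the explicit slave list here are stand-ins for real DDL and real node hashes (the real method takes no slave_nodes argument; it reads them from the yaml):

```ruby
# Sketch of SlonyNode.replicate_nodes' control flow: the block runs on the
# master first, then is repeated against each slave node's connection.
ACTIONS = []

def replicate_nodes(rep_action, slave_nodes, options = {})
  yield if block_given?            # run the migration on the master node first
  slave_nodes.each do |node|
    # real code: SlonyPreset.establish_node_connection(node)
    yield if block_given?          # repeat the same DDL on the slave
    # real code: SlonyPreset.restore_node_connection(node)
  end
end

replicate_nodes(:alter_table, ["slave2"], :rep_table => :table1) do
  ACTIONS << "add_column :table1, :column2, :integer"
end
```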
---------------
Recommendations
---------------
Browse through the Slubydoo RDocumentation after reading this.
data/lib/slubydoo.rb
ADDED
@@ -0,0 +1,4 @@
require File.join(File.dirname(__FILE__), "slubydoo", "slony_set")
require File.join(File.dirname(__FILE__), "slubydoo", "slony_cluster")
require File.join(File.dirname(__FILE__), "slubydoo", "slony_node")
require File.join(File.dirname(__FILE__), "slubydoo", "slony_preset")
data/lib/slubydoo/slony_cluster.rb
ADDED
@@ -0,0 +1,206 @@
require 'net/ssh'
require 'net/sftp'
require 'stringio'

class SlonyCluster

  class << self

    # Kick it ALL off!
    # Make sure your configs are all set first.
    def slubydoo_start
      bresult = false
      begin
        puts "\nBeginning creation of slony nodes, processing of sets, and starting processes.\n"
        mn = SlonyNode.master_node
        sn = SlonyNode.slave_nodes
        bresult = initialize_nodes(mn, sn)
        bresult = master_and_slave_processes("start", mn, sn) if bresult == true
        bresult = SlonySet.subscribe_sets(mn, sn) if bresult == true
      rescue Exception => ex
        puts "Error while running slubydoo_start: #{ex}\n" if bresult != true
      end # begin
      bresult
    end

    def initialize_nodes(mn, sn)
      bresult = false
      nodes = []
      nodes << mn
      nodes << sn
      nodes.flatten.each do |node|
        begin
          conf_string = connection_namespace(mn, sn)
          conf_string << init_cluster if node["node_type"] == "master"
          conf_string << create_cluster_store(mn, node) if node["node_type"] == "slave"
          sfname = "#{app_config.slubydoo_slonik_dir_path}#{node["node_name"]}_init_" << String.create_random << ".sh"
          bresult = create_file_on_system(node, sfname, conf_string)
          bresult = execute_command_on_system(node, "#{node["node_slonik_cmd_path"]} #{sfname}") if bresult == true
          bresult = delete_file_on_system(node, sfname) if bresult == true and app_config.slubydoo_delete_files_on_remote_hosts
          sleep app_config.slubydoo_sleep_time
        rescue Exception => ex
          puts "Error while initializing nodes: #{ex}\n" if bresult != true
        end
      end
      bresult
    end

    # slon is the daemon application that "runs" Slony-I replication.
    # A slon instance must be run for each node in a Slony-I cluster.
    def master_and_slave_processes(start_stop, mn = SlonyNode.master_node, sn = SlonyNode.slave_nodes, log_path = app_config.slubydoo_log_file_path, conditions = app_config.slubydoo_master_slave_process_conditions)
      bresult = false
      sn = [mn] << sn
      begin
        sn.flatten.each do |node|
          puts %{#{node["node_slon_cmd_path"]} will #{start_stop} after #{Time.now} on db=#{node["node_database_name"]} host=#{node["node_host"]}\n}
          filestr = %{#{node["node_slon_cmd_path"]} #{conditions} #{node["node_cluster_name"]} #{conninfo(node)}} if start_stop == "start"
          filestr = "killall -15 slon" if start_stop == "stop"
          #sfname = "#{app_config.slubydoo_slonik_dir_path}#{node["node_name"]}_#{start_stop}_process_" << String.create_random << ".sh"
          #bresult = create_file_on_system(node, sfname, filestr)
          #bresult = execute_command_on_system(node, "sh #{sfname} &") if bresult == true
          bresult = execute_command_on_system(node, "#{filestr}")
          bresult = true if start_stop == "stop"
        end
      rescue Exception => ex
        puts "Error with master_and_slave_processes: #{ex}\n"
        bresult = false
      end # begin
      bresult
    end

    # This is used to create all sets, tables, sequences, and subscribe those sets,
    # depending on what arguments are in the yamls that you feed it.
    # For ease of use, it creates a file and runs it in the same method.
    def run_slonik_on_system(mn = SlonyNode.master_node, sn = SlonyNode.slave_nodes, in_text = "", description = "")
      bresult = false
      if in_text == ""
        puts "You did not supply any text for run_slonik_on_system. Are your yamls configured correctly?\n"
      else
        begin
          new_file_str = "#!/bin/sh\n"
          new_file_str << connection_namespace(mn, sn) << in_text
          sfname = "#{app_config.slubydoo_slonik_dir_path}slony_#{description.downcase.gsub(/[\s]/, '_')}_" << String.create_random << ".sh"
          bresult = create_file_on_system(mn, sfname, new_file_str)
          bresult = execute_command_on_system(mn, "#{mn["node_slonik_cmd_path"]} #{sfname}") if bresult == true
          bresult = delete_file_on_system(mn, sfname) if bresult == true and app_config.slubydoo_delete_files_on_remote_hosts
          sleep app_config.slubydoo_sleep_time # give things a few seconds to catch up
        rescue Exception => ex
          puts "Error message: #{ex}\nFailed to #{description} \nThis was the attempted text: \n\n#{new_file_str}\n"
        end # begin
      end # if in_text == ""
      bresult
    end

    # allows you to take a command and execute it remotely using the gem net-ssh
    def execute_command_on_system(event_node, in_command)
      bresult = false
      output = ""
      begin
        Net::SSH.start(event_node["node_host"], event_node["node_system_username"], :password => event_node["node_system_password"]) do |ssh|
          output = ssh.exec!(in_command << " > #{app_config.slubydoo_log_file_path} 2>&1 &")
        end # Net::SSH.start
        puts "This command was executed: #{in_command}\nThe host was #{event_node["node_host"]} #{output}\n"
        bresult = true
      rescue Exception => ex
        puts "Failed to execute command: #{in_command}. Error message: #{ex}\n"
      end # begin
      bresult
    end

    # allows you to create a file remotely using the gem net-sftp
    def create_file_on_system(event_node, sfname_remote, text_in_file)
      bresult = false
      output = ""
      begin
        Net::SFTP.start(event_node["node_host"], event_node["node_system_username"], :password => event_node["node_system_password"]) do |sftp|
          io = StringIO.new(text_in_file)
          sftp.upload! io, sfname_remote
        end
        Net::SSH.start(event_node["node_host"], event_node["node_system_username"], :password => event_node["node_system_password"]) do |ssh|
          output = ssh.exec!("chmod u+x #{sfname_remote}" << " > #{app_config.slubydoo_log_file_path} 2>&1 &") # exec prints the results to $stdout; catch them in output
        end # Net::SSH.start
        puts "This file was created: #{sfname_remote}\nThe host was #{event_node["node_host"]}\nThe text in the file is: #{text_in_file}\n #{output}\n"
        bresult = true
      rescue Exception => ex
        puts "Failed to create file: #{sfname_remote}\nError message: #{ex}\n"
        puts "\nThe host was #{event_node["node_host"]}\n" rescue ""
        bresult = false
      end # begin
      bresult
    end

    # allows you to delete a file remotely using the gem net-ssh
    def delete_file_on_system(event_node, sfname_remote)
      bresult = false
      output = ""
      begin
        Net::SSH.start(event_node["node_host"], event_node["node_system_username"], :password => event_node["node_system_password"]) do |ssh|
          output = ssh.exec!("rm #{sfname_remote}" << " > #{app_config.slubydoo_log_file_path} 2>&1 &")
        end # Net::SSH.start
        puts "This file was deleted: #{sfname_remote}\nThe host was #{event_node["node_host"]}\n #{output}\n"
        bresult = true
      rescue Exception => ex
        puts "Failed to delete file: #{sfname_remote}\nError message: #{ex}\n"
        puts "\nThe host was #{event_node["node_host"]}\n" rescue ""
      end # begin
      bresult
    end

    def store_paths(master_node = SlonyNode.master_node, slave_nodes = SlonyNode.slave_nodes)
      str = ""
      master_and_slave_processes("stop")
      slave_nodes.each do |slave_node|
        str << store_path(master_node["node_id"], slave_node["node_id"], conninfo(master_node)) # master node
        str << store_path(slave_node["node_id"], master_node["node_id"], conninfo(slave_node)) # slave nodes
      end
      str
    end


    private

    # used by methods that need connection information
    def conninfo(node)
      %{'dbname=#{node["node_database_name"]} host=#{node["node_host"]} user=#{node["node_username"]} password=#{node["node_password"]} port=#{node["node_port"].to_s}'}
    end

    # init the first node. Its id MUST be 1. This creates the schema
    # _$CLUSTERNAME containing all replication system specific database objects.
    def init_cluster(id = 1, comment = "Master Node")
      %{INIT CLUSTER ( ID = #{id}, COMMENT = '#{comment} #{id}');\n}
    end

    # custom slony store node; adds each subsequent node.
    def store_node(id, comment = "Slave Node", ev = 1)
      %{STORE NODE ( ID = #{id}, COMMENT = '#{comment} #{id}', EVENT NODE = #{ev});\n}
    end

    # custom slony store path. The path is defined by designating one node as a server and the other as a client for messaging.
    def store_path(server, client, conninfo)
      %{STORE PATH (SERVER = #{server}, CLIENT = #{client}, CONNINFO = #{conninfo});\n}
    end

    def drop_path(server, client)
      %{DROP PATH (SERVER = #{server}, CLIENT = #{client});\n}
    end

    # Defines which namespace the replication system uses.
    # Admin conninfos are used by the slonik program to connect to the node databases.
    def connection_namespace(mn, sn)
      conf_string = ""
      conf_string << "CLUSTER NAME = #{mn["node_cluster_name"]};\n"
      conf_string << %{NODE #{mn["node_id"]} ADMIN CONNINFO = #{conninfo(mn)};\n}
      sn.each {|s| conf_string << %{NODE #{s["node_id"]} ADMIN CONNINFO = #{conninfo(s)};\n} }
      conf_string
    end

    # Create the slave node and tell the two nodes how to connect to
    # each other and how they should listen for events.
    def create_cluster_store(mn, slave_node, str = "\n")
      str << store_node(slave_node["node_id"]) # slave nodes
      str << store_path(mn["node_id"], slave_node["node_id"], conninfo(mn)) # master node
      str << store_path(slave_node["node_id"], mn["node_id"], conninfo(slave_node)) # slave nodes
      str
    end

  end # class << self
end # class SlonyCluster
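For reference, the private helpers conninfo and connection_namespace above build the slonik admin preamble that every generated script starts with. A self-contained restatement, run against the two example nodes from the README's slony_nodes.yml:

```ruby
# Re-statement of SlonyCluster's conninfo/connection_namespace helpers,
# fed with trimmed-down versions of the README's example node hashes.
def conninfo(node)
  %{'dbname=#{node["node_database_name"]} host=#{node["node_host"]} user=#{node["node_username"]} password=#{node["node_password"]} port=#{node["node_port"]}'}
end

mn = { "node_id" => 1, "node_cluster_name" => "slony_cluster",
       "node_database_name" => "master", "node_host" => "127.0.0.1",
       "node_username" => "foo", "node_password" => "password", "node_port" => 5432 }
sn = [{ "node_id" => 2, "node_database_name" => "slave", "node_host" => "127.0.0.0",
        "node_username" => "foo", "node_password" => "password", "node_port" => 5432 }]

# Build the CLUSTER NAME / ADMIN CONNINFO preamble slonik scripts begin with.
conf = "CLUSTER NAME = #{mn["node_cluster_name"]};\n"
conf << %{NODE #{mn["node_id"]} ADMIN CONNINFO = #{conninfo(mn)};\n}
sn.each { |s| conf << %{NODE #{s["node_id"]} ADMIN CONNINFO = #{conninfo(s)};\n} }
puts conf
```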
data/lib/slubydoo/slony_node.rb
ADDED
@@ -0,0 +1,60 @@
class SlonyNode


  class << self

    def slave_nodes
      if @slave_nodes.nil? || @slave_nodes.empty?
        x = YAML.load File.open(SlonyPreset.app_slubydoo_config_dir_path << app_config.slubydoo_node_file_name)
        node_info = x["nodes"]
        @slave_nodes = []
        node_info.each do |n|
          @slave_nodes << {}.merge(n) if n["node_type"] == "slave"
        end
      end
      @slave_nodes
    end

    def master_node
      if @master_node.nil? || @master_node.empty?
        x = YAML.load File.open(SlonyPreset.app_slubydoo_config_dir_path << app_config.slubydoo_node_file_name)
        node_info = x["nodes"]
        @master_node = {}
        node_info.each do |n|
          if n["node_type"] == "master"
            @master_node = {}.merge(n)
            break
          end # if n["node_type"] == "master"
        end # node_info.each do |n|
      end
      @master_node
    end

    # basically runs the same migration code on more than one database
    # you wrap your migrations like this:
    # self.up
    #   replicate_nodes(:alter_table, :rep_table => :table1) do
    #     add_column :table1, :column2, :integer
    #   end
    # end
    def replicate_nodes(rep_action, options = {})
      yield if block_given? # run the migration on the master node
      if app_config.db_slony_replication_status == true
        begin
          nodes = slave_nodes
          nodes.each do |node|
            SlonyPreset.establish_node_connection(node)
            yield if block_given?
            SlonyPreset.restore_node_connection(node)
          end
          SlonySet.execute_default_set_script(master_node, nodes, {:rep_action => rep_action}.merge(options)) if rep_action.to_s != "schema_only"
        rescue Exception => ex
          SlonyPreset.restore_node_connection(master_node)
          puts "Error: #{ex}"
        end
      end
    end

  end # class << self

end # class SlonyNode
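The two readers above simply partition the nodes yaml by node_type. A minimal stand-alone illustration, where the inline yaml mirrors (in abbreviated form) the README's slony_nodes.yml example:

```ruby
require 'yaml'

# Inline stand-in for app_config/slubydoo/slony_nodes.yml.
node_info = YAML.load(<<-EOY)["nodes"]
nodes:
  - { node_name: master1, node_id: 1, node_type: master }
  - { node_name: slave2,  node_id: 2, node_type: slave }
EOY

# master_node takes the first master entry; slave_nodes keeps every slave entry.
master_node = node_info.find   { |n| n["node_type"] == "master" }
slave_nodes = node_info.select { |n| n["node_type"] == "slave" }
```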
data/lib/slubydoo/slony_preset.rb
ADDED
@@ -0,0 +1,116 @@
class SlonyPreset
  # put any configurations here that might change depending on the database/adapter/framework

  class << self

    def default_root_path
      RAILS_ROOT
    end

    def default_logger
      RAILS_DEFAULT_LOGGER
    end

    def app_slubydoo_config_dir_path
      "#{default_root_path}#{app_config.slubydoo_config_dir_path}"
    end

    def app_slubydoo_yaml_dir_path
      "#{default_root_path}#{app_config.slubydoo_initialize_yaml_dir_path}"
    end

    def establish_node_connection(node)
      ActiveRecord::Base.establish_connection(
        :adapter => node["node_adapter"],
        :host => node["node_host"],
        :username => node["node_username"],
        :password => node["node_password"],
        :database => node["node_database_name"],
        :port => node["node_port"],
        :encoding => node["node_encoding"],
        :schema_search_path => node["node_schema"]
      )
    end

    def restore_node_connection(node)
      ActiveRecord::Base.establish_connection
    end

    def execute_sql(sqlscript)
      res = ActiveRecord::Base.connection.execute(sqlscript).result.flatten
      res
    end

    def find_all_tables_with_primary_keys(further_refine_search = "")
      execute_sql %{
        SELECT distinct(r.relname)
        FROM pg_class r, pg_constraint c
        WHERE r.oid = c.conrelid AND c.contype = 'p'
        AND r.relname not like 'sl_%' #{further_refine_search}
        ORDER BY r.relname desc;
      }
    end

    def find_all_sequences
      execute_sql %{
        SELECT distinct(relname) FROM pg_class
        WHERE relkind = 'S'
        ORDER BY relname DESC;
      }
    end

    def find_all_columns_in_table(tabname = "")
      execute_sql %{
        SELECT distinct(attname) FROM pg_attribute
        WHERE attrelid = '#{tabname}'::regclass
        AND attisdropped IS FALSE
        AND attnum >= 1
        ORDER BY attnum;
      }
    end

    # this should also check for a sequence in the catalog
    def has_id_seq_column?(tabname = "")
      col_count = execute_sql %{
        SELECT count(*)::integer FROM pg_class
        WHERE relkind = 'S'
        AND relname = '#{default_sequence_name_structure(tabname)}';
      }
      return true if (col_count.first.to_i > 0)
      false
    end

    def find_tables_set(node_cluster_name, tablname)
      execute_sql("SELECT tab_set FROM _#{node_cluster_name}.sl_table WHERE tab_relname = '#{tablname}';").first.to_s
    end

    def find_sequences_set(node_cluster_name, seqname)
      execute_sql("SELECT seq_set FROM _#{node_cluster_name}.sl_sequence WHERE seq_relname = '#{seqname}';").first.to_s
    end

    def find_max_sequence_id(node_cluster_name)
      execute_sql("SELECT (max(seq_id))::integer as foo_id FROM _#{node_cluster_name}.sl_sequence;").first.to_i
    end

    def find_max_table_id(node_cluster_name)
      execute_sql("SELECT (max(tab_id))::integer as foo_id FROM _#{node_cluster_name}.sl_table;").first.to_i
    end

    def find_max_set_id(node_cluster_name)
      execute_sql("SELECT (max(set_id))::integer as foo_id FROM _#{node_cluster_name}.sl_set;").first.to_i
    end

    def bounce_sql_statement
      "SELECT 1;"
    end

    def default_sequence_name_structure(tablename, columname = "id")
      "#{tablename}_#{columname}_seq"
    end

    def drop_rep_schema(schemaname)
      execute_sql("DROP SCHEMA IF EXISTS #{schemaname} CASCADE;")
    end

  end # class << self
end # class SlonyPreset
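has_id_seq_column? above relies on the conventional PostgreSQL serial-sequence name that default_sequence_name_structure produces; restated minimally:

```ruby
# Restatement of SlonyPreset.default_sequence_name_structure: PostgreSQL names
# a serial column's backing sequence <table>_<column>_seq by default.
def default_sequence_name_structure(tablename, columname = "id")
  "#{tablename}_#{columname}_seq"
end
```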
@@ -0,0 +1,328 @@
|
|
1
|
+
class SlonySet
|
2
|
+
|
3
|
+
class << self
|
4
|
+
|
5
|
+
|
6
|
+
######## sets ########
|
7
|
+
# The smallest unit one node can subscribe for replication from another node is a set. A set always has an origin.
|
8
|
+
# ID of the set to be created, ORIGIN = Initial origin node of the set, COMMENT = descriptive text
|
9
|
+
def create_set(id, origin, comment)
|
10
|
+
%{CREATE SET ( ID = #{id}, ORIGIN = #{origin}, COMMENT = '#{comment}');\n}
|
11
|
+
end
|
12
|
+
|
13
|
+
# All sets need to be created if they are new or they have been dropped
|
14
|
+
# this gets run before any create table/sequence runs
|
15
|
+
# after sets have been created, then they can get subscribed.
|
16
|
+
# Creates a set for each table that has a primary key automatically.
|
17
|
+
# You can use the alternative create_all_sets_from_yaml if you want to specify your sets manually.
|
18
|
+
def create_all_sets(master_node = SlonyNode.master_node, set_str = "")
|
19
|
+
prepared_sets = one_set_for_each_table
|
20
|
+
prepared_sets.each_with_index {|r,i| set_str << create_set((i+1).to_s, master_node["node_id"], "set_#{r}")}
|
21
|
+
set_str
|
22
|
+
end
|
23
|
+
|
24
|
+
# This is what uses your configured yaml if you've chosen to use a yaml to configure on your own
|
25
|
+
# otherwise you can use create_all_sets which will automatically create sets based on your db tables.
|
26
|
+
def create_all_sets_from_yaml(set_str = "")
|
27
|
+
prepared_sets = read_slon_set_yaml
|
28
|
+
prepared_sets.each {|c| set_str << create_set(c["setid"], c["set_origin"], c["set_comment"])}
|
29
|
+
set_str
|
30
|
+
end
|
31
|
+
|
32
|
+
# Drop a set of tables from the Slony-I configuration. This automatically unsubscribes all nodes from the set
|
33
|
+
# and restores the original triggers and rules on all subscribers.
|
34
|
+
# ID = ID of the set to be dropped, ORIGIN = Current origin node of the set.
|
35
|
+
def drop_set(id, origin)
|
36
|
+
"DROP SET ( ID = #{id}, ORIGIN = #{origin} );\n"
|
37
|
+
end
|
38
|
+
|
39
|
+
# If you do this, you'll need to run create sets, tables and sequences again
|
40
|
+
# then you'll have to re subscribe those sets again.
|
41
|
+
# This is pretty much a reset button that undoes everything, but you'll need to
|
42
|
+
# use the other resources to put it back together.
|
43
|
+
def drop_sets(master_node = SlonyNode.master_node, set_str = "")
|
44
|
+
prepared_sets = one_set_for_each_table
|
45
|
+
prepared_sets.each_with_index {|r,i| set_str << drop_set((i+1).to_s, master_node["node_id"])}
|
46
|
+
set_str
|
47
|
+
end
|
48
|
+
|
49
|
+
def drop_all_sets_from_yaml(set_str = "")
|
50
|
+
prepared_sets = read_slon_set_yaml
|
51
|
+
prepared_sets.each {|c| set_str << drop_set(c["setid"], c["set_origin"])}
|
52
|
+
set_str
|
53
|
+
end
|
54
|
+
|
55
|
+
# Initiates replication for a replication set OR Revising subscription information for already-subscribed nodes.
|
56
|
+
# ID = ID of the set to subscribe, PROVIDER = Node ID of the data provider from which this node draws data,
|
57
|
+
# RECEIVER = Node ID of the new subscriber, FORWARD = boolean Flag whether or not the new subscriber should store the log information.
|
58
|
+
def subscribe_set( id, receiver, provider, forward = "NO")
|
59
|
+
%{SUBSCRIBE SET ( ID = #{id}, PROVIDER = #{provider}, RECEIVER = #{receiver}, FORWARD = #{forward} );\n}
|
60
|
+
end
|
61
|
+
|
62
|
+
# To get this to work, you need to first shut down the master and slave daemons
|
63
|
+
# then unsubscribe and subscribe, then restart the daemons
|
64
|
+
# otherwise it will not truncate the tables.
|
65
|
+
def subscribe_all_sets(mn = SlonyNode.master_node, sn = SlonyNode.slave_nodes, set_str = "")
|
66
|
+
prepared_sets = one_set_for_each_table
|
67
|
+
sn.each {|node| prepared_sets.each_with_index {|c,i| set_str << subscribe_set((i+1).to_s, node["node_id"], mn["node_id"])}}
|
68
|
+
set_str
|
69
|
+
end
|
70
|
+
|
71
|
+
def subscribe_all_sets_from_yaml(mn, sn, set_str = "")
|
72
|
+
prepared_sets = read_slon_set_yaml
|
73
|
+
sn.each {|node| prepared_sets.each {|c| set_str << subscribe_set(c["setid"], node["node_id"], mn["node_id"])}}
|
74
|
+
set_str
|
75
|
+
end
|
76
|
+
|
77
|
+
# Stops the subscriber from replicating the set.
|
78
|
+
# ID = ID of the set to unsubscribe, RECEIVER = Node ID of the subscriber
|
79
|
+
def unsubscribe_set(id, receiver)
|
80
|
+
%{UNSUBSCRIBE SET ( ID = #{id}, RECEIVER = #{receiver} );\n}
|
81
|
+
end
|
82
|
+
|
83
|
+
# To get this to work, you need to first shut down the master and slave daemons
|
84
|
+
# then unsubscribe and subscribe, then restart the daemons
|
85
|
+
# otherwise it will not truncate the tables.
|
86
|
+
def unsubscribe_all_sets(mn = SlonyNode.master_node, sn = SlonyNode.slave_nodes, set_str = "")
|
87
|
+
prepared_sets = one_set_for_each_table
|
88
|
+
sn.each {|node| prepared_sets.each_with_index {|c,i| set_str << unsubscribe_set((i+1).to_s, node["node_id"])}}
|
89
|
+
set_str
|
90
|
+
end
|
91
|
+
|
92
|
+
def unsubscribe_all_sets_from_yaml(mn, sn, set_str = "")
|
93
|
+
prepared_sets = read_slon_set_yaml
|
94
|
+
sn.each {|node| prepared_sets.each {|c| set_str << unsubscribe_set(c["setid"], node["node_id"])}}
|
95
|
+
set_str
|
96
|
+
end
|
97
|
+
|
98
|
+
######## end of sets ########
|
99
|
+
|
100
|
+
######## tables ########
|
101
|
+
# SET ID of the set to which the table is to be added, ORIGIN = Origin node for the set, ID = Unique ID of the table,
|
102
|
+
# FULLY QUALIFIED NAME = The full table name, COMMENT = descriptive text
|
103
|
+
def set_add_table( set_id, set_tablename, set_tableid, set_comment, set_schema, set_origin)
|
104
|
+
%{SET ADD TABLE (SET ID = #{set_id}, ORIGIN = #{set_origin}, ID = #{set_tableid}, FULLY QUALIFIED NAME = '#{set_schema}.#{set_tablename}', COMMENT = '#{set_schema}.#{set_tablename}');\n}
|
105
|
+
end
|
106
|
+
|
107
|
+
# after a set gets created you can add table/s, sequences to it
|
108
|
+
# Todo: redo the sequence area a bit in next version
|
109
|
+
def add_all_tables_with_sequences(master_node, set_str = "")
|
110
|
+
prepared_sets = one_set_for_each_table
|
111
|
+
prepared_sets.each_with_index do |tabl,i|
|
112
|
+
i += 1
|
113
|
+
set_str << set_add_table(i.to_s, tabl.to_s, i.to_s, tabl.to_s, master_node["node_schema"], master_node["node_id"])
|
114
|
+
set_str << set_add_sequence(i.to_s, SlonyPreset.default_sequence_name_structure(tabl.to_s), i.to_s, SlonyPreset.default_sequence_name_structure(tabl.to_s), master_node["node_schema"], master_node["node_id"]) if SlonyPreset.has_id_seq_column?(tabl)
|
115
|
+
end
|
116
|
+
set_str
|
117
|
+
end
|
118
|
+
|
119
|
+
# Use prepared tables with associated sets you prepare beforehand in a yaml file
|
120
|
+
# it's your choice to do this or just build with add_all_tables.
|
121
|
+
def add_all_tables_from_yaml(set_str = "")
|
122
|
+
prepared_sets = read_table_set_yaml
|
123
|
+
prepared_sets.each {|c| set_str << set_add_table(c["tab_set"], c["tab_relname"], c["tab_id"], c["tab_comment"], c["node_schema"], c["node_id"])}
|
124
|
+
set_str
|
125
|
+
end
|
126
|
+
######## end of tables ########
|
127
|
+
|
128
|
+
######## sequences ########
|
129
|
+
# SET ID of the set to which the sequence is to be added, ORIGIN = Origin node for the set, ID = Unique ID of the sequence,
|
130
|
+
# FULLY QUALIFIED NAME = The full sequence name, COMMENT = descriptive text.
|
131
|
+
# (happier are you if you don't use sequences at all)
|
132
|
+
def set_add_sequence( set_id, set_sequence, set_sequenceid, set_comment, set_schema, set_origin)
|
133
|
+
%{SET ADD SEQUENCE (SET ID = #{set_id}, ORIGIN = #{set_origin}, ID = #{set_sequenceid}, FULLY QUALIFIED NAME = '#{set_schema}.#{set_sequence}', COMMENT = '#{set_schema}.#{set_sequence}');\n}
|
134
|
+
end

    # After a set is created you can add tables and sequences to it,
    # but don't create sets that contain only sequences.
    def add_all_sequences_from_yaml(set_str = "")
      prepared_sets = read_sequence_set_yaml
      prepared_sets.each { |c| set_str << set_add_sequence(c["seq_set"], c["seq_relname"], c["seq_id"], c["seq_comment"], c["node_schema"], c["node_id"]) }
      set_str
    end
    ######## end of sequences ########

    # Install all table, set and sequence information through command-line slonik,
    # for everything, on first-time install.
    def subscribe_sets(mn = SlonyNode.master_node, sn = SlonyNode.slave_nodes)
      result = false
      result = SlonyCluster.run_slonik_on_system(mn, sn, create_all_sets(mn), "create_all_sets")
      result = SlonyCluster.run_slonik_on_system(mn, sn, add_all_tables_with_sequences(mn), "add_all_tables") if result == true
      prepared_sets = one_set_for_each_table
      sn.each { |node| prepared_sets.each_with_index { |c, i| SlonyCluster.run_slonik_on_system(mn, sn, subscribe_set((i + 1).to_s, node["node_id"], mn["node_id"]), "subscribe_sets") } } if result == true
      result
    end

    # This is useful for more freedom in migrations: you can run your migration and also run an sql file.
    # Options are {:rep_table => [:tablename], :rep_sequence => :columnname, :rep_description => string, :rep_filename => "/path", :rep_set_id => "1", :rep_set_origin => "1"}.
    # Slony's EXECUTE SCRIPT needs to run a file; one is optionally created for you if you want to alter something in a migration.
    # There are basically three table manipulation statements in slubydoo: create, drop and alter.
    # Therefore, ways of handling all three are included.
    # Note that :rep_table should be an array of table names.
    # TODO: need to be able to combine table names with create_table.
    def execute_default_set_script(mn = SlonyNode.master_node, sn = SlonyNode.slave_nodes, options = {})
      result = false
      if options[:rep_table].nil? or options[:rep_action].nil? or options[:rep_table].empty?
        puts "Incorrect input, you must supply both :rep_table and :rep_action options."
      else
        options[:rep_table].each do |rep_table_name|
          options[:rep_set_origin] = mn["node_id"].to_s if options[:rep_set_origin].nil?
          options[:rep_description] = rep_table_name.to_s
          case options[:rep_action].to_s
          when "create_table"
            result = create_slonik_table(rep_table_name.to_s, mn, sn, options)
          when "drop_table"
            result = drop_slonik_table_with_set(rep_table_name.to_s, mn, sn, options)
          else # alter_table
            result = alter_slonik_table(rep_table_name.to_s, mn, sn, options)
          end # case options[:rep_action]
        end # options[:rep_table].each
      end
      result
    end


    private

    # Basically executes a script against a particular set derived from the input table,
    # or alternatively you can pass in a set id with :rep_set_id.
    def alter_slonik_table(rep_table_name, mn = SlonyNode.master_node, sn = SlonyNode.slave_nodes, options = {})
      result = false
      begin
        rep_set_id = options[:rep_set_id].to_s if !options[:rep_set_id].nil?
        rep_set_id = SlonyPreset.find_tables_set(mn["node_cluster_name"], rep_table_name) if rep_set_id.nil?
        rep_set_id = SlonyPreset.find_sequences_set(mn["node_cluster_name"], SlonyPreset.default_sequence_name_structure(rep_table_name, options[:rep_sequence].to_s)) if rep_set_id.nil? and !options[:rep_sequence].nil?
        new_rep_filename = options[:rep_filename]
        if new_rep_filename.nil?
          new_rep_filename = "#{app_config.slubydoo_execute_dir_path}#{options[:rep_description].to_s.downcase.gsub(/[\s]/, '_')}_#{String.create_random}.sql"
          result = SlonyCluster.create_file_on_system(mn, new_rep_filename, SlonyPreset.bounce_sql_statement)
        end
        set_str = %{EXECUTE SCRIPT ( SET ID = #{rep_set_id}, FILENAME = '#{new_rep_filename}', EVENT NODE = #{mn["node_id"]} );}
        result = SlonyCluster.run_slonik_on_system(mn, sn, set_str, options[:rep_description].to_s)
        result = SlonyCluster.delete_file_on_system(mn, new_rep_filename) if result == true and app_config.slubydoo_delete_files_on_remote_hosts
      rescue Exception => ex
        puts "Error: #{ex}\n"
        result = false
      end
      result
    end

    # Since there is one set for one table, this method drops the set instead of altering it.
    # If you want this altered you can simply change this to be the same as alter table.
    # TODO: expand this in a later release to support dropping a table from a multi-table set.
    def drop_slonik_table_with_set(rep_table_name, mn = SlonyNode.master_node, sn = SlonyNode.slave_nodes, options = {})
      result = false
      new_str = ""
      begin
        tab_set = SlonyPreset.find_tables_set(mn["node_cluster_name"], rep_table_name)
        sn.each { |slave_node| new_str << unsubscribe_set(tab_set, slave_node["node_id"]) }
        new_str << drop_set(tab_set, options[:rep_set_origin].to_s)
        result = SlonyCluster.run_slonik_on_system(mn, sn, new_str, options[:rep_description].to_s)
      rescue Exception => ex
        puts "Error: #{ex}\n"
        result = false
      end
      result
    end

    # If you don't pass in a sequence name, it will not create a sequence.
    # options = options from migrations or other application scripts.
    # Adds a new set for this table, or creates the table in a supplied set if you give options a :rep_set_id.
    def create_slonik_table(rep_table_name, mn = SlonyNode.master_node, sn = SlonyNode.slave_nodes, options = {})
      result = false
      begin
        new_str = ""
        rep_set_id = options[:rep_set_id].to_s if !options[:rep_set_id].nil?
        rep_set_id = SlonyPreset.find_max_set_id(mn["node_cluster_name"]) + 1 if rep_set_id.nil?
        tableid = SlonyPreset.find_max_table_id(mn["node_cluster_name"]) + 1
        seqid = SlonyPreset.find_max_sequence_id(mn["node_cluster_name"]) + 1 if !options[:rep_sequence].nil?
        new_str << create_set(rep_set_id, options[:rep_set_origin].to_s, options[:rep_description].to_s) if options[:rep_set_id].nil?
        new_str << set_add_table(rep_set_id, rep_table_name, tableid, rep_table_name, mn["node_schema"], mn["node_id"])
        new_str << set_add_sequence(rep_set_id, SlonyPreset.default_sequence_name_structure(rep_table_name, options[:rep_sequence].to_s), seqid, SlonyPreset.default_sequence_name_structure(rep_table_name, options[:rep_sequence].to_s), mn["node_schema"], mn["node_id"]) if !options[:rep_sequence].nil?
        sn.each { |node| new_str << subscribe_set(rep_set_id, node["node_id"], mn["node_id"]) }
        result = SlonyCluster.run_slonik_on_system(mn, sn, new_str, options[:rep_description].to_s)
      rescue Exception => ex
        puts "Error: #{ex}\n"
        result = false
      end
      result
    end

    ######## yaml reader methods ########
    # This is how you can start with existing tables.
    # Put them in yamls; they will get added all at once.
    # From then on you can use the other tools above to add sets/tables/sequences.
    # What I did was dump my current tables and sequences out into yamls,
    # then created a few methods to create the other set yamls.
    # This is completely different for everyone, depending on your current schema,
    # so you need to do this part yourself. Decide how you want to make your sets
    # and which tables/sequences you want in your sets.
    # Set up your yaml files following the pattern below:

    # ---
    # sets:
    # - setid: 1
    #   set_origin: 1
    #   set_comment: create set 1
    # - setid: 2
    #   set_origin: 1
    #   set_comment: create set 2
    def read_slon_set_yaml(prepared_sets = [])
      configs = YAML.load(File.open(SlonyPreset.app_slubydoo_yaml_dir_path + app_config.slubydoo_sets_file_name))
      sets = configs["sets"]
      sets.each { |c| prepared_sets << {}.merge(c) }
      prepared_sets
    end

    # ---
    # tables:
    # - tab_set: "1"
    #   tab_relname: table1
    #   tab_id: "1"
    #   tab_comment: table1 comment
    # - tab_set: "1"
    #   tab_relname: table2
    #   tab_id: "2"
    #   tab_comment: table2 comment
    def read_table_set_yaml(prepared_sets = [])
      configs = YAML.load(File.open(SlonyPreset.app_slubydoo_yaml_dir_path + app_config.slubydoo_tables_file_name))
      sets = configs["tables"]
      x = SlonyNode.master_node
      sets.each { |c| prepared_sets << {}.merge(c).merge(x) }
      prepared_sets
    end

    # ---
    # sequences:
    # - seq_set: "1"
    #   seq_id: "1"
    #   seq_relname: table1_id_seq
    #   seq_comment: table1 sequence comment
    # - seq_set: "1"
    #   seq_id: "2"
    #   seq_relname: table2_id_seq
    #   seq_comment: table2 sequence comment
    def read_sequence_set_yaml(prepared_sets = [])
      configs = YAML.load(File.open(SlonyPreset.app_slubydoo_yaml_dir_path + app_config.slubydoo_sequences_file_name))
      sets = configs["sequences"]
      x = SlonyNode.master_node
      sets.each { |c| prepared_sets << {}.merge(c).merge(x) }
      prepared_sets
    end

    # Default way to make new slony sets in slubydoo,
    # instead of the set-table-sequence yamls.
    def one_set_for_each_table(search_terms = app_config.slubydoo_refine_table_search_terms)
      SlonyPreset.find_all_tables_with_primary_keys(search_terms)
    end

    ######## end of yaml reader methods ########

  end # class << self
end # class SlonySet
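The guard and dispatch inside `execute_default_set_script` can be sketched standalone. This is a minimal, hypothetical mock (the slonik-calling methods are stubbed out as tagged return values), showing only the `:rep_table`/`:rep_action` validation and the dispatch on the action name:

```ruby
# Minimal standalone sketch of execute_default_set_script's option handling.
# The real create/drop/alter slonik methods are replaced by symbolic tags;
# only the guard and the case dispatch mirror the method above.
def dispatch_rep_action(options)
  # Both options are required, and :rep_table must be a non-empty array.
  return :invalid if options[:rep_table].nil? || options[:rep_action].nil? || options[:rep_table].empty?
  options[:rep_table].map do |table|
    case options[:rep_action].to_s
    when "create_table" then [:create, table.to_s]
    when "drop_table"   then [:drop, table.to_s]
    else                     [:alter, table.to_s] # alter_table is the fallback
    end
  end
end

dispatch_rep_action(:rep_table => [:users], :rep_action => "drop_table")
# => [[:drop, "users"]]
```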
data/lib/slubydoo/slony_tasks.rake
ADDED
@@ -0,0 +1,57 @@
# require 'rake'
#
# namespace :slubydoo do
#
#   # ====================================================
#   # task :replication_start
#   # ====================================================
#   desc "Initialize and start replicating all in one command"
#   task :replication_start => :environment do |t|
#     value = SlonyCluster.slubydoo_start if app_config.db_slony_replication_status == true
#     puts "Slony PostgreSQL Replication configuration parameter is not set" if app_config.db_slony_replication_status != true
#   end # task :replication_start
#
#   # ====================================================
#   # task :master_and_slave_processes
#   # ====================================================
#   desc <<-DESC
#     Start or stop individual processes on the master and on the slave \
#     on already initialized nodes.
#   DESC
#   task :master_and_slave_processes => :environment do |t|
#     start_stop = ENV["PROCESS_ACTION"] ||= "start"
#     start_stop = "start" if (start_stop != "start") and (start_stop != "stop")
#     value = SlonyCluster.master_and_slave_processes(start_stop) if app_config.db_slony_replication_status == true
#     puts "Slony PostgreSQL Replication configuration parameter is not set" if app_config.db_slony_replication_status != true
#   end # task :master_and_slave_processes
#
#   # ====================================================
#   # task :drop_sets
#   # ====================================================
#   desc "drop_sets"
#   task :drop_sets => :environment do |t|
#     value = SlonyCluster.run_slonik_on_system(SlonyNode.master_node, SlonyNode.slave_nodes, SlonySet.drop_sets, "drop_sets") if app_config.db_slony_replication_status == true
#     puts "Slony PostgreSQL Replication configuration parameter is not set" if app_config.db_slony_replication_status != true
#   end # task :drop_sets
#
#   # ====================================================
#   # task :drop_set
#   # ====================================================
#   desc "drop_set"
#   task :drop_set => :environment do |t|
#     setid = ENV["SETID"]
#     value = SlonyCluster.run_slonik_on_system(SlonyNode.master_node, SlonyNode.slave_nodes, SlonySet.drop_set(setid, SlonyNode.master_node["node_id"]), "drop_set_#{setid}") if app_config.db_slony_replication_status == true
#     puts "Slony PostgreSQL Replication configuration parameter is not set" if app_config.db_slony_replication_status != true
#   end # task :drop_set
#
#   # ====================================================
#   # task :store_paths
#   # ====================================================
#   desc "If your ip address changes on either your master or slave, run store_paths to get slony up-to-date."
#   task :store_paths => :environment do |t|
#     value = SlonyCluster.run_slonik_on_system(SlonyNode.master_node, SlonyNode.slave_nodes, SlonyCluster.store_paths, "store_paths") if app_config.db_slony_replication_status == true
#     puts "Slony PostgreSQL Replication configuration parameter is not set" if app_config.db_slony_replication_status != true
#   end # task :store_paths
#
#
# end # namespace :slubydoo do
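The commented-out `:master_and_slave_processes` task normalizes `ENV["PROCESS_ACTION"]` so that anything other than `"stop"` becomes `"start"`. A standalone sketch of just that normalization (the surrounding rake and SlonyCluster machinery is omitted):

```ruby
# Standalone sketch of the PROCESS_ACTION handling in the commented-out
# :master_and_slave_processes task above: nil defaults to "start", and any
# value other than "start"/"stop" is coerced back to "start".
def normalize_process_action(raw)
  start_stop = raw || "start"
  start_stop = "start" if (start_stop != "start") and (start_stop != "stop")
  start_stop
end

normalize_process_action(nil)       # => "start"
normalize_process_action("stop")    # => "stop"
normalize_process_action("restart") # => "start"
```

If the tasks were uncommented, they would presumably be driven from the shell in the usual rake style, e.g. `PROCESS_ACTION=stop rake slubydoo:master_and_slave_processes` or `SETID=2 rake slubydoo:drop_set`.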
data/lib/slubydoo/tasks.rb
ADDED
@@ -0,0 +1 @@
load File.join(File.dirname(__FILE__), "slony_tasks.rake")
data/test/spec.opts
ADDED
data/test/spec_helper.rb
ADDED
metadata
ADDED
@@ -0,0 +1,89 @@
--- !ruby/object:Gem::Specification
name: slubydoo
version: !ruby/object:Gem::Version
  version: 0.0.4.1
platform: ruby
authors:
- efishman
autorequire:
bindir: bin
cert_chain: []

date: 2009-06-12 00:00:00 -04:00
default_executable:
dependencies:
- !ruby/object:Gem::Dependency
  name: application_configuration
  version_requirement:
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "="
      - !ruby/object:Gem::Version
        version: 1.5.3.1
    version:
- !ruby/object:Gem::Dependency
  name: net-ssh
  version_requirement:
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "="
      - !ruby/object:Gem::Version
        version: 2.0.3
    version:
- !ruby/object:Gem::Dependency
  name: net-sftp
  version_requirement:
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "="
      - !ruby/object:Gem::Version
        version: 2.0.1
    version:
description: "slubydoo was developed by: efishman"
email:
executables: []

extensions: []

extra_rdoc_files: []

files:
- lib/slubydoo/slony_cluster.rb
- lib/slubydoo/slony_node.rb
- lib/slubydoo/slony_preset.rb
- lib/slubydoo/slony_set.rb
- lib/slubydoo/slony_tasks.rake
- lib/slubydoo/tasks.rb
- lib/slubydoo.rb
- README
has_rdoc: true
homepage:
post_install_message:
rdoc_options: []

require_paths:
- lib
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: "0"
  version:
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: "0"
  version:
requirements: []

rubyforge_project: slubydoo
rubygems_version: 1.0.1
signing_key:
specification_version: 2
summary: slubydoo
test_files:
- test/slubydoo_spec.rb
- test/spec.opts
- test/spec_helper.rb
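The dependency entries in the gemspec above all use the `"="` operator, i.e. exact version pins. The stdlib RubyGems classes illustrate what such a pin accepts (the `"= 2.0.3"` pin below is the gemspec's `net-ssh` requirement):

```ruby
require "rubygems"

# An "=" requirement, as used for every dependency in the gemspec above,
# matches exactly one version and nothing else.
req = Gem::Requirement.new("= 2.0.3")

req.satisfied_by?(Gem::Version.new("2.0.3")) # => true
req.satisfied_by?(Gem::Version.new("2.0.4")) # => false
```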