scbi_queue_system 0.0.2
- data/History.txt +8 -0
- data/Manifest.txt +23 -0
- data/PostInstall.txt +7 -0
- data/README.rdoc +138 -0
- data/Rakefile +26 -0
- data/bin/queue_manager.rb +209 -0
- data/bin/sqs_install_daemon +67 -0
- data/bin/sqstat +74 -0
- data/bin/sqsub +52 -0
- data/lib/scbi_queue_system.rb +57 -0
- data/lib/scbi_queue_system/autolaunch/com.scbi_queue_system.plist +20 -0
- data/lib/scbi_queue_system/autolaunch/sqsd-linux +84 -0
- data/lib/scbi_queue_system/done_job_list.rb +10 -0
- data/lib/scbi_queue_system/internal_config/internal_config.json +3 -0
- data/lib/scbi_queue_system/job_list.rb +112 -0
- data/lib/scbi_queue_system/queued_job_list.rb +57 -0
- data/lib/scbi_queue_system/running_job_list.rb +67 -0
- data/script/console +10 -0
- data/script/destroy +14 -0
- data/script/generate +14 -0
- data/test/submit_script.sh +6 -0
- data/test/test_helper.rb +3 -0
- data/test/test_scbi_queue_system.rb +11 -0
- metadata +105 -0
data/History.txt
ADDED
data/Manifest.txt
ADDED
@@ -0,0 +1,23 @@
bin/queue_manager.rb
bin/sqstat
bin/sqsub
bin/sqs_install_daemon
History.txt
lib/scbi_queue_system/autolaunch/com.scbi_queue_system.plist
lib/scbi_queue_system/autolaunch/sqsd-linux
lib/scbi_queue_system/done_job_list.rb
lib/scbi_queue_system/internal_config/internal_config.json
lib/scbi_queue_system/job_list.rb
lib/scbi_queue_system/queued_job_list.rb
lib/scbi_queue_system/running_job_list.rb
lib/scbi_queue_system.rb
Manifest.txt
PostInstall.txt
Rakefile
README.rdoc
script/console
script/destroy
script/generate
test/submit_script.sh
test/test_helper.rb
test/test_scbi_queue_system.rb
data/PostInstall.txt
ADDED
data/README.rdoc
ADDED
@@ -0,0 +1,138 @@
= scbi_queue_system

* http://www.scbi.uma.es/downloads

== DESCRIPTION:

scbi_queue_system (SQS) handles a simple queue of job executions across multiple machines (clustered installation) or on your own personal computer.

== FEATURES/PROBLEMS:

* SQS can be used as a very simple batch queue system for personal multi-core computers as well as for small clusters
* It handles machines with different numbers of cores/CPUs
* It trusts well-intentioned users, so it does not kill jobs that use more CPUs than requested
* All jobs run under the same user (the one the SQS manager runs under)

== SYNOPSIS:

Once SQS is installed and the queue manager is running, you can start using it:

=== To submit a new job:

  sqsub file.sh

where file.sh is a script file in which you run your programs. Example file issuing the ls and hostname commands:

  $> cat file.sh
  #!/usr/bin/env bash

  ls
  hostname

=== To submit a new job using 4 CPUs:

  sqsub file.sh 4

You can also set the CPU count inside the submit script this way:

  $> cat file.sh
  #!/usr/bin/env bash
  # CPUS = 4

  ls
  hostname

=== To view queue status

  sqstat

To view already done jobs:

  sqstat -d

== REQUIREMENTS:

* OS X / Linux operating systems.

== INSTALL:

=== 1.- Personal/standalone computer:

In a personal installation, all pieces of the queue system are installed on the same computer. To install the queue manager on one computer:

  gem install scbi_queue_system

Once installed, you should start the queue manager:

  queue_manager.rb

By default, it is configured to use localhost with 1 CPU for testing.

<b>NOTE:</b> See automatic startup

=== 2.- Cluster:

On a clustered installation you need to choose a node to act as the queue manager.

==== -On the manager machine

Install SQS on the manager machine:

  gem install scbi_queue_system

Start the queue manager:

  queue_manager.rb

Share your QUEUED folder with the frontend machines and give write-only permission to others (chmod 772).

==== -On the frontend machines:

Then install the sqsub command on your cluster's frontend machines (the one/ones from which you are going to submit jobs):

  gem install scbi_queue_system

=== 3.- Environment variables:

* SQS_BASE_PATH: sets the location of the SQS directories.

* SQS_QUEUED_PATH: sets the location of the QUEUED directory. This location must be shared, with write-only permissions for all SQS clients (those that need to use sqsub).

* SQS_LOG_FILE: sets the location of log files.

=== 4.- Automatic startup

Once the gem is installed you can start the queue manager (setting up automatic startup is recommended so you don't have to start it manually after each reboot). The queue manager should only be started on the manager machine, not on the frontends.

==== -On Linux

==== -On Mac OS X

== LICENSE:

(The MIT License)

Copyright (c) 2011 Dario Guerrero

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
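The `# CPUS = 4` directive shown in the README above is a plain comment that the queue manager parses out of the submitted script (the gem's actual parser lives in `job_list.rb`). A minimal sketch of that parsing, using a hypothetical helper name `cpus_requested`:

```ruby
# Hypothetical helper (not part of the gem): extract the "# CPUS = n"
# directive from a submit script, defaulting to 1 CPU when absent.
def cpus_requested(script_text)
  script_text.each_line do |line|
    # Directive lines look like "# CPUS = 4" (case-insensitive, flexible spacing)
    if line.upcase =~ /^\s*#\s*CPUS\s*=\s*(\d+)/
      return $1.to_i
    end
  end
  1 # default: one CPU when no directive is present
end

script = "#!/usr/bin/env bash\n# CPUS = 4\nls\nhostname\n"
puts cpus_requested(script)            # => 4
puts cpus_requested("#!/bin/bash\nls\n") # => 1
```

Note that the shebang line does not match the directive pattern, because the regexp requires the `#` to be followed only by whitespace before `CPUS`.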
data/Rakefile
ADDED
@@ -0,0 +1,26 @@
require 'rubygems'
gem 'hoe', '>= 2.1.0'
require 'hoe'
require 'fileutils'
require './lib/scbi_queue_system'

Hoe.plugin :newgem
# Hoe.plugin :website
# Hoe.plugin :cucumberfeatures

# Generate all the Rake tasks
# Run 'rake -T' to see list of generated tasks (from gem root directory)
$hoe = Hoe.spec 'scbi_queue_system' do
  self.developer 'Dario Guerrero', 'dariogf@gmail.com'
  self.post_install_message = 'PostInstall.txt' # TODO remove if post-install message not required
  self.rubyforge_name = self.name # TODO this is default value
  self.extra_deps = [['json', '>= 1.5.3']]
end

require 'newgem/tasks'
Dir['tasks/**/*.rake'].each { |t| load t }

# TODO - want other tests/tasks run by default? Add them to the list
# remove_task :default
# task :default => [:spec, :features]
data/bin/queue_manager.rb
ADDED
@@ -0,0 +1,209 @@
#!/usr/bin/env ruby

# $: << File.join(File.dirname(__FILE__),'..','lib')

require 'scbi_queue_system'
require 'logger'

require 'running_job_list'
require 'queued_job_list'
require 'done_job_list'

system_log_message = "SCBI SQS queue at: #{QUEUED_PATH}"
`logger "#{system_log_message}"`

user = `whoami`
system_log_message = "SCBI SQS user: #{user}"
`logger "#{system_log_message}"`

$LOG = Logger.new(LOG_FILE, 10, 1024000)
# $LOG = Logger.new(STDOUT, 10, 1024000)

$LOG.level = Logger::INFO

$LOG.info 'Starting up SQS'

# LOAD CONFIG FILES
def load_config
  $LOG.info 'Reading config files'

  if File.exists?(CONFIG_FILE)
    config_txt = File.read(CONFIG_FILE)

    # remove comments
    config_txt_cleaned = config_txt.split("\n").select { |l| !(l =~ /^\s*\#/) }.join("\n")
    config = JSON::parse(config_txt_cleaned, :symbolize_names => true)
  else
    # default config
    config = {}

    config[:polling_time] = 10
    config[:machine_list] = []

    cpus = 1

    if RUBY_PLATFORM.downcase.include?("darwin")
      cpus = `hwprefs -cpu_count`.chomp.to_i
    else
      cpus = `grep processor /proc/cpuinfo | wc -l`.chomp.to_i
    end

    machine = { :name => 'localhost', :cpus => cpus }

    config[:machine_list] << machine
    # config[:sqs_user] = 'dariogf'

    f = File.open(CONFIG_FILE, 'w')
    f.puts "# You can add comment lines to this config file"
    f.puts JSON::pretty_generate(config)
    f.close
    $LOG.debug "Generating default config. You can modify it by editing #{CONFIG_FILE}"
  end

  $LOG.debug "Config loaded #{config.to_json}"

  # create machine slots
  config[:machine_list].each do |machine|
    FileUtils.mkdir_p(File.join(RUNNING_PATH, machine[:name]))
    FileUtils.mkdir_p(File.join(SENT_PATH, machine[:name]))
  end

  return config
end

def select_next_jobs(machines, queued, running)
  # select next jobs for running
  res = {}

  if !queued.jobs.empty?
    # iterate over machines
    machines.each do |machine|
      # find free cpus
      running_cpus = running.running_cpus_in_machine(machine[:name])
      free_cpus = machine[:cpus] - running_cpus

      $LOG.debug("Free space on #{machine[:name]}: #{free_cpus}")

      # use free cpus
      while (free_cpus > 0) do
        next_job = queued.jobs.find { |job| job[:cpus] <= free_cpus }

        if next_job
          # delete job from queue
          queued.jobs.delete(next_job)

          # discount cpus
          free_cpus -= next_job[:cpus]

          # add job to be run on current machine
          res[machine[:name]] = [] if res[machine[:name]].nil?
          res[machine[:name]] << next_job
        else
          break
        end
      end
    end
  end

  return res
end

def update_running_jobs(machines, running)
  machines.each do |machine|
    running_in_machine = running.running_jobs_in_machine(machine[:name])

    if machine[:name].upcase.index('LOCALHOST')
      cmd = machine[:ps_command] || "ps ax -o command"
    else
      ps_cmd = machine[:ps_command] || "ps ax -o command"
      cmd = "ssh #{machine[:name]} \"#{ps_cmd}\""
    end

    res = `#{cmd}`

    running_in_machine.each do |job|
      sqs_job_file = File.join(RUNNING_PATH, machine[:name], job[:run_script])
      user_job_file = File.join(job[:cwd], job[:run_script])

      if File.exists?(sqs_job_file)
        # if script is not running
        if !res.index(user_job_file)
          # remove it from folder
          File.delete(sqs_job_file)
          FileUtils.mv(File.join(SENT_PATH, machine[:name], job[:run_script]), DONE_PATH)
          $LOG.info("Job finished: #{job[:run_script]}")
        end
      end
    end
  end
end

config = load_config

exit_loop = false

Signal.trap("INT") do
  puts "Terminating SQS manager..."
  puts "Please wait #{config[:polling_time]} seconds..."
  exit_loop = true
end

# event loop
begin
  config = load_config

  # check files
  $LOG.debug "Checking queue and machine status"

  queued = QueuedJobList.new
  running = RunningJobList.new

  update_running_jobs(config[:machine_list], running)

  next_jobs = select_next_jobs(config[:machine_list], queued, running)

  if !next_jobs.empty?
    $LOG.debug("Running next jobs")
    next_jobs.each do |machine, jobs|
      jobs.each do |job|
        QueuedJobList.execute_job(job, machine)
      end
    end
  end

  sleep config[:polling_time]
end while !exit_loop
data/bin/sqs_install_daemon
ADDED
@@ -0,0 +1,67 @@
#!/usr/bin/env ruby

require 'fileutils'

user_name = ARGV.shift

if user_name.nil? || user_name.empty? || user_name == 'root'
  puts "You must provide a user name to run SQS as that user instead of root"
  exit -1
end

if RUBY_PLATFORM.downcase.include?("darwin")
  daemon_path = File.join(File.dirname(__FILE__), '..', 'lib', 'scbi_queue_system', 'autolaunch', 'com.scbi_queue_system.plist')
  destination_path = File.expand_path(File.join('/', 'Library', 'LaunchDaemons', 'com.scbi_queue_system.plist'))

  if system('launchctl list com.scbi_queue_system.plist')
    puts "Daemon already installed, uninstalling it. Run again to install"
    `sudo -u #{user_name} launchctl unload #{destination_path}`
    FileUtils.rm destination_path
  else
    puts "Installing daemon"
    text = File.read(daemon_path)

    text.gsub!('USER_NAME', user_name)

    f = File.open(destination_path, 'w')
    f.puts text
    f.close

    FileUtils.chown user_name, nil, destination_path
    FileUtils.chmod 0755, destination_path

    # FileUtils.cp daemon_path, destination_path
    `sudo -u #{user_name} launchctl load #{destination_path}`
  end
else
  daemon_path = File.join(File.dirname(__FILE__), '..', 'lib', 'scbi_queue_system', 'autolaunch', 'sqsd-linux')
  destination_path = File.join('/', 'etc', 'init.d', 'sqsd-linux')
  check = `chkconfig -l | grep -c sqsd-linux`.chomp
  puts "Checking #{check}"

  if check != '0'
    puts "Daemon already installed, uninstalling it. Run again to install"

    `#{destination_path} stop`
    `chkconfig -d sqsd-linux`
    FileUtils.rm destination_path
  else
    puts "Installing daemon #{daemon_path}"

    text = File.read(daemon_path)
    text.gsub!('USER_NAME', user_name)

    f = File.open(destination_path, 'w')
    f.puts text
    f.close

    FileUtils.chown user_name, nil, destination_path
    FileUtils.chmod 0755, destination_path
    `#{destination_path} start`
    `chkconfig -a sqsd-linux`
  end
end

puts "Finished"
data/bin/sqstat
ADDED
@@ -0,0 +1,74 @@
#!/usr/bin/env ruby

# $: << File.join(File.dirname(__FILE__),'..','lib')

require 'scbi_queue_system'
require 'optparse'

DEFAULT_INTERVAL = 10

options = {}
OptionParser.new do |opts|
  opts.banner = "Usage: #{File.basename($0)} [options]"

  options[:loop] = false
  options[:interval] = DEFAULT_INTERVAL

  opts.on("-l", "--loop [INTERVAL]", "Run continuously") do |i|
    options[:loop] = true
    options[:interval] = i.to_i
    if options[:interval] == 0
      options[:interval] = DEFAULT_INTERVAL
    end
  end

  opts.on("-d", "--done", "Also show done jobs") do |d|
    options[:done] = d
  end
end.parse!

def stat(with_done_jobs)
  queued = QueuedJobList.new
  running = RunningJobList.new

  puts queued.stats_header
  puts queued.to_s
  puts running.to_s
  puts

  if with_done_jobs
    done = DoneJobList.new
    puts "Recently DONE jobs:"
    puts done.to_s
  end
end

if options[:loop]
  exit_loop = false

  Signal.trap("INT") do
    puts "Terminating, wait #{options[:interval]} seconds"
    exit_loop = true
  end

  begin
    # clear screen
    print "\e[2J\e[f"
    stat(options[:done])
    sleep options[:interval]
  end while !exit_loop
else
  stat(options[:done])
end
data/bin/sqsub
ADDED
@@ -0,0 +1,52 @@
#!/usr/bin/env ruby

# $: << File.join(File.dirname(__FILE__),'..','lib')

require 'scbi_queue_system'

if ARGV.count == 0
  puts "#{File.basename($0)} submit_script"
  exit -1
end

submit_script = ARGV.shift

if !File.exists?(submit_script)
  puts "File #{submit_script} doesn't exist"
  exit -1
end

cpus = 1

begin
  cpus = ARGV.shift.to_i
  if cpus == 0
    cpus = 1
  end
rescue
  cpus = 1
end

t = Time.now
filename = ("%04d%02d%02d_%02d%02d%02d_%07d" % [t.year, t.month, t.day, t.hour, t.min, t.sec, t.usec])
filename += ('_' + File.basename(submit_script))

# copy file to queued
original_script = File.read(submit_script)

f = File.open(File.join(QUEUED_PATH, filename), 'a')

f.puts '#!/usr/bin/env bash'
f.puts "cd \"#{Dir.pwd}\""
f.puts "# CWD = #{Dir.pwd}"
f.puts "# CPUS = #{cpus}"
f.puts "# SCRIPT = #{File.expand_path(submit_script)}"
f.puts "# RUN_SCRIPT = #(unknown)"
f.puts original_script
f.close

`chmod +x #{File.join(QUEUED_PATH,filename)}`

puts "File #(unknown) submitted"
data/lib/scbi_queue_system.rb
ADDED
@@ -0,0 +1,57 @@
$:.unshift(File.dirname(__FILE__)) unless
  $:.include?(File.dirname(__FILE__)) || $:.include?(File.expand_path(File.dirname(__FILE__)))

$: << File.join(File.dirname(__FILE__), File.basename(__FILE__, File.extname(__FILE__)))

require 'fileutils'
require 'json'

# load internal config
INTERNAL_CONFIG_PATH = File.join(File.dirname(__FILE__), File.basename(__FILE__, File.extname(__FILE__)), 'internal_config', 'internal_config.json')

if File.exists?(INTERNAL_CONFIG_PATH)
  config_txt = File.read(INTERNAL_CONFIG_PATH)
  internal_config = JSON::parse(config_txt, :symbolize_names => true)
  if ENV['SQS_BASE_PATH']
    internal_config[:base_path] = ENV['SQS_BASE_PATH']
  end
else
  # default config
  internal_config = {}
  internal_config[:base_path] = ENV['SQS_BASE_PATH'] || '~/.sqs'

  f = File.open(INTERNAL_CONFIG_PATH, 'w')
  f.puts JSON::pretty_generate(internal_config)
  f.close
end

JOBS_PATH = File.join(internal_config[:base_path], 'sqs', 'jobs')

CONFIG_PATH = File.join(internal_config[:base_path], 'sqs', 'config')

LOGS_PATH = File.join(internal_config[:base_path], 'sqs', 'logs')

CONFIG_FILE = File.join(CONFIG_PATH, 'config.json')

LOG_FILE = ENV['SQS_LOG_PATH'] || File.join(LOGS_PATH, 'sqs.log')

# create paths
FileUtils.mkdir_p([CONFIG_PATH, LOGS_PATH])

QUEUED_PATH = ENV['SQS_QUEUED_PATH'] || File.join(JOBS_PATH, 'queued')
DONE_PATH = File.join(JOBS_PATH, 'done')
RUNNING_PATH = File.join(JOBS_PATH, 'running')
SENT_PATH = File.join(JOBS_PATH, 'sent')

# create paths
FileUtils.mkdir_p([QUEUED_PATH, DONE_PATH, SENT_PATH, RUNNING_PATH])

require 'running_job_list'
require 'queued_job_list'
require 'done_job_list'

module ScbiQueueSystem
  VERSION = '0.0.2'
end
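The path constants in scbi_queue_system.rb imply a fixed folder tree under the base path (queued, done, sent, and running job folders, plus config and logs). A small sketch that builds the same layout, using a temporary directory instead of the real `~/.sqs`:

```ruby
require 'fileutils'
require 'tmpdir'

# Build the SQS-style tree under a temporary base instead of the real ~/.sqs
base = Dir.mktmpdir
jobs_path = File.join(base, 'sqs', 'jobs')

# job state folders, mirroring QUEUED_PATH, DONE_PATH, SENT_PATH, RUNNING_PATH
%w[queued done sent running].each { |d| FileUtils.mkdir_p(File.join(jobs_path, d)) }

# config and log folders, mirroring CONFIG_PATH and LOGS_PATH
FileUtils.mkdir_p(File.join(base, 'sqs', 'config'))
FileUtils.mkdir_p(File.join(base, 'sqs', 'logs'))

# list the directories that were created, relative to the base
created = Dir.glob(File.join(base, 'sqs', '**', '*'))
             .select { |p| File.directory?(p) }
             .map { |p| p.sub(base + File::SEPARATOR, '') }
puts created.sort
```

Because `queue_manager.rb` also creates a per-machine subfolder under running/ and sent/, a clustered setup adds one more level (e.g. `sqs/jobs/running/localhost`) beyond this sketch.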
data/lib/scbi_queue_system/autolaunch/com.scbi_queue_system.plist
ADDED
@@ -0,0 +1,20 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>KeepAlive</key>
	<true/>
	<key>Label</key>
	<string>com.scbi_queue_system.plist</string>
	<key>OnDemand</key>
	<false/>
	<key>ProgramArguments</key>
	<array>
		<string>queue_manager.rb</string>
	</array>
	<key>RunAtLoad</key>
	<true/>
	<key>UserName</key>
	<string>USER_NAME</string>
</dict>
</plist>
data/lib/scbi_queue_system/autolaunch/sqsd-linux
ADDED
@@ -0,0 +1,84 @@
#!/bin/sh

### BEGIN INIT INFO
# Provides:          SCBI queue system service
# Required-Start:    $syslog
# Required-Stop:     $syslog
# Default-Start:     3 5
# Default-Stop:      0 1 2 4 6
# Short-Description: SCBI queue system service
# Description:       SCBI queue system service
#
### END INIT INFO

SCBI_BIN=/usr/bin/queue_manager.rb
SQS_USER=USER_NAME

# Shell functions sourced from /etc/rc.status:
#      rc_check         check and set local and overall rc status
#      rc_status        check and set local and overall rc status
#      rc_status -v     be verbose in local rc status and clear it afterwards
#      rc_status -v -r  ditto and clear both the local and overall rc status
#      rc_status -s     display "skipped" and exit with status 3
#      rc_status -u     display "unused" and exit with status 3
#      rc_failed        set local and overall rc status to failed
#      rc_failed <num>  set local and overall rc status to <num>
#      rc_reset         clear both the local and overall rc status
#      rc_exit          exit appropriate to overall rc status
#      rc_active        checks whether a service is activated by symlinks
. /etc/rc.status

rc_reset
case "$1" in
    start)
        echo -n "Starting SQS "
        ## Start daemon with startproc(8). If this fails
        ## the return value is set appropriately by startproc.
        /sbin/startproc -u $SQS_USER $SCBI_BIN

        # Remember status and be verbose
        rc_status -v
        ;;
    stop)
        echo -n "Shutting down SQS "
        ## Stop daemon with killproc(8) and if this fails
        ## killproc sets the return value according to LSB.
        /sbin/killproc -TERM $SCBI_BIN

        # Remember status and be verbose
        rc_status -v
        ;;
    restart)
        ## Stop the service and regardless of whether it was
        ## running or not, start it again.
        $0 stop
        $0 start

        # Remember status and be quiet
        rc_status
        ;;
    status)
        echo -n "Checking for service SQS "
        ## Check status with checkproc(8), if process is running
        ## checkproc will return with exit status 0.

        # Return value is slightly different for the status command:
        # 0 - service up and running
        # 1 - service dead, but /var/run/ pid file exists
        # 2 - service dead, but /var/lock/ lock file exists
        # 3 - service not running (unused)
        # 4 - service status unknown :-(
        # 5--199 reserved (5--99 LSB, 100--149 distro, 150--199 appl.)

        # NOTE: checkproc returns LSB compliant status values.
        /sbin/checkproc $SCBI_BIN
        # NOTE: rc_status knows that we called this init script with
        # "status" option and adapts its messages accordingly.
        rc_status -v
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart}"
        exit 1
        ;;
esac
rc_exit
data/lib/scbi_queue_system/job_list.rb
ADDED
@@ -0,0 +1,112 @@
class JobList

  attr_accessor :jobs

  def initialize(folder)
    @jobs = list_files(folder)
  end

  def stats_header
    res = ['']

    res << "#{'Job name'.ljust(40)}\tCPUs\tStatus"

    return res.join("\n")
  end

  def to_s
    res = []

    res << "=" * 80

    @jobs.each do |job|
      res << "#{job[:name]}\t#{job[:cpus]}\t#{self.class.to_s[0]}"
    end

    return res.join("\n")
  end

  private

  def parse_job_file(job)
    h = {}

    h[:file] = job
    text = File.read(job)

    cpus = 1
    cwd = ''
    script = ''
    run_script = ''

    # parse script
    text.each_line do |line|
      if line.upcase =~ /^\s*#\s*CPUS\s*=\s*(\d+)/
        begin
          cpus = $1.to_i
        rescue
          cpus = 1
        end
      end

      if line.upcase =~ /^\s*#\s*CWD\s*=\s*(.+)/
        begin
          # use without upcase for path
          line =~ /^\s*#\s*CWD\s*=\s*(.+)/
          cwd = $1
        rescue
          cwd = ''
        end
      end

      if line.upcase =~ /^\s*#\s*SCRIPT\s*=\s*(.+)/
        begin
          # use without upcase for path
          line =~ /^\s*#\s*SCRIPT\s*=\s*(.+)/
          script = $1
        rescue
          script = ''
        end
      end

      if line.upcase =~ /^\s*#\s*RUN_SCRIPT\s*=\s*(.+)/
        begin
          # use without upcase for path
          line =~ /^\s*#\s*RUN_SCRIPT\s*=\s*(.+)/
          run_script = $1
        rescue
          run_script = ''
        end
      end
    end

    h[:name] = File.basename(job)
    h[:cpus] = cpus
    h[:cwd] = cwd
    h[:script] = script
    h[:run_script] = run_script

    return h
  end

  def list_files(folder)
    res = []
    Dir.glob(File.join(folder, '*')).entries.each do |job|
      h = parse_job_file(job)
      res << h
    end

    return res
  end

end
data/lib/scbi_queue_system/queued_job_list.rb
ADDED
@@ -0,0 +1,57 @@
require 'job_list'

class QueuedJobList < JobList

  def initialize
    super(QUEUED_PATH)
  end

  def self.execute_job(job, machine)
    $LOG.info("Sending job #{job[:name]} to #{machine}")

    job_path = File.join(QUEUED_PATH, job[:name])

    machine_path = File.join(SENT_PATH, machine)

    FileUtils.cp(job_path, File.join(RUNNING_PATH, machine))
    FileUtils.mv(job_path, machine_path)

    output_file = File.join(job[:cwd], job[:name] + '.out')
    error_file = File.join(job[:cwd], job[:name] + '.error')

    copy_cmd = "scp #{File.join(machine_path, job[:name])} #{machine}:#{job[:cwd]}"

    if system(copy_cmd)
      script_on_machine = File.join(job[:cwd], job[:name])

      if machine.upcase.index('LOCALHOST')
        launch_cmd = "nohup #{script_on_machine} </dev/null > #{output_file} 2> #{error_file} &"
      else
        launch_cmd = "ssh #{machine} \"nohup #{script_on_machine} </dev/null > #{output_file} 2> #{error_file} &\""
      end

      $LOG.info("Launch cmd: #{launch_cmd}")
      if system(launch_cmd)
        $LOG.info "LAUNCH DONE"
      else
        $LOG.error "LAUNCH FAILED: #{launch_cmd}"
      end
    else
      $LOG.error "SCP FAILED: #{copy_cmd}"
    end
  end

end
data/lib/scbi_queue_system/running_job_list.rb
ADDED
@@ -0,0 +1,67 @@
require 'job_list'

class RunningJobList < JobList

  def initialize
    super(RUNNING_PATH)
  end

  def running_jobs_in_machine(machine)
    return list_machine_files(RUNNING_PATH, machine)
  end

  def running_cpus_in_machine(machine)
    $LOG.debug(machine)
    res = 0

    running_jobs_in_machine(machine).each do |job|
      res += job[:cpus]
    end

    return res
  end

  def to_s
    res = []

    @jobs.each do |machine, jobs|
      jobs.each do |job|
        res << "#{job[:name]}\t#{job[:cpus]}\tRUNNING\t#{machine}"
      end
    end

    return res.join("\n")
  end

  private

  def list_machine_files(folder, machine_id)
    res = []
    Dir.glob(File.join(folder, machine_id, '*')).entries.each do |job|
      h = parse_job_file(job)
      res << h
    end

    return res
  end

  # process currently running files
  def list_files(folder)
    res = {}

    Dir.glob(File.join(folder, '*')).entries.each do |machine|
      machine_id = File.basename(machine)

      res[machine_id] = list_machine_files(folder, machine_id)
    end

    return res
  end

end
data/script/console
ADDED
@@ -0,0 +1,10 @@
#!/usr/bin/env ruby
# File: script/console
irb = RUBY_PLATFORM =~ /(:?mswin|mingw)/ ? 'irb.bat' : 'irb'

libs = " -r irb/completion"
# Perhaps use a console_lib to store any extra methods I may want available in the console
# libs << " -r #{File.dirname(__FILE__) + '/../lib/console_lib/console_logger.rb'}"
libs << " -r #{File.dirname(__FILE__) + '/../lib/scbi_queue_system.rb'}"
puts "Loading scbi_queue_system gem"
exec "#{irb} #{libs} --simple-prompt"
data/script/destroy
ADDED
@@ -0,0 +1,14 @@
+#!/usr/bin/env ruby
+APP_ROOT = File.expand_path(File.join(File.dirname(__FILE__), '..'))
+
+begin
+  require 'rubigen'
+rescue LoadError
+  require 'rubygems'
+  require 'rubigen'
+end
+require 'rubigen/scripts/destroy'
+
+ARGV.shift if ['--help', '-h'].include?(ARGV[0])
+RubiGen::Base.use_component_sources! [:rubygems, :newgem, :newgem_theme, :test_unit]
+RubiGen::Scripts::Destroy.new.run(ARGV)
data/script/generate
ADDED
@@ -0,0 +1,14 @@
+#!/usr/bin/env ruby
+APP_ROOT = File.expand_path(File.join(File.dirname(__FILE__), '..'))
+
+begin
+  require 'rubigen'
+rescue LoadError
+  require 'rubygems'
+  require 'rubigen'
+end
+require 'rubigen/scripts/generate'
+
+ARGV.shift if ['--help', '-h'].include?(ARGV[0])
+RubiGen::Base.use_component_sources! [:rubygems, :newgem, :newgem_theme, :test_unit]
+RubiGen::Scripts::Generate.new.run(ARGV)
data/test/test_helper.rb
ADDED
metadata
ADDED
@@ -0,0 +1,105 @@
+--- !ruby/object:Gem::Specification
+name: scbi_queue_system
+version: !ruby/object:Gem::Version
+  prerelease:
+  version: 0.0.2
+platform: ruby
+authors:
+- Dario Guerrero
+autorequire:
+bindir: bin
+cert_chain: []
+
+date: 2011-07-12 00:00:00 Z
+dependencies:
+- !ruby/object:Gem::Dependency
+  name: json
+  prerelease: false
+  requirement: &id001 !ruby/object:Gem::Requirement
+    none: false
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 1.5.3
+  type: :runtime
+  version_requirements: *id001
+- !ruby/object:Gem::Dependency
+  name: hoe
+  prerelease: false
+  requirement: &id002 !ruby/object:Gem::Requirement
+    none: false
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 2.8.0
+  type: :development
+  version_requirements: *id002
+description: scbi_queue_system (SQS) handles a simple queue of job executions over multiple machines (clustered installation) or your own personal computer.
+email:
+- dariogf@gmail.com
+executables:
+- queue_manager.rb
+- sqstat
+- sqsub
+- sqs_install_daemon
+extensions: []
+
+extra_rdoc_files:
+- History.txt
+- Manifest.txt
+- PostInstall.txt
+files:
+- bin/queue_manager.rb
+- bin/sqstat
+- bin/sqsub
+- bin/sqs_install_daemon
+- History.txt
+- lib/scbi_queue_system/autolaunch/com.scbi_queue_system.plist
+- lib/scbi_queue_system/autolaunch/sqsd-linux
+- lib/scbi_queue_system/done_job_list.rb
+- lib/scbi_queue_system/internal_config/internal_config.json
+- lib/scbi_queue_system/job_list.rb
+- lib/scbi_queue_system/queued_job_list.rb
+- lib/scbi_queue_system/running_job_list.rb
+- lib/scbi_queue_system.rb
+- Manifest.txt
+- PostInstall.txt
+- Rakefile
+- README.rdoc
+- script/console
+- script/destroy
+- script/generate
+- test/submit_script.sh
+- test/test_helper.rb
+- test/test_scbi_queue_system.rb
+homepage: http://www.scbi.uma.es/downloads
+licenses: []
+
+post_install_message: PostInstall.txt
+rdoc_options:
+- --main
+- README.rdoc
+require_paths:
+- lib
+required_ruby_version: !ruby/object:Gem::Requirement
+  none: false
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: "0"
+required_rubygems_version: !ruby/object:Gem::Requirement
+  none: false
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: "0"
+requirements: []
+
+rubyforge_project: scbi_queue_system
+rubygems_version: 1.7.2
+signing_key:
+specification_version: 3
+summary: scbi_queue_system (SQS) handles a simple queue of job executions over multiple machines (clustered installation) or your own personal computer.
+test_files:
+- test/test_helper.rb
+- test/test_scbi_queue_system.rb
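The YAML above is the serialized `Gem::Specification` that RubyGems ships inside the package; the real spec was generated by Hoe from the Rakefile. For orientation, a sketch of an equivalent hand-written gemspec (field values copied from the metadata; this file is not part of the gem):

```ruby
require 'rubygems'

# Sketch only: rebuilds the gem's metadata as a plain gemspec.
SPEC = Gem::Specification.new do |s|
  s.name     = 'scbi_queue_system'
  s.version  = '0.0.2'
  s.authors  = ['Dario Guerrero']
  s.email    = ['dariogf@gmail.com']
  s.homepage = 'http://www.scbi.uma.es/downloads'
  s.summary  = 'scbi_queue_system (SQS) handles a simple queue of job ' \
               'executions over multiple machines or your own personal computer.'

  s.bindir        = 'bin'
  s.executables   = %w[queue_manager.rb sqstat sqsub sqs_install_daemon]
  s.require_paths = ['lib']

  # Dependencies as declared in the metadata above
  s.add_runtime_dependency     'json', '>= 1.5.3'
  s.add_development_dependency 'hoe',  '>= 2.8.0'
end

puts "#{SPEC.name} #{SPEC.version}"  # prints "scbi_queue_system 0.0.2"
```

The `&id001`/`*id002` markers in the YAML are ordinary anchors and aliases: each dependency's `version_requirements` simply points back at its `requirement` object rather than repeating it.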