db_sucker 3.0.1 → 3.2.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 401d1701ef72b4acc718017a1bd234328e52efc2563439f2dac10fe566098b9d
- data.tar.gz: 0e1d5e4456ae52412fd56b6417d9fb66cf60424a3ea0e922a73db3705a9450c4
+ metadata.gz: 41ee20b2f776fdf1308842c417e64fff4b5d51178bb50ee91db5215ee6e6d660
+ data.tar.gz: 50999e195fa36291660360afee4e78abd4761a5acabb9ea38bb004e9abb937e0
  SHA512:
- metadata.gz: 89202e7556c6f89156e12e67c4c3899fd12caf542e76bb75548b888bb588945031c5628061672e1a8805cd02338ee5d1e1104b069867aac2bf19a3dd8d0251f6
- data.tar.gz: ba11c360337e2ba2952f2e87069b83541d6111ba131e6cd81118026fb26a966b955b75c450bb7caed05aa9923fed18f18fa0baa01c4d359d94b77575d589391f
+ metadata.gz: a743447ba674c4213736395e1a6a7d7f0aed5f2305bef00e4cd1ee7903eb4e458a63a55fad1576de818079d9ea3f13f693f7af4f3519b4d7871b7f9f80301d09
+ data.tar.gz: 6d9c944d4f1b0420d785882b0c60b2c79c8803d9bdec83b801756997dec8db68a01c747a5b37a8a90533071358697ce63b3748675dc796c285c88a1d9449e8d9
data/CHANGELOG.md CHANGED
@@ -1,3 +1,70 @@
+ ## 3.2.1
+
+ ### Updates
+
+ * Added means to signal all threads in order to wake them up as they occasionally get stuck for some reason. Press `S` or use `:signal-threads` to execute. Have yet to find the culprit :)
+ * Slight UX improvements: silent commands briefly flash the screen, invalid commands ring the bell
+
+ ### Fixes
+
+ * don't warn about orphaned concurrent-ruby threads, debug log trace of remaining threads
+ * join killed threads to ensure they are dead
+ * minor sshdiag fixes/enhancements
+
+ -------------------
+
+ ## 3.2.0
+
+ ### Updates
+
+ * Added support for the native sftp command line utility (see application option `file_transport`), but it
+   only works with non-interactive key authentication.
+
+ ### Fixes
+
+ * Prevent application from crashing when eval produces non-StandardErrors (e.g. SyntaxErrors)
+
+ -------------------
+
+ ## 3.1.1
+
+ ### Fixes
+
+ * Prevent a single task (running in main thread) from summoning workers (and failing) when getting deferred
+
+ -------------------
+
+ ## 3.1.0
+
+ ### Fixes
+
+ * Prevent the app from running out of consumers when tasks are waiting for defer ready
+ * Prevent IO errors on Ruby < 2.3 in uncompress
+ * Prevent race conditions in SSH diagnose task
+ * Minor fixes
+
+ -------------------
+
+ ## 3.0.3
+
+ * no changes, fixed my tag screwup
+
+ -------------------
+
+ ## 3.0.2
+
+ * no changes, github tags can't be altered, I screwed up
+
+ -------------------
+
+ ## 3.0.1
+
+ ### Fixes
+
+ * hotfix for 3.0.0 release
+
+ -------------------
+
  ## 3.0.0

  ### Updates
data/README.md CHANGED
@@ -7,7 +7,8 @@ You configure your hosts via an YAML configuration in which you can define multi

  This tool is meant for pulling live data into your development environment. **It is not designed for backups!** but you might get away with it.

- ![screenshot](https://i.imgur.com/EAjWrEd.png)
+ [![screenshot](https://i.imgur.com/EAjWrEd.png)](https://www.youtube.com/watch?v=jdhyQzJOIkE)
+ [▶ see it in action](https://www.youtube.com/watch?v=jdhyQzJOIkE)

  ---
  ## Alpha product (v3 is a rewrite), use at your own risk, always have a backup!
@@ -18,7 +19,7 @@ This tool is meant for pulling live data into your development environment. **It
  * independent parallel dump / download / import cycle for each table
  * verifies file integrity via SHA
  * flashy and colorful curses based interface with keyboard shortcuts
- * more status indications than you would ever want (even more if the remote has a somewhat recent `pv` (pipeviewer) installed)
+ * more status indications than you would ever want (even more if the remote has `pv` (pipeviewer) >= 1.3.8 installed)
  * limit concurrency of certain types of tasks (e.g. limit downloads, imports, etc.)
  * uses more threads than any application should ever use (seriously it's a nightmare)

@@ -31,7 +32,7 @@ Currently `db_sucker` only handles the following data-flow constellation:

  On the local side you will need:
  - unixoid OS
- - Ruby (>= 2.0, != 2.3.1 see gotchas)
+ - Ruby (>= 2.0, != 2.3.0 see [Caveats](#caveats---bugs))
  - mysql2 gem
  - MySQL client (`mysql` command will be used for importing)

@@ -41,6 +42,7 @@ On the remote side you will need:
  - any folder with write permissions (for the temporary dumps)
  - mysqldump executable
  - MySQL credentials :)
+ - Optional: Install "pv" aka pipeviewer with a version >= 1.3.8 for progress displays on remote tasks


  ## Installation
@@ -49,9 +51,13 @@ Simple as:

  $ gem install db_sucker

- At the moment you are advised to adjust the MaxSessions limit on your remote SSH server if you run into issues, see Caveats.
+ You will need mysql2 as well (it's not a dependency as we might support other DBMS in the future):

- You will also need at least one configuration, see Configuration.
+ $ gem install mysql2
+
+ At the moment you are advised to adjust the MaxSessions limit on your remote SSH server if you run into issues, see [Caveats](#caveats---bugs).
+
+ You will also need at least one configuration, see [Configuration](#configuration-for-sucking---yaml-format).


  ## Usage
@@ -124,12 +130,12 @@ To get a list of available interface options and shortcuts press `?` or type `:h

  ## Configuration (application) - Ruby format

- DbSucker has a lot of settings and other mechanisms which you can tweak and utilize by creating a `~/.db_sucker/config.rb` file. You can change settings, add hooks or define own actions. For more information please take a look at the [documented example config](https://github.com/2called-chaos/db_sucker/blob/master/doc/config_example.rb) and/or [complete list of all settings](https://github.com/2called-chaos/db_sucker/blob/master/lib/db_sucker/application.rb#L58-L129).
+ DbSucker has a lot of settings and other mechanisms which you can tweak and utilize by creating a `~/.db_sucker/config.rb` file. You can change settings, add hooks or define own actions. For more information please take a look at the [documented example config](https://github.com/2called-chaos/db_sucker/blob/master/doc/config_example.rb) and/or [complete list of all settings](https://github.com/2called-chaos/db_sucker/blob/master/lib/db_sucker/application.rb#L60-L134).


  ## Deferred import

- Tables with an uncompressed filesize of over 50MB will be queued up for import. Files smaller than 50MB will be imported concurrently with other tables. When all those have finished the large ones will import one after another. You can skip this behaviour with the `-n` resp. `--no-deffer` option. The threshold is changeable in your `config.rb`, see Configuration.
+ Tables with an uncompressed filesize of over 50MB will be queued up for import. Files smaller than 50MB will be imported concurrently with other tables. When all those have finished the large ones will import one after another. You can skip this behaviour with the `-n` resp. `--no-deffer` option. The threshold is changeable in your `config.rb`, see [Configuration](#configuration-application---ruby-format).


  ## Importer
@@ -159,7 +165,6 @@ Currently there is only the "binary" importer which will use the mysql client bi
  ### General

  * Ruby 2.3.0 has a bug that might segfault your ruby if some exceptions occur, this is fixed since 2.3.1 and later
- * Consumers that are waiting (e.g. deferred or slot pool) won't release their tasks, if you have to few consumers you might softlock

  ### SSH errors / MaxSessions

@@ -170,7 +175,7 @@ Under certain conditions the program might softlock when the remote unexpectedly
  If you get warnings that SSH errors occurred (and most likely tasks fail), please do any of the following to prevent the issue:

  * Raise the MaxSession setting on the remote SSHd server if you can (recommended)
- * Lower the amount of slots for concurrent downloads (see Configuration)
+ * Lower the amount of slots for concurrent downloads (see [Configuration](#configuration-application---ruby-format))
  * Lower the amount of consumers (not recommended, use slots instead)

  You can run basic SSH diagnosis tests with `db_sucker <config_identifier> -a sshdiag`.
data/VERSION CHANGED
@@ -1 +1 @@
- 3.0.1
+ 3.2.1
@@ -14,6 +14,10 @@ opts[:deferred_threshold] = 50_000_000 # 50 MB
  opts[:status_format] = :full # used for IO operations, can be one of: none, minimal, full
  opts[:pv_enabled] = true # disable pv utility autodiscovery (force non-usage)

+ # Use native SFTP command for way faster transfer rates, only use with non-interactive key authentication!
+ opts[:file_transport] = :ruby
+ #opts[:file_transport] = :native
+
  # used to open core dumps (should be a blocking call, e.g. `subl -w' or `mate -w')
  # MUST be windowed! vim, nano, etc. will not work!
  opts[:core_dump_editor] = "subl -w"
data/lib/db_sucker.rb CHANGED
@@ -4,7 +4,6 @@ end

  # stdlib
  require "benchmark"
- require "optparse"
  require "fileutils"
  require "thread"
  require "monitor"
@@ -33,6 +32,7 @@ require "db_sucker/application/sklaven_treiber/log_spool"
  require "db_sucker/application/sklaven_treiber/worker/io/base"
  require "db_sucker/application/sklaven_treiber/worker/io/throughput"
  require "db_sucker/application/sklaven_treiber/worker/io/sftp_download"
+ require "db_sucker/application/sklaven_treiber/worker/io/sftp_native_download"
  require "db_sucker/application/sklaven_treiber/worker/io/file_copy"
  require "db_sucker/application/sklaven_treiber/worker/io/file_gunzip"
  require "db_sucker/application/sklaven_treiber/worker/io/file_shasum"
@@ -37,10 +37,17 @@ module DbSucker
    else "Unhandled exception terminated application!"
    end
  ensure
+   $core_runtime_exiting = 1
    app.fire(:core_shutdown)
-   remain = Thread.list.length
-   if remain > 1
-     app.warning "#{remain} threads remain (should be 1)..."
+   remain = app.filtered_threads
+   if remain.length > 1
+     app.error "[WARN] #{remain.length} threads remain (should be 1)..."
+     remain.each do |thr|
+       app.debug "[THR] #{Thread.main == thr ? "MAIN" : "THREAD"}\t#{thr.alive? ? "ALIVE" : "DEAD"}\t#{thr.inspect}", 10
+       thr.backtrace.each do |l|
+         app.debug "[THR]\t#{l}", 20
+       end
+     end
    else
      app.debug "1 thread remains..."
    end
@@ -77,6 +84,12 @@ module DbSucker
  status_format: :full, # used for IO operations, can be one of: none, minimal, full
  pv_enabled: true, # disable pv utility autodiscovery (force non-usage)

+ # file transport: how to copy files from the remote
+ #   ruby    Use ruby sftp-library (why slow?)
+ #   native  Shell out to native sftp command (we try our best to build the command according to your SSH settings)
+ # NOTE: only use with non-interactive key authentication
+ file_transport: :ruby,
+
  # sklaven treiber
  window_enabled: true, # if disabled effectively disables any status progress or window drawing
  window_draw: true, # whether to refresh screen or not
@@ -1,6 +1,12 @@
  module DbSucker
    class Application
      module Core
+       def filtered_threads
+         Thread.list.reject do |thr|
+           thr.backtrace[0]["gems/concurrent-ruby"] rescue false
+         end
+       end
+
        def sandboxed &block
          block.call
        rescue StandardError => ex
@@ -49,20 +55,22 @@ module DbSucker
    end

    # worker info
-   f.puts "\n\n#{sklaventreiber.workers.length} workers:\n"
-   sklaventreiber.workers.each do |w|
-     f.puts "#{"[SSH] " if w.sshing} #{w.descriptive} #{w.state}".strip
-   end
-
-   # slot pool
-   f.puts "\n\n#{sklaventreiber.slot_pools.length} slot pools:\n"
-   sklaventreiber.slot_pools.each do |name, pool|
-     f.puts "#{name}: #{pool.slots}"
-     pool.active.each do |thr|
-       f.puts "\tactive\t #{thr} [#{thr[:itype]}]"
+   if sklaventreiber
+     f.puts "\n\n#{sklaventreiber.workers.length} workers:\n"
+     sklaventreiber.workers.each do |w|
+       f.puts "#{"[SSH] " if w.sshing} #{w.descriptive} #{w.state}".strip
      end
-     pool.waiting.each do |wthr, tthr|
-       f.puts "\twaiting\t #{tthr} [#{tthr[:itype]}]"
+
+     # slot pool
+     f.puts "\n\n#{sklaventreiber.slot_pools.length} slot pools:\n"
+     sklaventreiber.slot_pools.each do |name, pool|
+       f.puts "#{name}: #{pool.slots}"
+       pool.active.each do |thr|
+         f.puts "\tactive\t #{thr} [#{thr[:itype]}]"
+       end
+       pool.waiting.each do |wthr, tthr|
+         f.puts "\twaiting\t #{tthr} [#{tthr[:itype]}]"
+       end
      end
    end
  end
@@ -94,7 +102,7 @@ module DbSucker
  end.tap do |thr|
    # set type and priority
    thr[:itype] = type
-   thr.priority = @opts[:"tp_#{type}"]
+   thr.priority = @opts[:"tp_#{type}"] || 0
    thr.abort_on_exception = true

    # add signal methods
@@ -101,55 +101,73 @@ module DbSucker
        i += 1
      end
    end
-   t.kill
+   t.kill.join
    puts "Thread priority: -#{m[1].abs}..+#{m[2].abs}"
  end

  # via -a/--action sshdiag
  def dispatch_sshdiag
-   log c("\nPlease wait while we run some tests...\n", :blue)
    _identifier, _ctn = false, false, false
    idstr = ARGV.shift
    varstr = ARGV.shift

    configful_dispatch(idstr, varstr) do |identifier, ctn, variation, var|
+     unless ctn
+       abort "This test requires a config identifier with an SSH connection!"
+     end
+
+     log c("\nPlease wait while we run some tests...\n", :blue)
      _identifier = identifier
      _ctn = ctn
      channels = []
+     monitor = Monitor.new
      stop = false
      maxsessions = :unknown
      begin
        t = Thread.new {
          begin
            loop do
-             ctn.loop_ssh(0.1)
+             ctn.loop_ssh(0.1) { monitor.synchronize { channels.any? } }
            end
          rescue DbSucker::Application::Container::SSH::ChannelOpenFailedError
-           maxsessions = channels.length - channels.select{|c| c[:open_failed] }.length
-           stop = true
+           monitor.synchronize do
+             maxsessions = channels.length - channels.select{|c| c[:open_failed] }.length
+             stop = true
+             print "!"
+           end
            retry
          end
        }
        250.times do
-         break if stop
-         c, r = ctn.blocking_channel_result("sleep 60", blocking: false, channel: true, use_sh: true)
-         channels << c
+         break if monitor.synchronize { stop }
+         c, r = ctn.blocking_channel_result("sleep 900", blocking: false, channel: true, use_sh: true)
+         monitor.synchronize do
+           channels << c
+           print "+"
+         end
          sleep 0.1
        end
      ensure
        debug "Stopping sessions (#{channels.length})..."
-       channels.each_with_index do |c, i|
-         debug "Channel ##{i+1} #{c[:pid] ? "with PID #{c[:pid]}" : "has no PID"}"
+       puts # newline for style
+       i = 1
+       loop do
+         break if monitor.synchronize { channels.empty? }
+         c = monitor.synchronize { channels.shift }
+         debug "Channel ##{i} #{c[:pid] ? "with PID #{c[:pid]}" : "has no PID"}"
          ctn.kill_remote_process(c[:pid]) if c[:pid]
+         print "-"
+         i += 1
        end
-       log c("\nSSH MaxSessions: #{c maxsessions, :magenta}", :cyan)
+       maxsessions = "#{maxsessions}+" if maxsessions.to_i >= 250
+       log c("\n\nSSH MaxSessions: #{c maxsessions, :magenta}", :cyan)
        log "This value determines how many sessions we can multiplex over a single TCP connection."
        log "Currently, DbSucker can only utilize one connection, thus this value defines the maximum concurrency."
        log "If you get errors you can either"
        log " * increase the SSHd `MaxSessions' setting on the remote (if you can)"
        log " * reduce the amount of workers and/or remote slots"
        log " * fix the mess that is this tool, visit #{c "https://github.com/2called-chaos/db_sucker", :blue}"
-       t.kill
+       t.kill.join
      end
    end
  rescue Net::SSH::AuthenticationFailed => ex
@@ -242,8 +242,24 @@ module DbSucker
  @threads.each{|t| t[:start] = true; t.signal }

  # master thread (control)
+ additionals = 0
+ Thread.current[:summon_workers] = 0
  while @threads.any?(&:alive?)
    _control_thread
+   Thread.current.sync do
+     Thread.current[:summon_workers].times do
+       app.debug "Spawned additional worker due to deferred import to prevent softlocks"
+       @threads << app.spawn_thread(:sklaventreiber_worker) {|thr|
+         begin
+           additionals += 1
+           thr[:managed_worker] = cnum + additionals
+           _queueoff
+         rescue Interrupt
+         end
+       }
+     end
+     Thread.current[:summon_workers] = 0
+   end
    Thread.current.wait(0.1)
  end
  @threads.each(&:join)
@@ -4,6 +4,7 @@ module DbSucker
  class Worker
    SlotPoolNotInitializedError = Class.new(::RuntimeError)
    ChannelFailRetryError = Class.new(::RuntimeError)
+   UnknownFileTransportError = Class.new(::RuntimeError)

    include Core
    include Accessors
@@ -15,6 +15,12 @@ module DbSucker
    end
  end

+ def sftp_native_download *args, &block
+   IO::SftpNativeDownload.new(self, *args).tap do |op|
+     block.try(:call, op)
+   end
+ end
+
  def file_copy *args, &block
    IO::FileCopy.new(self, *args).tap do |op|
      block.try(:call, op)
@@ -42,7 +42,6 @@ module DbSucker
  ensure
    @state = :finishing
    gz.close
-   @in_file.close
    @out_file.close
  end

@@ -63,7 +63,7 @@ module DbSucker
      end
      killer.signal.join
    ensure
-     killer.kill
+     killer.kill.join
    end
  end
  end
@@ -0,0 +1,79 @@
+ module DbSucker
+   class Application
+     class SklavenTreiber
+       class Worker
+         module IO
+           class SftpNativeDownload < Base
+             UnknownEventError = Class.new(::RuntimeError)
+             attr_reader :downloader
+
+             def init
+               @label = "downloading"
+               @entity = "download"
+               @throughput.categories << :inet << :inet_down
+             end
+
+             def reset_state
+               super
+               @downloader = nil
+             end
+
+             def build_sftp_command src, dst
+               [].tap{|cmd|
+                 cmd << %{sftp}
+                 cmd << %{-P #{@ctn.source["ssh"]["port"]}} if @ctn.source["ssh"]["port"]
+                 @ctn.ssh_key_files.each {|f| cmd << %{-i "#{f}"} }
+                 cmd << %{"#{@ctn.source["ssh"]["username"]}@#{@ctn.source["ssh"]["hostname"]}:#{src}"}
+                 cmd << %{"#{dst}"}
+               }.join(" ").strip
+             end
+
+             def download! opts = {}
+               opts = opts.reverse_merge(tries: 3, read_size: @read_size, force_new_connection: true)
+               cmd = build_sftp_command(@remote, @local)
+               prepare_local_destination
+
+               execute(opts.slice(:tries).merge(sleep_error: 3)) do
+                 begin
+                   @state = :init
+                   @ctn.sftp_start(opts[:force_new_connection]) do |sftp|
+                     @filesize = sftp.lstat!(@remote).size
+                   end
+
+                   # status thread
+                   status_thread = @worker.app.spawn_thread(:sklaventreiber_worker_ctrl) do |thr|
+                     loop do
+                       @offset = File.size(@local) if File.exist?(@local)
+                       break if thr[:stop]
+                       thr.wait(0.25)
+                     end
+                   end
+
+                   @state = :downloading
+                   debug "Opening process `#{cmd}'"
+                   Open3.popen2e(cmd, pgroup: true) do |_stdin, _stdouterr, _thread|
+                     # close & exit status
+                     _stdin.close_write
+                     exit_status = _thread.value
+                     if exit_status == 0
+                       debug "Process exited (#{exit_status}) `#{cmd}'"
+                     else
+                       warning "Process exited (#{exit_status}) `#{cmd}'"
+                     end
+                   end
+
+                   status_thread[:stop] = true
+                   status_thread.signal
+                   status_thread.join
+                 ensure
+                   @state = :finishing
+                 end
+               end
+               @state = :done
+             end
+           end
+         end
+       end
+     end
+   end
+ end
66
66
 
67
67
  def register target
68
68
  sync do
69
- if @instances[target]
69
+ if @instances[target.object_id]
70
70
  raise InstanceAlreadyRegisteredError, "throughput manager cannot register more than once on the same target: `#{target}'"
71
71
  else
72
72
  raise NotImplementedError, "throughput manager requires the target to respond_to?(:filesize)" unless target.respond_to?(:filesize)
73
73
  raise NotImplementedError, "throughput manager requires the target to respond_to?(:offset)" unless target.respond_to?(:offset)
74
- @instances[target] = Instance.new(self, target)
74
+ @instances[target.object_id] = Instance.new(self, target)
75
75
  end
76
76
  end
77
77
  end
@@ -142,11 +142,23 @@ module DbSucker
  @local_file_compressed = local_tmp_file(File.basename(@remote_file_compressed))
  @local_files_to_remove << @local_file_compressed

- sftp_download(@ctn, @remote_file_compressed => @local_file_compressed) do |dl|
-   dl.status_format = app.opts[:status_format]
-   @status = [dl, "yellow"]
-   dl.abort_if { @should_cancel }
-   dl.download!
+ case app.opts[:file_transport]
+ when :ruby
+   sftp_download(@ctn, @remote_file_compressed => @local_file_compressed) do |dl|
+     dl.status_format = app.opts[:status_format]
+     @status = [dl, "yellow"]
+     dl.abort_if { @should_cancel }
+     dl.download!
+   end
+ when :native
+   sftp_native_download(@ctn, @remote_file_compressed => @local_file_compressed) do |dl|
+     dl.status_format = app.opts[:status_format]
+     @status = [dl, "yellow"]
+     dl.abort_if { @should_cancel }
+     dl.download!
+   end
+ else
+   raise UnknownFileTransportError, "Unknown file transport `#{app.opts[:file_transport]}' configured, valid are `ruby' and `native'!"
  end
  end

@@ -247,6 +259,9 @@ module DbSucker

  def _l_wait_for_workers
    @perform << "l_import_file_deferred"
+   unless Thread.current[:managed_worker] == :main
+     Thread.main.sync { Thread.main[:summon_workers] += 1 }
+   end
    wait_defer_ready
  end

@@ -302,7 +317,7 @@ module DbSucker
      end
    }
  else
-   raise ImporterNotFoundError, "variation `#{cfg.name}/#{name}' defines unknown importer `#{imp}' (in `#{cfg.src}')"
+   raise Container::Variation::ImporterNotFoundError, "variation `#{@var.cfg.name}/#{@var.name}' defines unknown importer `#{imp}' (in `#{@var.cfg.src}')"
  end
  t.join
  end
@@ -4,9 +4,6 @@ module DbSucker
  include Core
  include Curses
  COLOR_GRAY = 8
- COL1 = 20
- COL2 = 25
- COL3 = 20
  OutputHelper.hook(self)

  attr_reader :app, :sklaventreiber, :keypad, :tick, :spinner_frames
@@ -193,7 +190,7 @@ module DbSucker
  if opts[:threads]
    next_line
    yellow " Threads: "
-   blue "#{Thread.list.length} ".ljust(COL1, " ")
+   blue "#{Thread.list.length}".ljust(3, " ")
  end

  if opts[:started]
@@ -292,7 +289,7 @@ module DbSucker

  puts
  puts c(" Status: ") << c(sklaventreiber.status[0], sklaventreiber.status[1].presence || "red")
- puts c(" Threads: ") << c("#{Thread.list.length} ".ljust(COL1, " "), :blue)
+ puts c(" Threads: ") << c("#{Thread.list.length} ", :blue)
  puts c(" Started: ") << c("#{@app.boot}", :blue) << c(" (") << c(human_seconds(Time.current - app.boot), :blue) << c(")")
  puts c("Transaction ID: ") << c("#{sklaventreiber.trxid}", :cyan)
  puts c(" Database: ") << c(t_db || "?", :magenta) << c(" (transferred ") << c(t_total || "?", :blue) << c(" of ") << c(t_done || "?", :blue) << c(" tables)")
@@ -83,6 +83,10 @@ module DbSucker
    init_pair(Window::COLOR_GRAY, 0, -1)
  end

+ def flashbang
+   flash
+ end
+
  def set_cursor visibility
    curs_set(visibility)
  end
@@ -20,12 +20,14 @@ module DbSucker
    ["T", "create core dump and open in editor"],
    ["q", "quit prompt"],
    ["Q", "same as ctrl-c"],
+   ["S", "signal/wakeup all threads"],
    [":", "main prompt"],
  ],
  main_commands: [
    [["?", %w[h elp]], [], "shows this help"],
    [[%w[q uit]], [], "quit prompt"],
    [["q!", "quit!"], [], "same as ctrl-c"],
+   [["signal-threads"], [], "signal/wakeup all threads"],
    [["kill"], [], "(dirty) interrupts all workers"],
    [["kill!"], [], "(dirty) essentially SIGKILL (no cleanup)"],
    [["dump"], [], "create and open coredump"],
@@ -50,6 +52,7 @@ module DbSucker
    when "?" then show_help
    when "q" then quit_dialog
    when "Q" then $core_runtime_exiting = 1
+   when "S" then signal_threads
    end
  end
  end
@@ -71,6 +74,8 @@ module DbSucker
    when "eval" then args.any? ? _eval(args.join(" ")) : eval_prompt
    when "p", "pause" then pause_workers(args)
    when "r", "resume" then resume_workers(args)
+   when "signal-threads" then signal_threads
+   else app.print("\a")
    end
  end
  end
@@ -125,6 +130,7 @@ module DbSucker

  def pause_workers args
    if args[0].is_a?(String)
+     window.flashbang
      _detect_worker(args.join(" "), &:pause)
    else
      prompt!("Usage: :p(ause) <table_name|--all>", color: :yellow)
@@ -133,6 +139,7 @@ module DbSucker
  def resume_workers args
    if args[0].is_a?(String)
+     window.flashbang
      _detect_worker(args.join(" "), &:unpause)
    else
      prompt!("Usage: :r(esume) <table_name|--all>", color: :yellow)
@@ -141,6 +148,7 @@ module DbSucker
  def cancel_workers args
    if args[0].is_a?(String)
+     window.flashbang
      _detect_worker(args.join(" ")) do |wrk|
        wrk.cancel! "canceled by user"
      end
@@ -150,16 +158,29 @@ module DbSucker
  end

  def kill_ssh_poll
-   return if sklaventreiber.workers.select{|w| !w.done? || w.sshing }.any?
-   sklaventreiber.poll.try(:kill)
+   if sklaventreiber.workers.select{|w| !w.done? || w.sshing }.any?
+     app.print("\a")
+     prompt!("Error: cannot kill SSH poll whilst in use", color: :red)
+   else
+     window.flashbang
+     sklaventreiber.poll.try(:kill)
+   end
  end

  def kill_workers
+   window.flashbang
    Thread.list.each do |thr|
      thr.raise(Interrupt) if thr[:managed_worker]
    end
  end

+ def signal_threads
+   window.flashbang
+   Thread.list.each do |thr|
+     thr.signal if thr.respond_to?(:signal)
+   end
+ end
+
  def kill_app
    exit!
  end
@@ -171,4 +192,3 @@ module DbSucker
  end
  end
  end
-
@@ -72,7 +72,7 @@ module DbSucker
  begin
    f.puts("#{evil}\n\n")
    f.puts(app.sync{ app.instance_eval(evil) })
- rescue StandardError => ex
+ rescue Exception => ex
    f.puts("#{ex.class}: #{ex.message}")
    ex.backtrace.each {|l| f.puts(" #{l}") }
  end
@@ -12,7 +12,7 @@ module DbSucker
    end
    app.debug "IO-stats: {#{iostats * ", "}}"
  end
- app.dump_core if Thread.list.length > 1
+ app.dump_core if app.filtered_threads.length > 1
  app.debug "RSS: #{app.human_bytes(`ps h -p #{Process.pid} -o rss`.strip.split("\n").last.to_i * 1024)}"
  end

@@ -1,4 +1,4 @@
  module DbSucker
-   VERSION = "3.0.1"
+   VERSION = "3.2.1"
    UPDATE_URL = "https://raw.githubusercontent.com/2called-chaos/db_sucker/master/VERSION"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: db_sucker
  version: !ruby/object:Gem::Version
-   version: 3.0.1
+   version: 3.2.1
  platform: ruby
  authors:
  - Sven Pachnit
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2018-02-05 00:00:00.000000000 Z
+ date: 2021-02-13 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: curses
@@ -175,6 +175,7 @@ files:
  - lib/db_sucker/application/sklaven_treiber/worker/io/file_shasum.rb
  - lib/db_sucker/application/sklaven_treiber/worker/io/pv_wrapper.rb
  - lib/db_sucker/application/sklaven_treiber/worker/io/sftp_download.rb
+ - lib/db_sucker/application/sklaven_treiber/worker/io/sftp_native_download.rb
  - lib/db_sucker/application/sklaven_treiber/worker/io/throughput.rb
  - lib/db_sucker/application/sklaven_treiber/worker/routines.rb
  - lib/db_sucker/application/slot_pool.rb
@@ -209,8 +210,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
  requirements: []
- rubyforge_project:
- rubygems_version: 2.7.3
+ rubygems_version: 3.1.2
  signing_key:
  specification_version: 4
  summary: Sucks your remote databases via SSH for local tampering.