pgdexter 0.2.1 → 0.3.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 739c75ffdf977b9bbe8c29584da2c762f70fe527
- data.tar.gz: 5059b53b96e5208146d3fffc44dcf485f40269e6
+ metadata.gz: 16f61626f5003a801ec12cda3f663390a57e69d8
+ data.tar.gz: 238ee1b24dab3f473accc2ca9a5156fd31e8d19e
  SHA512:
- metadata.gz: c9f071adbd8d2abe21dc454a709ddc2f0b7f9165473eac2a7de29ec4494e6cc17823d30fb000214d4d9f8d7c2cb327bef1a18e90418129868bf9c4011ad3b27c
- data.tar.gz: e235bd08981cd3a0a2a75e266045bac412252bbab0a57623894f98e98442eabd0d8aa2923c8641dffbaed083aa1f3d026631f20983c411112502575052977676
+ metadata.gz: ccae6266196fc84ea0fc69177b80144f8747996ff0693b6a7492e34d095b2cb400e9ce3299741326c50a5eb267ef48be368384a2ab27553002eb93ac7b6d0922
+ data.tar.gz: 944bc27bb1ffff546c2f33aa10aa1b8371ba43ae2be889a6b9445c3d3a93ca36fee0cfd57487134a5ea807e37583f74b1e5e9de7714fa30364ab28f64a5679e8
data/CHANGELOG.md CHANGED
@@ -1,3 +1,11 @@
+ ## 0.3.0
+
+ - Added support for schemas
+ - Added support for csv format
+ - Added `--analyze` option and do not analyze by default
+ - Added `--min-calls` option
+ - Fixed debug output when indexes not found
+
  ## 0.2.1

  - Fixed bad suggestions
data/README.md CHANGED
@@ -12,8 +12,8 @@ First, install [HypoPG](https://github.com/dalibo/hypopg) on your database server
  ```sh
  cd /tmp
- curl -L https://github.com/dalibo/hypopg/archive/1.0.0.tar.gz | tar xz
- cd hypopg-1.0.0
+ curl -L https://github.com/dalibo/hypopg/archive/1.1.0.tar.gz | tar xz
+ cd hypopg-1.1.0
  make
  make install # may need sudo
  ```
@@ -45,23 +45,23 @@ tail -F -n +1 <log-file> | dexter <connection-options>
  This finds slow queries and generates output like:

  ```
- 2017-06-25T17:52:19+00:00 Started
- 2017-06-25T17:52:22+00:00 Processing 189 new query fingerprints
- 2017-06-25T17:52:22+00:00 Index found: genres_movies (genre_id)
- 2017-06-25T17:52:22+00:00 Index found: genres_movies (movie_id)
- 2017-06-25T17:52:22+00:00 Index found: movies (title)
- 2017-06-25T17:52:22+00:00 Index found: ratings (movie_id)
- 2017-06-25T17:52:22+00:00 Index found: ratings (rating)
- 2017-06-25T17:52:22+00:00 Index found: ratings (user_id)
- 2017-06-25T17:53:22+00:00 Processing 12 new query fingerprints
+ Started
+ Processing 189 new query fingerprints
+ Index found: public.genres_movies (genre_id)
+ Index found: public.genres_movies (movie_id)
+ Index found: public.movies (title)
+ Index found: public.ratings (movie_id)
+ Index found: public.ratings (rating)
+ Index found: public.ratings (user_id)
+ Processing 12 new query fingerprints
  ```

  To be safe, Dexter will not create indexes unless you pass the `--create` flag. In this case, you’ll see:

  ```
- 2017-06-25T17:52:22+00:00 Index found: ratings (user_id)
- 2017-06-25T17:52:22+00:00 Creating index: CREATE INDEX CONCURRENTLY ON "ratings" ("user_id")
- 2017-06-25T17:52:37+00:00 Index created: 15243 ms
+ Index found: public.ratings (user_id)
+ Creating index: CREATE INDEX CONCURRENTLY ON "public"."ratings" ("user_id")
+ Index created: 15243 ms
  ```

  ## Connection Options
@@ -84,30 +84,58 @@ and connection strings:
  host=localhost port=5432 dbname=mydb
  ```

- ## Options
+ ## Collecting Queries

- Name | Description | Default
- --- | --- | ---
- exclude | prevent specific tables from being indexed | None
- interval | time to wait between processing queries, in seconds | 60
- log-level | `debug` gives additional info for suggested indexes<br />`debug2` gives additional info for processed queries<br />`error` suppresses logging | info
- log-sql | log SQL statements executed | false
- min-time | only process queries consuming a min amount of DB time, in minutes | 0
+ There are many ways to collect queries. For real-time indexing, pipe your logfile:

- ## Non-Streaming Modes
+ ```sh
+ tail -F -n +1 <log-file> | dexter <connection-options>
+ ```

- You can pass a single statement with:
+ Pass a single statement with:

  ```sh
  dexter <connection-options> -s "SELECT * FROM ..."
  ```

- or files with:
+ or pass files:

  ```sh
  dexter <connection-options> <file1> <file2>
  ```

+ or use the [pg_stat_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) extension:
+
+ ```sh
+ dexter <connection-options> --pg-stat-statements
+ ```
+
+ ### Collection Options
+
+ To prevent one-off queries from being indexed, specify a minimum number of calls before a query is considered for indexing:
+
+ ```sh
+ dexter --min-calls 100
+ ```
+
+ You can do the same for the total time a query has run:
+
+ ```sh
+ dexter --min-time 10 # minutes
+ ```
+
+ Specify the input format:
+
+ ```sh
+ dexter --input-format csv
+ ```
+
+ When streaming logs, specify the time to wait between processing queries:
+
+ ```sh
+ dexter --interval 60 # seconds
+ ```
+
  ## Examples

  Ubuntu with PostgreSQL 9.6
@@ -122,6 +150,36 @@ Homebrew on Mac
  tail -F -n +1 /usr/local/var/postgres/server.log | dexter dbname
  ```

+ ## Tables
+
+ You can exclude large or write-heavy tables from indexing with:
+
+ ```sh
+ dexter --exclude table1,table2
+ ```
+
+ Alternatively, you can specify which tables to index with:
+
+ ```sh
+ dexter --include table3,table4
+ ```
+
+ ## Debugging
+
+ See how Dexter is processing queries with:
+
+ ```sh
+ dexter --log-sql --log-level debug2
+ ```
+
+ ## Analyze
+
+ For best results, make sure your tables have been recently analyzed so statistics are up-to-date. You can ask Dexter to analyze tables it comes across that haven’t been analyzed in the past hour with:
+
+ ```sh
+ dexter --analyze
+ ```
+
  ## Hosted Postgres

  Some hosted providers like Amazon RDS and Heroku do not support the HypoPG extension, which Dexter needs to run. See [how to use Dexter](guides/Hosted-Postgres.md) in these cases.
@@ -130,6 +188,21 @@ Some hosted providers like Amazon RDS and Heroku do not support the HypoPG exten

  [Here are some ideas](https://github.com/ankane/dexter/issues/1)

+ ## Upgrading
+
+ Run:
+
+ ```sh
+ gem install pgdexter
+ ```
+
+ To use master, run:
+
+ ```sh
+ gem install specific_install
+ gem specific_install https://github.com/ankane/dexter.git
+ ```
+
  ## Thanks

  This software wouldn’t be possible without [HypoPG](https://github.com/dalibo/hypopg), which allows you to create hypothetical indexes, and [pg_query](https://github.com/lfittl/pg_query), which allows you to parse and fingerprint queries. A big thanks to Dalibo and Lukas Fittl respectively.
data/guides/Hosted-Postgres.md CHANGED
@@ -96,7 +96,7 @@ pg_restore -v -j 8 -x -O --format=d -d dexter_restore /tmp/newout.dir/
  ### Run Dexter

  ```sh
- dexter dexter_restore postgresql.log*
+ dexter dexter_restore postgresql.log* --analyze
  ```

  :tada:
data/lib/dexter/client.rb CHANGED
@@ -29,10 +29,13 @@ module Dexter
  dexter [options]

  Options:)
+ o.boolean "--analyze", "analyze tables that haven't been analyzed in the past hour", default: false
  o.boolean "--create", "create indexes", default: false
  o.array "--exclude", "prevent specific tables from being indexed"
  o.string "--include", "only include specific tables"
+ o.string "--input-format", "input format", default: "stderr"
  o.integer "--interval", "time to wait between processing queries, in seconds", default: 60
+ o.float "--min-calls", "only process queries that have been called a certain number of times", default: 0
  o.float "--min-time", "only process queries that have consumed a certain amount of DB time, in minutes", default: 0
  o.boolean "--pg-stat-statements", "use pg_stat_statements", default: false, help: false
  o.boolean "--log-explain", "log explain", default: false, help: false
@@ -63,7 +66,7 @@ Options:)
  # TODO don't use global var
  $log_level = options[:log_level].to_s.downcase
- abort "Unknown log level" unless ["error", "info", "debug", "debug2"].include?($log_level)
+ abort "Unknown log level" unless ["error", "info", "debug", "debug2", "debug3"].include?($log_level)

  [arguments, options]
  rescue Slop::Error => e
data/lib/dexter/collector.rb CHANGED
@@ -5,6 +5,7 @@ module Dexter
    @new_queries = Set.new
    @mutex = Mutex.new
    @min_time = options[:min_time] * 60000 # convert minutes to ms
+   @min_calls = options[:min_calls]
  end

  def add(query, duration)
@@ -36,7 +37,7 @@ module Dexter
  queries = []
  @top_queries.each do |k, v|
-   if new_queries.include?(k) && v[:total_time] > @min_time
+   if new_queries.include?(k) && v[:total_time] >= @min_time && v[:calls] >= @min_calls
      query = Query.new(v[:query], k)
      query.total_time = v[:total_time]
      query.calls = v[:calls]
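
The new thresholds combine with an AND: a fingerprint must meet both `--min-time` and `--min-calls` before it is handed to the indexer. A minimal standalone sketch of that predicate (simplified data, not the actual `Collector` class):

```ruby
# Sketch: how --min-time (minutes) and --min-calls gate queries.
# min_time is compared in milliseconds internally (minutes * 60000).
min_time_ms = 10 * 60_000  # dexter --min-time 10
min_calls = 100            # dexter --min-calls 100

top_queries = {
  "abc123" => {query: "SELECT ...", total_time: 900_000, calls: 250},
  "def456" => {query: "SELECT ...", total_time: 1_200_000, calls: 3}
}

eligible = top_queries.select do |_fingerprint, v|
  v[:total_time] >= min_time_ms && v[:calls] >= min_calls
end
puts eligible.keys  # => abc123 (def456 ran long overall, but too rarely)
```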
data/lib/dexter/csv_log_parser.rb ADDED
@@ -0,0 +1,13 @@
+ require "csv"
+
+ module Dexter
+   class CsvLogParser < LogParser
+     def perform
+       CSV.foreach(@logfile.file) do |row|
+         if (m = REGEX.match(row[13]))
+           process_entry(m[3], m[1].to_f)
+         end
+       end
+     end
+   end
+ end
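
This assumes Postgres `csvlog` output, where the log message is the fourteenth CSV field (`row[13]`); the inherited `REGEX` then pulls out the duration and statement. A rough illustration with a hypothetical one-line entry (the 13 leading fields mostly elided):

```ruby
require "csv"

# Hypothetical csvlog row: 13 leading fields, then the message in field 14.
line = (["2017-12-23 00:00:00 UTC"] + [""] * 12 +
        ["duration: 12.345 ms statement: SELECT * FROM ratings"]).to_csv

REGEX = /duration: (\d+\.\d+) ms (statement|execute <unnamed>|parse <unnamed>): (.+)/

CSV.parse(line) do |row|
  if (m = REGEX.match(row[13]))
    puts "#{m[1]} ms: #{m[3]}"  # => 12.345 ms: SELECT * FROM ratings
  end
end
```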
data/lib/dexter/indexer.rb CHANGED
@@ -10,6 +10,8 @@ module Dexter
  @log_sql = options[:log_sql]
  @log_explain = options[:log_explain]
  @min_time = options[:min_time] || 0
+ @min_calls = options[:min_calls] || 0
+ @analyze = options[:analyze]
  @options = options

  create_extension unless extension_exists?
@@ -26,24 +28,39 @@ module Dexter
  # reset hypothetical indexes
  reset_hypothetical_indexes

- # filter queries from other databases and system tables
- tables = possible_tables(queries)
- queries.each do |query|
-   query.missing_tables = !query.tables.all? { |t| tables.include?(t) }
- end
+ tables = Set.new(database_tables)

  if @include_tables
-   tables = Set.new(tables.to_a & @include_tables)
+   include_set = Set.new(@include_tables)
+   tables.keep_if { |t| include_set.include?(t) || include_set.include?(t.split(".")[-1]) }
+ end
+
+ if @exclude_tables.any?
+   exclude_set = Set.new(@exclude_tables)
+   tables.delete_if { |t| exclude_set.include?(t) || exclude_set.include?(t.split(".")[-1]) }
+ end
+
+ # map tables without schema to schema
+ no_schema_tables = {}
+ search_path_index = Hash[search_path.map.with_index.to_a]
+ tables.group_by { |t| t.split(".")[-1] }.each do |group, t2|
+   no_schema_tables[group] = t2.sort_by { |t| search_path_index[t.split(".")[0]] || 1000000 }[0]
  end

- # exclude user specified tables
- # TODO exclude write-heavy tables
- @exclude_tables.each do |table|
-   tables.delete(table)
+ # filter queries from other databases and system tables
+ queries.each do |query|
+   # add schema to table if needed
+   query.tables = query.tables.map { |t| no_schema_tables[t] || t }
+
+   # check for missing tables
+   query.missing_tables = !query.tables.all? { |t| tables.include?(t) }
  end

+ # set tables
+ tables = Set.new(queries.reject(&:missing_tables).flat_map(&:tables))
+
  # analyze tables if needed
- analyze_tables(tables) if tables.any?
+ analyze_tables(tables) if tables.any? && (@analyze || @log_level == "debug2")

  # create hypothetical indexes and explain queries
  candidates = tables.any? ? create_hypothetical_indexes(queries.reject(&:missing_tables), tables) : {}
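
The schema-resolution step above is the heart of the new schema support: fully qualified names pass through, while bare names map to the first matching schema on the search path. A standalone sketch of that mapping (sample data; the real code reads `SHOW search_path`):

```ruby
require "set"

search_path = ["$user", "public"]  # what SHOW search_path might return
tables = Set.new(["public.ratings", "archive.ratings", "public.movies"])

# rank schemas by their position on the search path
search_path_index = Hash[search_path.map.with_index.to_a]

no_schema_tables = {}
tables.group_by { |t| t.split(".")[-1] }.each do |name, candidates|
  # an unqualified name resolves to the earliest schema on the path
  no_schema_tables[name] = candidates.sort_by { |t| search_path_index[t.split(".")[0]] || 1_000_000 }[0]
end

puts no_schema_tables["ratings"]  # => public.ratings (beats archive.ratings)
```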
@@ -81,14 +98,13 @@ module Dexter

  analyze_stats = execute <<-SQL
    SELECT
-     schemaname AS schema,
-     relname AS table,
+     schemaname || '.' || relname AS table,
      last_analyze,
      last_autoanalyze
    FROM
      pg_stat_user_tables
    WHERE
-     relname IN (#{tables.map { |t| quote(t) }.join(", ")})
+     schemaname || '.' || relname IN (#{tables.map { |t| quote(t) }.join(", ")})
  SQL

  last_analyzed = {}
@@ -97,7 +113,14 @@ module Dexter
  end

  tables.each do |table|
-   if !last_analyzed[table] || last_analyzed[table] < Time.now - 3600
+   la = last_analyzed[table]
+
+   if @log_level == "debug2"
+     time_str = la ? la.iso8601 : "Unknown"
+     log "Last analyze: #{table} : #{time_str}"
+   end
+
+   if @analyze && (!la || la < Time.now - 3600)
      statement = "ANALYZE #{quote_ident(table)}"
      log "Running analyze: #{statement}"
      execute(statement)
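
So a table is analyzed only when `--analyze` is set and its last analyze (manual or auto) is unknown or over an hour old; at `debug2`, the timestamp is logged either way. A small sketch of the freshness check:

```ruby
require "time"

analyze_enabled = true  # dexter --analyze
last_analyzed = Time.parse("2017-12-22 10:00:00 UTC")  # from pg_stat_user_tables

stale = last_analyzed.nil? || last_analyzed < Time.now - 3600
puts "would run ANALYZE" if analyze_enabled && stale
```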
@@ -137,6 +160,7 @@ module Dexter
  # try to parse out columns
  possible_columns = Set.new
  explainable_queries.each do |query|
+   log "Finding columns: #{query.statement}" if @log_level == "debug3"
    find_columns(query.tree).each do |col|
      last_col = col["fields"].last
      if last_col["String"]
@@ -296,72 +320,75 @@ module Dexter
  end

  def show_and_create_indexes(new_indexes, queries, tables)
+   # print summary
    if new_indexes.any?
      new_indexes.each do |index|
        log "Index found: #{index[:table]} (#{index[:columns].join(", ")})"
      end
+   else
+     log "No new indexes found"
+   end

-     if @log_level.start_with?("debug")
-       index_queries = new_indexes.flat_map { |i| i[:queries].sort_by(&:fingerprint) }
-       if @log_level == "debug2"
-         fingerprints = Set.new(index_queries.map(&:fingerprint))
-         index_queries.concat(queries.reject { |q| fingerprints.include?(q.fingerprint) }.sort_by(&:fingerprint))
-       end
-       index_queries.each do |query|
-         log "-" * 80
-         log "Query #{query.fingerprint}"
-         log "Total time: #{(query.total_time / 60000.0).round(1)} min, avg time: #{(query.total_time / query.calls.to_f).round} ms, calls: #{query.calls}" if query.total_time
-         if tables.empty?
-           log "No candidate tables for indexes"
-         elsif query.explainable? && !query.high_cost?
-           log "Low initial cost: #{query.initial_cost}"
-         elsif query.explainable?
-           query_indexes = query.indexes || []
-           log "Start: #{query.costs[0]}"
-           log "Pass1: #{query.costs[1]} : #{log_indexes(query.pass1_indexes || [])}"
-           log "Pass2: #{query.costs[2]} : #{log_indexes(query.pass2_indexes || [])}"
-           log "Final: #{query.new_cost} : #{log_indexes(query_indexes)}"
-           if query_indexes.any? && !query.suggest_index
-             log "Need 50% cost savings to suggest index"
-           end
-         elsif query.fingerprint == "unknown"
-           log "Could not parse query"
-         elsif query.tables.empty?
-           log "No tables"
-         elsif query.missing_tables
-           log "Tables not present in current database"
-         else
-           log "Could not run explain"
+   # debug info
+   if @log_level.start_with?("debug")
+     index_queries = new_indexes.flat_map { |i| i[:queries].sort_by(&:fingerprint) }
+     if @log_level == "debug2"
+       fingerprints = Set.new(index_queries.map(&:fingerprint))
+       index_queries.concat(queries.reject { |q| fingerprints.include?(q.fingerprint) }.sort_by(&:fingerprint))
+     end
+     index_queries.each do |query|
+       log "-" * 80
+       log "Query #{query.fingerprint}"
+       log "Total time: #{(query.total_time / 60000.0).round(1)} min, avg time: #{(query.total_time / query.calls.to_f).round} ms, calls: #{query.calls}" if query.total_time
+       if tables.empty?
+         log "No candidate tables for indexes"
+       elsif query.explainable? && !query.high_cost?
+         log "Low initial cost: #{query.initial_cost}"
+       elsif query.explainable?
+         query_indexes = query.indexes || []
+         log "Start: #{query.costs[0]}"
+         log "Pass1: #{query.costs[1]} : #{log_indexes(query.pass1_indexes || [])}"
+         log "Pass2: #{query.costs[2]} : #{log_indexes(query.pass2_indexes || [])}"
+         log "Final: #{query.new_cost} : #{log_indexes(query.suggest_index ? query_indexes : [])}"
+         if query_indexes.any? && !query.suggest_index
+           log "Need 50% cost savings to suggest index"
          end
-         log
-         log query.statement
-         log
+       elsif query.fingerprint == "unknown"
+         log "Could not parse query"
+       elsif query.tables.empty?
+         log "No tables"
+       elsif query.missing_tables
+         log "Tables not present in current database"
+       else
+         log "Could not run explain"
        end
+       log
+       log query.statement
+       log
      end
+   end

-     if @create
-       # 1. create lock
-       # 2. refresh existing index list
-       # 3. create indexes that still don't exist
-       # 4. release lock
-       with_advisory_lock do
-         new_indexes.each do |index|
-           unless index_exists?(index)
-             statement = "CREATE INDEX CONCURRENTLY ON #{quote_ident(index[:table])} (#{index[:columns].map { |c| quote_ident(c) }.join(", ")})"
-             log "Creating index: #{statement}"
-             started_at = Time.now
-             begin
-               execute(statement)
-               log "Index created: #{((Time.now - started_at) * 1000).to_i} ms"
-             rescue PG::LockNotAvailable => e
-               log "Could not acquire lock: #{index[:table]}"
-             end
+   # create
+   if @create && new_indexes.any?
+     # 1. create lock
+     # 2. refresh existing index list
+     # 3. create indexes that still don't exist
+     # 4. release lock
+     with_advisory_lock do
+       new_indexes.each do |index|
+         unless index_exists?(index)
+           statement = "CREATE INDEX CONCURRENTLY ON #{quote_ident(index[:table])} (#{index[:columns].map { |c| quote_ident(c) }.join(", ")})"
+           log "Creating index: #{statement}"
+           started_at = Time.now
+           begin
+             execute(statement)
+             log "Index created: #{((Time.now - started_at) * 1000).to_i} ms"
+           rescue PG::LockNotAvailable
+             log "Could not acquire lock: #{index[:table]}"
            end
          end
        end
      end
-   else
-     log "No new indexes found"
    end

    new_indexes
@@ -417,7 +444,7 @@ module Dexter
  def database_tables
    result = execute <<-SQL
      SELECT
-       table_name
+       table_schema || '.' || table_name AS table_name
      FROM
        information_schema.tables
      WHERE
@@ -439,16 +466,13 @@ module Dexter
      WHERE
        datname = current_database()
        AND total_time >= #{@min_time * 60000}
+       AND calls >= #{@min_calls}
      ORDER BY
        1
    SQL
    result.map { |q| q["query"] }
  end

- def possible_tables(queries)
-   Set.new(queries.flat_map(&:tables).uniq & database_tables)
- end
-
  def with_advisory_lock
    lock_id = 123456
    first_time = true
@@ -480,14 +504,13 @@ module Dexter
  def columns(tables)
    columns = execute <<-SQL
      SELECT
-       table_name,
+       table_schema || '.' || table_name AS table_name,
        column_name,
        data_type
      FROM
        information_schema.columns
      WHERE
-       table_schema = 'public' AND
-       table_name IN (#{tables.map { |t| quote(t) }.join(", ")})
+       table_schema || '.' || table_name IN (#{tables.map { |t| quote(t) }.join(", ")})
      ORDER BY
        1, 2
    SQL
@@ -498,8 +521,7 @@ module Dexter
  def indexes(tables)
    execute(<<-SQL
      SELECT
-       schemaname AS schema,
-       t.relname AS table,
+       schemaname || '.' || t.relname AS table,
        ix.relname AS name,
        regexp_replace(pg_get_indexdef(i.indexrelid), '^[^\\(]*\\((.*)\\)$', '\\1') AS columns,
        regexp_replace(pg_get_indexdef(i.indexrelid), '.* USING ([^ ]*) \\(.*', '\\1') AS using
@@ -512,8 +534,7 @@ module Dexter
      LEFT JOIN
        pg_stat_user_indexes ui ON ui.indexrelid = i.indexrelid
      WHERE
-       t.relname IN (#{tables.map { |t| quote(t) }.join(", ")}) AND
-       schemaname IS NOT NULL AND
+       schemaname || '.' || t.relname IN (#{tables.map { |t| quote(t) }.join(", ")}) AND
        indisvalid = 't' AND
        indexprs IS NULL AND
        indpred IS NULL
@@ -523,8 +544,12 @@ module Dexter
    ).map { |v| v["columns"] = v["columns"].sub(") WHERE (", " WHERE ").split(", ").map { |c| unquote(c) }; v }
  end

+ def search_path
+   execute("SHOW search_path")[0]["search_path"].split(",").map(&:strip)
+ end
+
  def unquote(part)
-   if part && part.start_with?('"')
+   if part && part.start_with?('"') && part.end_with?('"')
      part[1..-2]
    else
      part
@@ -532,7 +557,7 @@ module Dexter
  end

  def quote_ident(value)
-   conn.quote_ident(value)
+   value.split(".").map { |v| conn.quote_ident(v) }.join(".")
  end

  def quote(value)
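
Splitting on `.` before quoting is what keeps schema-qualified names valid once tables are tracked as `schema.table`. A sketch of the behavior with a simplified quoting rule (the real code delegates each part to the `pg` gem's `quote_ident`):

```ruby
# Simplified stand-in for PG::Connection#quote_ident: wrap each part in
# double quotes and double any embedded quotes.
def quote_ident(value)
  value.split(".").map { |v| %("#{v.gsub('"', '""')}") }.join(".")
end

puts quote_ident("public.ratings")  # => "public"."ratings"
puts quote_ident("ratings")         # => "ratings"
```

Without the split, the whole string would be quoted as one identifier (`"public.ratings"`), which names a table literally containing a dot.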
data/lib/dexter/log_parser.rb CHANGED
@@ -1,6 +1,6 @@
  module Dexter
    class LogParser
-     REGEX = /duration: (\d+\.\d+) ms (statement|execute <unnamed>): (.+)/
+     REGEX = /duration: (\d+\.\d+) ms (statement|execute <unnamed>|parse <unnamed>): (.+)/
      LINE_SEPERATOR = ": ".freeze

      def initialize(logfile, collector)
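
The widened pattern also matches `parse <unnamed>` entries, which clients using the extended query protocol emit under `log_min_duration_statement`. For example:

```ruby
REGEX = /duration: (\d+\.\d+) ms (statement|execute <unnamed>|parse <unnamed>): (.+)/

line = "duration: 5.123 ms parse <unnamed>: SELECT * FROM ratings WHERE user_id = $1"
if (m = REGEX.match(line))
  puts m[1]  # => 5.123 (duration in ms)
  puts m[2]  # => parse <unnamed>
  puts m[3]  # => the statement itself
end
```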
@@ -22,7 +22,7 @@ module Dexter
      end
    end

-   if !active_line && m = REGEX.match(line.chomp)
+   if !active_line && (m = REGEX.match(line.chomp))
      duration = m[1].to_f
      active_line = m[3]
    end
data/lib/dexter/processor.rb CHANGED
@@ -5,8 +5,14 @@ module Dexter
  def initialize(logfile, options)
    @logfile = logfile

-   @collector = Collector.new(min_time: options[:min_time])
-   @log_parser = LogParser.new(logfile, @collector)
+   @collector = Collector.new(min_time: options[:min_time], min_calls: options[:min_calls])
+   @log_parser =
+     if options[:input_format] == "csv"
+       CsvLogParser.new(logfile, @collector)
+     else
+       LogParser.new(logfile, @collector)
+     end
+
    @indexer = Indexer.new(options)

    @starting_interval = 3
data/lib/dexter/query.rb CHANGED
@@ -1,6 +1,7 @@
  module Dexter
    class Query
      attr_reader :statement, :fingerprint, :plans
+     attr_writer :tables
      attr_accessor :missing_tables, :new_cost, :total_time, :calls, :indexes, :suggest_index, :pass1_indexes, :pass2_indexes

      def initialize(statement, fingerprint = nil)
@@ -13,7 +14,15 @@ module Dexter
    end

    def tables
-     @tables ||= parse ? parse.tables : []
+     @tables ||= begin
+       parse ? parse.tables : []
+     rescue => e
+       # possible pg_query bug
+       $stderr.puts "Error extracting tables. Please report to https://github.com/ankane/dexter/issues"
+       $stderr.puts "#{e.class.name}: #{e.message}"
+       $stderr.puts statement
+       []
+     end
    end

    def tree
data/lib/dexter/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Dexter
-   VERSION = "0.2.1"
+   VERSION = "0.3.0"
  end
data/lib/dexter.rb CHANGED
@@ -10,5 +10,6 @@ require "dexter/client"
  require "dexter/collector"
  require "dexter/indexer"
  require "dexter/log_parser"
+ require "dexter/csv_log_parser"
  require "dexter/processor"
  require "dexter/query"
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: pgdexter
  version: !ruby/object:Gem::Version
-   version: 0.2.1
+   version: 0.3.0
  platform: ruby
  authors:
  - Andrew Kane
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2017-09-02 00:00:00.000000000 Z
+ date: 2017-12-23 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: slop
@@ -115,6 +115,7 @@ files:
  - lib/dexter.rb
  - lib/dexter/client.rb
  - lib/dexter/collector.rb
+ - lib/dexter/csv_log_parser.rb
  - lib/dexter/indexer.rb
  - lib/dexter/log_parser.rb
  - lib/dexter/logging.rb
@@ -141,7 +142,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
      version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.6.11
+ rubygems_version: 2.6.13
  signing_key:
  specification_version: 4
  summary: The automatic indexer for Postgres