sequel 5.18.0 → 5.19.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 8f181e0ec05e457e72b2517202bbd159a6a066c9dd946201de44576b64bca302
- data.tar.gz: f554f6268798f72c667851c6d952f960564e70ebb1e2d3d1208db74bbcd98834
+ metadata.gz: 96bc5f24c2b61f558037958fedfa152895eaef5e66b50999e7da139763bf0436
+ data.tar.gz: c8376d13d1204c5c2ba969ed18d60c72fdbabd3dd9a7aa8c7729240b33678e84
  SHA512:
- metadata.gz: 474c441a62bb901abaf12968ec62cdd239f9679ce4cb1743cd58e7ec6032cb1e26d128fd49bbe7c3edfce9a0fb63b33975aa6426ebb8ee79f1309a09967ffef5
- data.tar.gz: 8717400fcb03235ad51ab208d35e9accca05018c6531da025bbcefb8f4b557345e1afbbfd65862f2c213ca4b05826db72c5ecc6b8abdda5920af83dcd8fe35d9
+ metadata.gz: 1f2153dde7f8e53f24ba06dcb45155ed822d3b2a25485d03dcda52df80e2fa35a0b8f42e08cb2854db838585992226a90830ce4b37ef89939c7cbbffbf308abe
+ data.tar.gz: af2107662ee35e87fa0089f1a66c8e621c5ab738ff53e1a35fbb7572c5137fae4bce53c2b716ec1b72b873a9415a4458eb96dd01e08be421a1b23f7d3fd4d111
data/CHANGELOG CHANGED
@@ -1,3 +1,19 @@
+ === 5.19.0 (2019-04-02)
+
+ * Use more optimized approach to merging hashes in ruby 2.5+ (jeremyevans)
+
+ * Use SQLite extended result codes when using ruby-sqlite3 1.4.0+ (jeremyevans)
+
+ * Recognize additional SQLite extended result codes in the shared sqlite adapter (jeremyevans)
+
+ * Add Database#rename_enum_value to the pg_enum extension (AlexWayfer) (#1603)
+
+ * Make Database#drop_table delete constraint validations metadata for that table if using the constraint_validations extension (jeremyevans)
+
+ * Speed up row fetching in the sqlite adapter (jeremyevans)
+
+ * Speed up row fetching and type conversion in the sqlanywhere adapter (jeremyevans)
+
  === 5.18.0 (2019-03-01)

  * Use singleton .call methods on plain objects instead of procs/methods for faster type conversion (jeremyevans)
@@ -187,9 +187,8 @@ For Derby, H2, HSQLDB, JTDS, MySQL, Postgres, SQLite3
  the adapters can use the `jdbc-*` gem, for the others you need to have the `.jar` in your CLASSPATH
  or load the Java class manually before calling Sequel.connect.

-
  Note that when using a JDBC adapter, the best way to use Sequel
- is via Sequel.connect, NOT Sequel.jdbc. Use the JDBC connection
+ is via Sequel.connect using a connection string, NOT Sequel.jdbc. Use the JDBC connection
  string when connecting, which will be in a different format than
  the native connection string. The connection string should start
  with 'jdbc:'. For PostgreSQL, use 'jdbc:postgresql:', and for
@@ -201,6 +200,10 @@ Sequel does no preprocessing of JDBC connection strings, it passes them directly
  So if you have problems getting a connection string to work, look up the
  documentation for the JDBC driver.

+ The jdbc adapter does not handle common options such as +:host+,
+ +:user+, and +:port+. If you must use a hash of options when connecting,
+ provide the full JDBC connection string as the :uri option.
+
  Example connection strings:

  jdbc:sqlite::memory:
@@ -0,0 +1,28 @@
+ = New Features
+
+ * A Database#rename_enum_value method has been added to the pg_enum
+ extension. It is supported on PostgreSQL 10+:
+
+ DB.rename_enum_value(:enum_type, 'old_name', 'new_name')
+
+ = Other Improvements
+
+ * The performance of row fetching and type conversion in the
+ sqlanywhere adapter has been improved.
+
+ * The performance of row fetching in the sqlite adapter has been
+ improved.
+
+ * Calling Database#drop_table now drops any constraint validations
+ metadata for the table if using the constraint_validations
+ extension. However, modifying the table using Database#alter_table
+ does not affect the constraint validations metadata.
+
+ * The sqlite adapter when used with ruby-sqlite3 1.4.0+ now uses
+ SQLite extended result codes for a more accurate determination of
+ specific database errors types.
+
+ * Performance for typecasting to decimal and floats has been improved
+ slightly.
+
+ * Performance when merging hashes has been improved slightly.
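The hash-merging improvement referenced in these release notes shows up throughout the code changes below as `Hash[opts].merge!(...)` becoming `opts.merge(...)`. A minimal sketch of the two equivalent patterns (the option keys here are illustrative, not Sequel's actual ones):

```ruby
opts = {server: :default}.freeze

# Older pattern: Hash[] copies the possibly-frozen hash, merge! mutates the copy.
old_style = Hash[opts].merge!(stmt: :insert)

# Newer pattern: Hash#merge returns a new hash directly, skipping the
# intermediate Hash[] copy; both leave the original opts untouched.
new_style = opts.merge(stmt: :insert)
```

Both produce the same result, so the change is purely an allocation/performance refactoring.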
@@ -268,3 +268,15 @@ the Database, just remove that option. If you are setting:
  Sequel.single_threaded = true

  just remove or comment out that code.
+
+ == JDBC
+
+ If you are using the jdbc adapter, note that it does not handle separate
+ options such as +:host+, +:user+, and +:port+. If you would like to use
+ the +:servers+ option when connecting to a JDBC database, each hash value in
+ the +servers+ option should contain a +:uri+ key with a JDBC connection string
+ for that shard as the value. Example:
+
+ DB=Sequel.connect('jdbc:postgresql://primary_server/database',
+ servers: {read_only: {uri: 'jdbc:postgresql://replica_server/database'}})
+
@@ -254,7 +254,9 @@ module Sequel
  log_connection_yield(sql, conn){stmt.execute(sql)}
  when :insert
  log_connection_yield(sql, conn){execute_statement_insert(stmt, sql)}
- last_insert_id(conn, Hash[opts].merge!(:stmt=>stmt))
+ opts = Hash[opts]
+ opts[:stmt] = stmt
+ last_insert_id(conn, opts)
  else
  log_connection_yield(sql, conn){stmt.executeUpdate(sql)}
  end
@@ -449,7 +451,10 @@ module Sequel
  log_connection_yield(msg, conn, args){cps.execute}
  when :insert
  log_connection_yield(msg, conn, args){execute_prepared_statement_insert(cps)}
- last_insert_id(conn, Hash[opts].merge!(:prepared=>true, :stmt=>cps))
+ opts = Hash[opts]
+ opts[:prepared] = true
+ opts[:stmt] = cps
+ last_insert_id(conn, opts)
  else
  log_connection_yield(msg, conn, args){cps.executeUpdate}
  end
@@ -1733,7 +1733,9 @@ module Sequel
  def _import(columns, values, opts=OPTS)
  if @opts[:returning]
  statements = multi_insert_sql(columns, values)
- @db.transaction(Hash[opts].merge!(:server=>@opts[:server])) do
+ trans_opts = Hash[opts]
+ trans_opts[:server] = @opts[:server]
+ @db.transaction(trans_opts) do
  statements.map{|st| returning_fetch_rows(st)}
  end.first.map{|v| v.length == 1 ? v.values.first : v}
  elsif opts[:return] == :primary_key
@@ -323,7 +323,7 @@ module Sequel
  case sqlite_error_code(exception)
  when 1299
  NotNullConstraintViolation
- when 2067
+ when 1555, 2067, 2579
  UniqueConstraintViolation
  when 787
  ForeignKeyConstraintViolation
@@ -331,6 +331,8 @@ module Sequel
  CheckConstraintViolation
  when 19
  ConstraintViolation
+ when 517
+ SerializationFailure
  else
  super
  end
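The newly recognized codes follow SQLite's extended result code scheme: the primary result code occupies the low byte and the refinement lives in the higher bits. A quick sanity check of the mapping, using constants from SQLite's C API documentation:

```ruby
SQLITE_CONSTRAINT = 19

# Extended constraint codes are SQLITE_CONSTRAINT plus a sub-code
# shifted left by 8 bits (per SQLite's result-code documentation).
primary_key = SQLITE_CONSTRAINT | (6 << 8)   # SQLITE_CONSTRAINT_PRIMARYKEY
unique      = SQLITE_CONSTRAINT | (8 << 8)   # SQLITE_CONSTRAINT_UNIQUE
rowid       = SQLITE_CONSTRAINT | (10 << 8)  # SQLITE_CONSTRAINT_ROWID

# These are exactly the codes the case expression above maps to
# UniqueConstraintViolation; masking off the high bits recovers
# the generic SQLITE_CONSTRAINT primary code.
codes = [primary_key, unique, rowid]
```

This is why all three codes can share one `when` branch: they are refinements of the same primary constraint error.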
@@ -17,21 +17,34 @@ module Sequel
  end
  end

- tt = Class.new do
- def blob(s) ::Sequel::SQL::Blob.new(s) end
- def boolean(s) s.to_i != 0 end
- def date(s) ::Date.strptime(s) end
- def decimal(s) BigDecimal(s) end
- def time(s) ::Sequel.string_to_time(s) end
- end.new
+ boolean = Object.new
+ def boolean.call(s)
+ s.to_i != 0
+ end
+
+ date = Object.new
+ def date.call(s)
+ ::Date.strptime(s)
+ end
+
+ decimal = Object.new
+ class << decimal
+ alias call BigDecimal
+ public :call
+ end
+
+ time = Object.new
+ def time.call(s)
+ ::Sequel.string_to_time(s)
+ end

  SQLANYWHERE_TYPES = {}
  {
- [0, 484] => tt.method(:decimal),
- [384] => tt.method(:date),
- [388] => tt.method(:time),
- [500] => tt.method(:boolean),
- [524, 528] => tt.method(:blob)
+ [0, 484] => decimal,
+ [384] => date,
+ [388] => time,
+ [500] => boolean,
+ [524, 528] => ::Sequel::SQL::Blob
  }.each do |k,v|
  k.each{|n| SQLANYWHERE_TYPES[n] = v}
  end
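The replacement pattern here is a plain Object carrying a singleton `call` method, instead of `Method` objects obtained via `tt.method(...)`. Anything responding to `#call` works as a conversion proc, and the changelog attributes the faster type conversion to this. A standalone sketch of the pattern:

```ruby
# A plain object with a singleton call method, interchangeable with a
# proc or Method object anywhere #call is the only interface used.
boolean = Object.new
def boolean.call(s)
  s.to_i != 0
end

# Used like any conversion proc:
converted = ["0", "1", "2"].map { |s| boolean.call(s) }
```

Note also that `[524, 528] => ::Sequel::SQL::Blob` works for the same reason: classes respond to `.call`? No — here `Blob` is a `String` subclass and Sequel defines `.call` on it; the key point is that every value in the hash uniformly answers `#call`.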
@@ -153,17 +166,20 @@ module Sequel
  else
  cps[type]
  end
- col_infos << [i, output_identifier(name), cp]
+ col_infos << [output_identifier(name), cp]
  end

- self.columns = col_infos.map{|a| a[1]}
+ self.columns = col_infos.map(&:first)
+ max = col_infos.length

  if rs
  while api.sqlany_fetch_next(rs) == 1
+ i = -1
  h = {}
- col_infos.each do |i, name, cp|
- _, v = api.sqlany_get_column(rs, i)
- h[name] = cp && v ? cp[v] : v
+ while (i+=1) < max
+ name, cp = col_infos[i]
+ v = api.sqlany_get_column(rs, i)[1]
+ h[name] = cp && v ? cp.call(v) : v
  end
  yield h
  end
@@ -5,7 +5,7 @@ require_relative 'shared/sqlite'

  module Sequel
  module SQLite
- FALSE_VALUES = (%w'0 false f no n' + [0]).freeze
+ FALSE_VALUES = (%w'0 false f no n'.each(&:freeze) + [0]).freeze

  blob = Object.new
  def blob.call(s)
@@ -77,6 +77,10 @@ module Sequel
  k.each{|n| SQLITE_TYPES[n] = v}
  end
  SQLITE_TYPES.freeze
+
+ sqlite_version = SQLite3::VERSION.split('.').map(&:to_i)[0..1]
+ sqlite_version = sqlite_version[0] * 100 + sqlite_version[1]
+ USE_EXTENDED_RESULT_CODES = sqlite_version >= 104

  class Database < Sequel::Database
  include ::Sequel::SQLite::DatabaseMethods
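The version check above packs the first two segments of ruby-sqlite3's version string into a single integer, so 1.4.0 becomes 104, the first release supporting `extended_result_codes=`. A sketch of the computation with assumed version strings:

```ruby
# Pack "MAJOR.MINOR[.TINY]" as MAJOR*100 + MINOR, mirroring the
# USE_EXTENDED_RESULT_CODES computation above; the tiny segment
# is intentionally discarded by the [0..1] slice.
def packed_version(version_string)
  major, minor = version_string.split('.').map(&:to_i)[0..1]
  major * 100 + minor
end

# 1.4.x and later enable extended result codes; 1.3.x does not.
use_extended = packed_version("1.4.2") >= 104
```

One caveat of this scheme: it assumes minor versions stay below 100, which holds for ruby-sqlite3's release history.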
@@ -109,6 +113,10 @@ module Sequel
  sqlite3_opts[:readonly] = typecast_value_boolean(opts[:readonly]) if opts.has_key?(:readonly)
  db = ::SQLite3::Database.new(opts[:database].to_s, sqlite3_opts)
  db.busy_timeout(opts.fetch(:timeout, 5000))
+
+ if USE_EXTENDED_RESULT_CODES
+ db.extended_result_codes = true
+ end

  connection_pragmas.each{|s| log_connection_yield(s, db){db.execute_batch(s)}}

@@ -279,13 +287,12 @@ module Sequel
  Dataset
  end

- # Support SQLite exception codes if ruby-sqlite3 supports them.
- # This is disabled by default because ruby-sqlite3 doesn't currently
- # support them (returning nil), and even if it did, it doesn't support
- # extended error codes, which would lead to worse behavior.
- #def sqlite_error_code(exception)
- # exception.code if exception.respond_to?(:code)
- #end
+ if USE_EXTENDED_RESULT_CODES
+ # Support SQLite exception codes if ruby-sqlite3 supports them.
+ def sqlite_error_code(exception)
+ exception.code if exception.respond_to?(:code)
+ end
+ end
  end

  class Dataset < Sequel::Dataset
@@ -318,18 +325,18 @@

  def fetch_rows(sql)
  execute(sql) do |result|
- i = -1
  cps = db.conversion_procs
  type_procs = result.types.map{|t| cps[base_type_name(t)]}
- cols = result.columns.map{|c| i+=1; [output_identifier(c), i, type_procs[i]]}
- max = i+1
+ j = -1
+ cols = result.columns.map{|c| [output_identifier(c), type_procs[(j+=1)]]}
  self.columns = cols.map(&:first)
+ max = cols.length
  result.each do |values|
  row = {}
  i = -1
  while (i += 1) < max
- name, id, type_proc = cols[i]
- v = values[id]
+ name, type_proc = cols[i]
+ v = values[i]
  if type_proc && v
  v = type_proc.call(v)
  end
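After this change, a column's position in `cols` matches its position in each row of values, so the per-column index stored in the old `[name, id, type_proc]` triple becomes redundant. A self-contained sketch of the new pairing loop (the data here is illustrative, not the adapter's actual rows):

```ruby
# Columns and row values share the same ordering, so the loop index
# pairs them without a stored per-column id.
cols = [[:id, nil], [:name, ->(v) { v.upcase }]]
values = [1, "alice"]

row = {}
i = -1
while (i += 1) < cols.length
  name, type_proc = cols[i]
  v = values[i]
  # Apply the conversion proc only when one exists and the value is non-nil.
  row[name] = type_proc && v ? type_proc.call(v) : v
end
```

The `while` loop with a manual counter avoids the block allocation and per-element yields of `each`, which is the kind of micro-optimization the changelog's "speed up row fetching" entries refer to.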
@@ -90,11 +90,6 @@ class Sequel::ConnectionPool
  # connection object (and server argument if the callable accepts 2 arguments),
  # useful for customizations that you want to apply to all connections.
  # :connect_sqls :: An array of sql strings to execute on each new connection, after :after_connect runs.
- # :preconnect :: Automatically create the maximum number of connections, so that they don't
- # need to be created as needed. This is useful when connecting takes a long time
- # and you want to avoid possible latency during runtime.
- # Set to :concurrently to create the connections in separate threads. Otherwise
- # they'll be created sequentially.
  def initialize(db, opts=OPTS)
  @db = db
  @after_connect = opts[:after_connect]
@@ -106,8 +106,11 @@ module Sequel
  # :log_connection_info :: Whether connection information should be logged when logging queries.
  # :log_warn_duration :: The number of elapsed seconds after which queries should be logged at warn level.
  # :name :: A name to use for the Database object, displayed in PoolTimeout .
- # :preconnect :: Whether to setup the maximum number of connections during initialization.
- # Can use a value of 'concurrently' to preconnect in separate threads.
+ # :preconnect :: Automatically create the maximum number of connections, so that they don't
+ # need to be created as needed. This is useful when connecting takes a long time
+ # and you want to avoid possible latency during runtime.
+ # Set to :concurrently to create the connections in separate threads. Otherwise
+ # they'll be created sequentially.
  # :preconnect_extensions :: Similar to the :extensions option, but loads the extensions before the
  # connections are made by the :preconnect option.
  # :quote_identifiers :: Whether to quote identifiers.
@@ -115,7 +118,9 @@ module Sequel
  # :single_threaded :: Whether to use a single-threaded connection pool.
  # :sql_log_level :: Method to use to log SQL to a logger, :info by default.
  #
- # All options given are also passed to the connection pool.
+ # All options given are also passed to the connection pool. Additional options respected by
+ # the connection pool are :after_connect, :connect_sqls, :max_connections, :pool_timeout,
+ # :servers, and :servers_hash. See the connection pool documentation for details.
  def initialize(opts = OPTS)
  @opts ||= opts
  @opts = connection_pool_default_options.merge(@opts)
@@ -473,9 +478,7 @@ module Sequel

  if RUBY_VERSION >= '2.4'
  # Typecast a string to a BigDecimal
- def _typecast_value_string_to_decimal(value)
- BigDecimal(value)
- end
+ alias _typecast_value_string_to_decimal BigDecimal
  else
  # :nocov:
  def _typecast_value_string_to_decimal(value)
@@ -510,9 +513,7 @@ module Sequel
  end

  # Typecast the value to a Float
- def typecast_value_float(value)
- Float(value)
- end
+ alias typecast_value_float Float

  # Typecast the value to an Integer
  def typecast_value_integer(value)
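Both aliases above rely on `Float` and `BigDecimal` being private `Kernel` instance methods available on every object, so `alias` can expose them directly as one-argument typecast methods with no wrapper frame. A minimal sketch of the trick (the class name is illustrative; aliasing `BigDecimal` additionally needs `require 'bigdecimal'`):

```ruby
class FloatCaster
  # Kernel#Float is a private instance method on every object; aliasing
  # it exposes the conversion under a new name without a wrapper method.
  alias to_float Float
  public :to_float  # the alias keeps Kernel#Float's private visibility

  # In Sequel the aliased typecast methods are private anyway, so no
  # visibility change is needed there.
end

caster = FloatCaster.new
```

Calling `caster.to_float("1.5")` then dispatches straight into the C implementation of `Kernel#Float`, which is what makes the typecasting "slightly" faster per the release notes.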
@@ -18,7 +18,7 @@ module Sequel
  where_all where_each where_single_value
  METHS

- # The clone options to use when retriveing columns for a dataset.
+ # The clone options to use when retrieving columns for a dataset.
  COLUMNS_CLONE_OPTIONS = {:distinct => nil, :limit => 1, :offset=>nil, :where=>nil, :having=>nil, :order=>nil, :row_proc=>nil, :graph=>nil, :eager_graph=>nil}.freeze

  # Inserts the given argument into the database. Returns self so it
@@ -358,7 +358,7 @@ module Sequel

  # Inserts values into the associated table. The returned value is generally
  # the value of the autoincremented primary key for the inserted row, assuming that
- # the a single row is inserted and the table has an autoincrementing primary key.
+ # a single row is inserted and the table has an autoincrementing primary key.
  #
  # +insert+ handles a number of different argument formats:
  # no arguments or single empty hash :: Uses DEFAULT VALUES
@@ -486,7 +486,7 @@ module Sequel
  import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts)
  end

- # Yields each row in the dataset, but interally uses multiple queries as needed to
+ # Yields each row in the dataset, but internally uses multiple queries as needed to
  # process the entire result set without keeping all rows in the dataset in memory,
  # even if the underlying driver buffers all query results in memory.
  #
@@ -512,7 +512,7 @@ module Sequel
  # NULLs. Note that some Sequel adapters have optimized implementations that will
  # use cursors or streaming regardless of the :strategy option used.
  # :filter_values :: If the strategy: :filter option is used, this option should be a proc
- # that accepts the last retreived row for the previous page and an array of
+ # that accepts the last retrieved row for the previous page and an array of
  # ORDER BY expressions, and returns an array of values relating to those
  # expressions for the last retrieved row. You will need to use this option
  # if your ORDER BY expressions are not simple columns, if they contain
@@ -971,7 +971,8 @@ module Sequel
  # separate insert commands for each row. Otherwise, call #multi_insert_sql
  # and execute each statement it gives separately.
  def _import(columns, values, opts)
- trans_opts = Hash[opts].merge!(:server=>@opts[:server])
+ trans_opts = Hash[opts]
+ trans_opts[:server] = @opts[:server]
  if opts[:return] == :primary_key
  @db.transaction(trans_opts){values.map{|v| insert(columns, v)}}
  else
@@ -21,7 +21,7 @@ module Sequel
  raise Error, "cannot call add_graph_aliases on a dataset that has not been called with graph or set_graph_aliases"
  end
  columns, graph_aliases = graph_alias_columns(graph_aliases)
- select_append(*columns).clone(:graph => Hash[graph].merge!(:column_aliases=>Hash[ga].merge!(graph_aliases).freeze).freeze)
+ select_append(*columns).clone(:graph => graph.merge(:column_aliases=>ga.merge(graph_aliases).freeze).freeze)
  end

  # Similar to Dataset#join_table, but uses unambiguous aliases for selected
@@ -244,7 +244,7 @@ module Sequel
  def set_graph_aliases(graph_aliases)
  columns, graph_aliases = graph_alias_columns(graph_aliases)
  if graph = opts[:graph]
- select(*columns).clone(:graph => Hash[graph].merge!(:column_aliases=>graph_aliases.freeze).freeze)
+ select(*columns).clone(:graph => graph.merge(:column_aliases=>graph_aliases.freeze).freeze)
  else
  raise Error, "cannot call #set_graph_aliases on an ungraphed dataset"
  end
@@ -329,7 +329,7 @@ module Sequel
  # # => {:id=>1}
  def bind(bind_vars=OPTS)
  bind_vars = if bv = @opts[:bind_vars]
- Hash[bv].merge!(bind_vars).freeze
+ bv.merge(bind_vars).freeze
  else
  if bind_vars.frozen?
  bind_vars
@@ -130,6 +130,10 @@
  # readd all constraints you want to use inside the alter table block,
  # making no other changes inside the alter_table block.
  #
+ # Dropping a table will automatically delete all constraint validations for
+ # that table. However, altering a table (e.g. to drop a column) will not
+ # currently make any changes to the constraint validations metadata.
+ #
  # Related module: Sequel::ConstraintValidations

  #
@@ -264,6 +268,16 @@ module Sequel
  end
  end

+ # Drop all constraint validations for a table if dropping the table.
+ def drop_table(*names)
+ names.each do |name|
+ if !name.is_a?(Hash) && table_exists?(constraint_validations_table)
+ drop_constraint_validations_for(:table=>name)
+ end
+ end
+ super
+ end
+
  # Drop the constraint validations table.
  def drop_constraint_validations_table
  drop_table(constraint_validations_table)
@@ -17,6 +17,12 @@
  #
  # DB.rename_enum(:enum_type_name, :enum_type_another_name)
  #
+ # If you want to rename an enum value, you can use rename_enum_value:
+ #
+ # DB.rename_enum_value(
+ # :enum_type_name, :enum_value_name, :enum_value_another_name
+ # )
+ #
  # If you want to drop an enum type, you can use drop_enum:
  #
  # DB.drop_enum(:enum_type_name)
@@ -86,26 +92,24 @@ module Sequel
  elsif v = opts[:after]
  sql << " AFTER #{literal(v.to_s)}"
  end
- run sql
- parse_enum_labels
- nil
+ _process_enum_change_sql(sql)
  end

  # Run the SQL to create an enum type with the given name and values.
  def create_enum(enum, values)
- sql = "CREATE TYPE #{quote_schema_table(enum)} AS ENUM (#{values.map{|v| literal(v.to_s)}.join(', ')})"
- run sql
- parse_enum_labels
- nil
+ _process_enum_change_sql("CREATE TYPE #{quote_schema_table(enum)} AS ENUM (#{values.map{|v| literal(v.to_s)}.join(', ')})")
  end

  # Run the SQL to rename the enum type with the given name
  # to the another given name.
  def rename_enum(enum, new_name)
- sql = "ALTER TYPE #{quote_schema_table(enum)} RENAME TO #{quote_schema_table(new_name)}"
- run sql
- parse_enum_labels
- nil
+ _process_enum_change_sql("ALTER TYPE #{quote_schema_table(enum)} RENAME TO #{quote_schema_table(new_name)}")
+ end
+
+ # Run the SQL to rename the enum value with the given name
+ # to the another given name.
+ def rename_enum_value(enum, old_name, new_name)
+ _process_enum_change_sql("ALTER TYPE #{quote_schema_table(enum)} RENAME VALUE #{literal(old_name.to_s)} TO #{literal(new_name.to_s)}")
  end

  # Run the SQL to drop the enum type with the given name.
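For reference, the statement `rename_enum_value` builds has the following shape. The sketch below reimplements the interpolation with simplified quoting (Sequel's `quote_schema_table` and `literal` handle identifier and string quoting properly, and PostgreSQL accepts `ALTER TYPE ... RENAME VALUE` only on 10+):

```ruby
# Simplified stand-in for the SQL-building line above: double quotes
# around the type identifier, single quotes around the value labels.
def rename_enum_value_sql(enum, old_name, new_name)
  "ALTER TYPE \"#{enum}\" RENAME VALUE '#{old_name}' TO '#{new_name}'"
end

sql = rename_enum_value_sql(:status, :active, :enabled)
```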
@@ -113,14 +117,18 @@ module Sequel
113
117
  # :if_exists :: Do not raise an error if the enum type does not exist
114
118
  # :cascade :: Also drop other objects that depend on the enum type
115
119
  def drop_enum(enum, opts=OPTS)
116
- sql = "DROP TYPE#{' IF EXISTS' if opts[:if_exists]} #{quote_schema_table(enum)}#{' CASCADE' if opts[:cascade]}"
117
- run sql
118
- parse_enum_labels
119
- nil
120
+ _process_enum_change_sql("DROP TYPE#{' IF EXISTS' if opts[:if_exists]} #{quote_schema_table(enum)}#{' CASCADE' if opts[:cascade]}")
120
121
  end
121
122
 
122
123
  private
123
124
 
125
+ # Run the SQL on the database, reparsing the enum labels after it is run.
126
+ def _process_enum_change_sql(sql)
127
+ run(sql)
128
+ parse_enum_labels
129
+ nil
130
+ end
131
+
124
132
  # Parse the pg_enum table to get enum values, and
125
133
  # the pg_type table to get names and array oids for
126
134
  # enums.
@@ -2129,7 +2129,7 @@ module Sequel
2129
2129
  graph_cks = opts[:graph_keys]
2130
2130
  opts[:eager_grapher] ||= proc do |eo|
2131
2131
  ds = eo[:self]
2132
- ds.graph(eager_graph_dataset(opts, eo), use_only_conditions ? only_conditions : opts.primary_keys.zip(graph_cks) + conditions, Hash[eo].merge!(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep, :from_self_alias=>eo[:from_self_alias]), &graph_block)
2132
+ ds.graph(eager_graph_dataset(opts, eo), use_only_conditions ? only_conditions : opts.primary_keys.zip(graph_cks) + conditions, eo.merge(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep), &graph_block)
2133
2133
  end
2134
2134
 
2135
2135
  return if opts[:read_only]
@@ -2189,7 +2189,7 @@ module Sequel
2189
2189
  graph_block = opts[:graph_block]
2190
2190
  opts[:eager_grapher] ||= proc do |eo|
2191
2191
  ds = eo[:self]
2192
- ds = ds.graph(opts.apply_eager_graph_limit_strategy(eo[:limit_strategy], eager_graph_dataset(opts, eo)), use_only_conditions ? only_conditions : cks.zip(pkcs) + conditions, Hash[eo].merge!(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep, :from_self_alias=>eo[:from_self_alias]), &graph_block)
2192
+ ds = ds.graph(opts.apply_eager_graph_limit_strategy(eo[:limit_strategy], eager_graph_dataset(opts, eo)), use_only_conditions ? only_conditions : cks.zip(pkcs) + conditions, eo.merge(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep), &graph_block)
2193
2193
  # We only load reciprocals for one_to_many associations, as other reciprocals don't make sense
2194
2194
  ds.opts[:eager_graph][:reciprocals][eo[:table_alias]] = opts.reciprocal
2195
2195
  ds
@@ -2908,7 +2908,7 @@ module Sequel
2908
2908
  def eager(*associations)
2909
2909
  opts = @opts[:eager]
2910
2910
  association_opts = eager_options_for_associations(associations)
2911
- opts = opts ? Hash[opts].merge!(association_opts) : association_opts
2911
+ opts = opts ? opts.merge(association_opts) : association_opts
2912
2912
  clone(:eager=>opts.freeze)
2913
2913
  end
2914
2914
 
@@ -1517,7 +1517,7 @@ module Sequel
1517
1517
  # # Sequel::Error raised
1518
1518
  def set_fields(hash, fields, opts=nil)
1519
1519
  opts = if opts
1520
- Hash[model.default_set_fields_options].merge!(opts)
1520
+ model.default_set_fields_options.merge(opts)
1521
1521
  else
1522
1522
  model.default_set_fields_options
1523
1523
  end
@@ -279,7 +279,7 @@ module Sequel
279
279
  # obj.json_serializer_opts(only: :name)
280
280
  # [obj].to_json # => '[{"name":"..."}]'
281
281
  def json_serializer_opts(opts=OPTS)
282
- @json_serializer_opts = Hash[@json_serializer_opts||OPTS].merge!(opts)
282
+ @json_serializer_opts = (@json_serializer_opts||OPTS).merge(opts)
283
283
  end
284
284
 
285
285
  # Return a string in JSON format. Accepts the following
@@ -301,9 +301,9 @@ module Sequel
301
301
  # use an underscored version of the model's name.
302
302
  def to_json(*a)
303
303
  opts = model.json_serializer_opts
304
- opts = Hash[opts].merge!(@json_serializer_opts) if @json_serializer_opts
304
+ opts = opts.merge(@json_serializer_opts) if @json_serializer_opts
305
305
  if (arg_opts = a.first).is_a?(Hash)
306
- opts = Hash[opts].merge!(arg_opts)
306
+ opts = opts.merge(arg_opts)
307
307
  a = []
308
308
  end
309
309
 
@@ -155,7 +155,7 @@ module Sequel
155
155
  def set_nested_attributes(assoc, obj, opts=OPTS)
156
156
  raise(Error, "no association named #{assoc} for #{model.inspect}") unless ref = model.association_reflection(assoc)
157
157
  raise(Error, "nested attributes are not enabled for association #{assoc} for #{model.inspect}") unless meta = ref[:nested_attributes]
158
- meta = Hash[meta].merge!(opts)
158
+ meta = meta.merge(opts)
159
159
  meta[:reflection] = ref
160
160
  if ref.returns_array?
161
161
  nested_attributes_list_setter(meta, obj)
@@ -337,8 +337,10 @@ module Sequel
337
337
  end
338
338
  opts[:eager_loader] ||= proc do |eo|
339
339
  id_map = eo[:id_map]
340
+ eo = Hash[eo]
341
+ eo[:loader] = false
340
342
 
341
- eager_load_results(opts, Hash[eo].merge!(:loader=>false)) do |assoc_record|
343
+ eager_load_results(opts, eo) do |assoc_record|
342
344
  if pks ||= assoc_record.get_column_value(key)
343
345
  pks.each do |pkv|
344
346
  next unless objects = id_map[pkv]
@@ -374,7 +376,7 @@ module Sequel
374
376
 
375
377
  opts[:eager_grapher] ||= proc do |eo|
376
378
  ds = eo[:self]
377
- ds = ds.graph(eager_graph_dataset(opts, eo), conditions, Hash[eo].merge!(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep, :from_self_alias=>eo[:from_self_alias]), &graph_block)
379
+ ds = ds.graph(eager_graph_dataset(opts, eo), conditions, eo.merge(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep), &graph_block)
378
380
  ds
379
381
  end
380
382
 
@@ -433,7 +435,9 @@ module Sequel
433
435
  end
434
436
  end
435
437
 
436
- eager_load_results(opts, Hash[eo].merge!(:id_map=>id_map)) do |assoc_record|
438
+ eo = Hash[eo]
439
+ eo[:id_map] = id_map
440
+ eager_load_results(opts, eo) do |assoc_record|
437
441
  if objects = id_map[assoc_record.get_column_value(pkm)]
438
442
  objects.each do |object|
439
443
  object.associations[name].push(assoc_record)
@@ -466,7 +470,7 @@ module Sequel
466
470
 
467
471
  opts[:eager_grapher] ||= proc do |eo|
468
472
  ds = eo[:self]
469
- ds = ds.graph(eager_graph_dataset(opts, eo), conditions, Hash[eo].merge!(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep, :from_self_alias=>eo[:from_self_alias]), &graph_block)
473
+ ds = ds.graph(eager_graph_dataset(opts, eo), conditions, eo.merge(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep), &graph_block)
470
474
  ds
471
475
  end
472
476
 
@@ -149,7 +149,7 @@ module Sequel
149
149
  # Use a prepared statement to update this model's columns in the database.
150
150
  def _update_without_checking(columns)
151
151
  if use_prepared_statements_for?(:update)
152
- _set_prepared_statement_server(model.send(:prepared_update, columns.keys)).call(Hash[columns].merge!(pk_hash))
152
+ _set_prepared_statement_server(model.send(:prepared_update, columns.keys)).call(columns.merge(pk_hash))
153
153
  else
154
154
  super
155
155
  end
@@ -67,7 +67,7 @@ module Sequel
67
67
  # of free columns.
68
68
  def before_create
69
69
  if v = model.prepared_statements_column_defaults
70
- @values = Hash[v].merge!(values)
70
+ @values = v.merge(values)
71
71
  end
72
72
  super
73
73
  end
@@ -244,7 +244,7 @@ module Sequel
244
244
  def validates_unique(*atts)
245
245
  opts = default_validation_helpers_options(:unique)
246
246
  if atts.last.is_a?(Hash)
247
- opts = Hash[opts].merge!(atts.pop)
247
+ opts = opts.merge(atts.pop)
248
248
  end
249
249
  message = validation_error_message(opts[:message])
250
250
  from_values = opts[:from] == :values
@@ -302,7 +302,7 @@ module Sequel
302
302
  # Merge the given options with the default options for the given type
303
303
  # and call validatable_attributes with the merged options.
304
304
  def validatable_attributes_for_type(type, atts, opts, &block)
305
- validatable_attributes(atts, Hash[default_validation_helpers_options(type)].merge!(opts), &block)
305
+ validatable_attributes(atts, default_validation_helpers_options(type).merge(opts), &block)
306
306
  end
307
307
 
308
308
  # The validation error message to use, as a string. If message
@@ -1472,7 +1472,7 @@ module Sequel
1472
1472
 
1473
1473
  # Return a new function call with the given opts merged into the current opts.
1474
1474
  def with_opts(opts)
1475
- self.class.new!(name, args, Hash[@opts].merge!(opts))
1475
+ self.class.new!(name, args, @opts.merge(opts))
1476
1476
  end
1477
1477
  end
1478
1478
 
@@ -6,7 +6,7 @@ module Sequel
6
6
 
7
7
  # The minor version of Sequel. Bumped for every non-patch level
8
8
  # release, generally around once a month.
9
- MINOR = 18
9
+ MINOR = 19
10
10
 
11
11
  # The tiny version of Sequel. Usually 0, only bumped for bugfix
12
12
  # releases that fix regressions from previous versions.
@@ -3930,6 +3930,14 @@ describe 'PostgreSQL enum types' do
3930
3930
  @db.schema(:test_enumt, :reload=>true).first.last[:enum_values].must_equal @initial_enum_values
3931
3931
  @db.rename_enum(:new_enum, :test_enum)
3932
3932
  end
3933
+
3934
+ it "should rename enum values" do
3935
+ @db.rename_enum_value(:test_enum, :b, :x)
3936
+ new_enum_values = @initial_enum_values
3937
+ new_enum_values[new_enum_values.index('b')] = 'x'
3938
+ @db.schema(:test_enumt, :reload=>true).first.last[:enum_values].must_equal new_enum_values
3939
+ @db.rename_enum_value(:test_enum, :x, :b)
3940
+ end if DB.server_version >= 100000
3933
3941
  end
3934
3942
 
3935
3943
  describe "PostgreSQL stored procedures for datasets" do
@@ -1679,7 +1679,7 @@ describe "Schema Parser" do
  sch = @db.schema(:x)
  sch.must_equal [[:a, {:db_type=>"x", :ruby_default=>nil}], [:b, {:db_type=>"x", :ruby_default=>nil}]]
  sch[0][1][:db_type].must_be_same_as(sch[1][1][:db_type])
- end if RUBY_VERSION >= '2.5' && !defined?(JRUBY_VERSION)
+ end if RUBY_VERSION >= '2.5'
 
  it "should set :auto_increment to true by default if unset and a single integer primary key is used" do
  @db.define_singleton_method(:schema_parse_table){|*| [[:a, {:primary_key=>true, :db_type=>'integer'}]]}
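The spec above checks that equal `:db_type` strings in parsed schema share one object, which relies on frozen-string deduplication; the hunk drops the JRuby exclusion, presumably because JRuby now behaves the same. A quick illustration of the mechanism on CRuby 2.5+ (the strings here are made up for the sketch):

```ruby
# String#-@ returns a frozen, deduplicated string (Ruby 2.5+):
# two equal strings interned this way are the very same object,
# not merely == to each other.
a = -("inte" + "ger")
b = -("inte" + "ger")

a.equal?(b)  # => true (identical object)
a.frozen?    # => true
```

Sharing one frozen string per distinct `db_type` keeps schema hashes for wide tables from holding many duplicate copies of the same type name.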
@@ -20,6 +20,7 @@ describe "constraint_validations extension" do
  before do
  @db = Sequel.mock
  @db.extend(Module.new{attr_writer :schema; def schema(table, *) execute("parse schema for #{table}"); @schema; end})
+ def @db.table_exists?(_) true; end
  @db.extension(:constraint_validations)
  end
 
@@ -36,13 +37,13 @@ describe "constraint_validations extension" do
 
  it "should allow dropping the sequel_constraint_validations table" do
  @db.drop_constraint_validations_table
- @db.sqls.must_equal ["DROP TABLE sequel_constraint_validations"]
+ @db.sqls.must_equal ["DELETE FROM sequel_constraint_validations WHERE (table = 'sequel_constraint_validations')", "DROP TABLE sequel_constraint_validations"]
  end
 
  it "should allow dropping the sequel_constraint_validations table with a non-default table name" do
  @db.constraint_validations_table = :foo
  @db.drop_constraint_validations_table
- @db.sqls.must_equal ["DROP TABLE foo"]
+ @db.sqls.must_equal ["DELETE FROM foo WHERE (table = 'foo')", "DROP TABLE foo"]
  end
 
  it "should allow dropping validations for a given table" do
@@ -50,6 +51,23 @@ describe "constraint_validations extension" do
  @db.sqls.must_equal ["DELETE FROM sequel_constraint_validations WHERE (table = 'foo')"]
  end
 
+ it "should drop validations for a given table when dropping the table" do
+ @db.drop_table(:foo)
+ @db.sqls.must_equal ["DELETE FROM sequel_constraint_validations WHERE (table = 'foo')", "DROP TABLE foo"]
+
+ @db.drop_table(:foo, :if_exists => true)
+ @db.sqls.must_equal ["DELETE FROM sequel_constraint_validations WHERE (table = 'foo')", "DROP TABLE IF EXISTS foo"]
+
+ @db.drop_table?(:foo)
+ @db.sqls.must_equal ["DELETE FROM sequel_constraint_validations WHERE (table = 'foo')", "DROP TABLE foo"]
+ end
+
+ it "should not drop validations for a given table if the constraint validations table does not exist" do
+ def @db.table_exists?(_) false; end
+ @db.drop_table(:foo)
+ @db.sqls.must_equal ["DROP TABLE foo"]
+ end
+
  it "should allow dropping validations for a given table and column" do
  @db.drop_constraint_validations_for(:table=>:foo, :column=>:bar)
  @db.sqls.must_equal ["DELETE FROM sequel_constraint_validations WHERE ((table = 'foo') AND (column = 'bar'))"]
@@ -49,6 +49,11 @@ describe "pg_enum extension" do
  @db.sqls.first.must_equal "ALTER TYPE sch.foo RENAME TO sch.bar"
  end
 
+ it "should support #rename_enum_value method for renameing an enum value" do
+ @db.rename_enum_value(:foo, :b, :x)
+ @db.sqls.first.must_equal "ALTER TYPE foo RENAME VALUE 'b' TO 'x'"
+ end
+
  it "should support #drop_enum method for dropping an enum" do
  @db.drop_enum(:foo)
  @db.sqls.first.must_equal "DROP TYPE foo"
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: sequel
  version: !ruby/object:Gem::Version
- version: 5.18.0
+ version: 5.19.0
  platform: ruby
  authors:
  - Jeremy Evans
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2019-03-01 00:00:00.000000000 Z
+ date: 2019-04-02 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: minitest
@@ -202,6 +202,7 @@ extra_rdoc_files:
  - doc/release_notes/5.16.0.txt
  - doc/release_notes/5.17.0.txt
  - doc/release_notes/5.18.0.txt
+ - doc/release_notes/5.19.0.txt
  files:
  - CHANGELOG
  - MIT-LICENSE
@@ -290,6 +291,7 @@ files:
  - doc/release_notes/5.16.0.txt
  - doc/release_notes/5.17.0.txt
  - doc/release_notes/5.18.0.txt
+ - doc/release_notes/5.19.0.txt
  - doc/release_notes/5.2.0.txt
  - doc/release_notes/5.3.0.txt
  - doc/release_notes/5.4.0.txt
@@ -845,7 +847,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.0.1
+ rubygems_version: 3.0.3
  signing_key:
  specification_version: 4
  summary: The Database Toolkit for Ruby