sequel 4.17.0 → 4.18.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 1d6fa8ec8783c8834b984323d9e1c73bef006df5
- data.tar.gz: 40e551bc46ec90be6a411ca2eec0e4114394b03d
+ metadata.gz: 0fc29f94c84321473e3ed262bd8ac957a265e295
+ data.tar.gz: f25fc9635d509858d7275081fe1e5c67cf6db8ce
  SHA512:
- metadata.gz: 06d6af6daccc3ae11f55e17d6b39472509c3ddd90af821e0e6e353dfab49b6b30ea4c213b4596f46f048a6ff3807bcfc9257a13d08d8ccdc9a52d18478e59e6a
- data.tar.gz: 6c31bed38421bb9acf75235b01229fe777baeb311fef31554c4b6070454147d37b4a249650e4c6c788602d5550c0db373546af9066a398aa9d01b2882568b810
+ metadata.gz: 306ffd681a632fe5aee226016f51f62beb296e7b080d69b3c791690825e370c3e85ebea3002007ee9bc0f371f0d05026a215229364af10c9dcad0cc906f1e2bd
+ data.tar.gz: f0c9cf0fe10ce041abd556111a0f845840f2415ad6832a8ddcc2b37b0a4fcff0b538f0da3fbc95f21d7a5995df9bfb61a95bfa48a202686ba4abb4533105658d
data/CHANGELOG CHANGED
@@ -1,3 +1,15 @@
+ === 4.18.0 (2015-01-02)
+
+ * Make Dataset#empty? work when the dataset is ordered by a non-column expression (pete) (#923)
+
+ * Fix passing a hash value to :eager association option (jeremyevans)
+
+ * Treat all PG::ConnectionBad exceptions as disconnect errors in the postgres adapter (jeremyevans)
+
+ * Add :auto_increment key to schema information for primary key columns (jeremyevans) (#919)
+
+ * Fix handling of schema qualified tables in many_through_many associations (jeremyevans)
+
  === 4.17.0 (2014-12-01)

  * Fix handling of Sequel::SQL::Blob instances in bound variables in the postgres adapter (jeremyevans) (#917)
@@ -1,4 +1,4 @@
- = Dataset Filtering
+ = Dataset Filtering

  Sequel is very flexibile when it comes to filtering records. You can specify your conditions as a custom string, as a string with parameters, as a hash of values to compare against, or as ruby code that Sequel translates into SQL expressions.

@@ -76,29 +76,29 @@ If you need to select multiple items from a dataset, you can supply an array:
  You can pass a block to where, which is evaluated in a special context:

  items.where{price * 2 < 50}.sql
- #=> "SELECT * FROM items WHERE ((price * 2) < 50)
+ #=> "SELECT * FROM items WHERE ((price * 2) < 50)

  This works for the standard inequality and arithmetic operators:

  items.where{price + 100 < 200}.sql
- #=> "SELECT * FROM items WHERE ((price + 100) < 200)
+ #=> "SELECT * FROM items WHERE ((price + 100) < 200)

  items.where{price - 100 > 200}.sql
- #=> "SELECT * FROM items WHERE ((price - 100) > 200)
+ #=> "SELECT * FROM items WHERE ((price - 100) > 200)

  items.where{price * 100 <= 200}.sql
- #=> "SELECT * FROM items WHERE ((price * 100) <= 200)
+ #=> "SELECT * FROM items WHERE ((price * 100) <= 200)

  items.where{price / 100 >= 200}.sql
- #=> "SELECT * FROM items WHERE ((price / 100) >= 200)
+ #=> "SELECT * FROM items WHERE ((price / 100) >= 200)

  You use the overloaded bitwise and (&) and or (|) operators to combine expressions:

  items.where{(price + 100 < 200) & (price * 100 <= 200)}.sql
- #=> "SELECT * FROM items WHERE (((price + 100) < 200) AND ((price * 100) <= 200))
+ #=> "SELECT * FROM items WHERE (((price + 100) < 200) AND ((price * 100) <= 200))

  items.where{(price - 100 > 200) | (price / 100 >= 200)}.sql
- #=> "SELECT * FROM items WHERE (((price - 100) > 200) OR ((price / 100) >= 200))
+ #=> "SELECT * FROM items WHERE (((price - 100) > 200) OR ((price / 100) >= 200))

  To filter by equality, you use the standard hash, which can be combined with other expressions using Sequel.& and Sequel.|:

@@ -124,7 +124,7 @@ You can use the exclude method to exclude conditions:
  #=> "SELECT * FROM items WHERE NOT active"

  items.exclude{price / 100 >= 200}.sql
- #=> "SELECT * FROM items WHERE ((price / 100) < 200)
+ #=> "SELECT * FROM items WHERE ((price / 100) < 200)

  === Comparing against column references

@@ -163,15 +163,15 @@ Like can also take more than one argument:

  == String concatenation

- You can concatenate SQL strings using Sequel.join:
+ You can concatenate SQL strings using Sequel.join:

- items.where(Sequel.join([:name, :comment]).like('%acme%')).sql
- #=> "SELECT * FROM items WHERE ((name || comment) LIKE 'Acme%' ESCAPE '\')"
+ items.where(Sequel.join([:name, :comment]).like('Jo%nice%')).sql
+ #=> "SELECT * FROM items WHERE ((name || comment) LIKE 'Jo%nice%' ESCAPE '\')"

  Sequel.join also takes a join argument:

- items.filter(Sequel.join([:name, :comment], ' ').like('%acme%')).sql
- #=> "SELECT * FROM items WHERE ((name || ' ' || comment) LIKE 'Acme%' ESCAPE '\')"
+ items.filter(Sequel.join([:name, :comment], ':').like('John:%nice%')).sql
+ #=> "SELECT * FROM items WHERE ((name || ':' || comment) LIKE 'John:%nice%' ESCAPE '\')"

  == Filtering using sub-queries

@@ -0,0 +1,36 @@
+ = New Features
+
+ * An :auto_increment key has been added to the schema information for
+ primary key columns on JDBC, PostgreSQL, MySQL, MSSQL, DB2, and
+ SQLite. This fixes issues in the schema_dumper extension where
+ non-auto-incrementing integer primary keys are no longer dumped as
+ auto-incrementing.
+
+ For adapters that don't have specific support for detecting
+ auto incrementing primary keys, Sequel now assumes a primary key
+ is auto incrementing only if it is not a composite primary key
+ and the type contains int (e.g. int, integer, bigint).
+
+ = Other Improvements
+
+ * Dataset#empty? now ignores any order on the dataset. Previously,
+ calling empty? on a dataset ordered by an alias in the SELECT list
+ could raise an exception.
+
+ * Schema qualified tables are now handled correctly in
+ many_through_many associations.
+
+ * Using a hash as the value for the :eager association option now
+ works correctly.
+
+ * All PG::ConnectionBad exceptions are now treated as disconnect
+ errors in the postgres adapter. This should be more robust than
+ the previous method of trying to recognize disconnect errors by
+ trying to parse the exception message.
+
+ * Sequel now skips a hash allocation when issuing queries through
+ datasets if sharding is not used.
+
+ * Sequel no longer uses the JDBC schema parsing in the jdbc/sqlserver
+ adapter. Instead, it uses the MSSQL schema parsing, which should
+ be more accurate than the generic JDBC schema parsing.
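For illustration, a minimal sketch of how the new :auto_increment entry can show up via Database#schema; the items table and its columns here are hypothetical, and the exact keys in each schema hash vary by adapter:

  DB.create_table!(:items) do
    primary_key :id   # single integer primary key, detected as auto incrementing
    String :name
  end

  DB.schema(:items).each do |column, info|
    p [column, info[:primary_key], info[:auto_increment]]
  end
  # [:id, true, true]
  # [:name, false, nil]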
@@ -604,7 +604,18 @@ module Sequel
  schemas = []
  metadata(:getColumns, nil, schema, table, nil) do |h|
  next if schema_parse_table_skip?(h, schema)
- s = {:type=>schema_column_type(h[:type_name]), :db_type=>h[:type_name], :default=>(h[:column_def] == '' ? nil : h[:column_def]), :allow_null=>(h[:nullable] != 0), :primary_key=>pks.include?(h[:column_name]), :column_size=>h[:column_size], :scale=>h[:decimal_digits]}
+ s = {
+ :type=>schema_column_type(h[:type_name]),
+ :db_type=>h[:type_name],
+ :default=>(h[:column_def] == '' ? nil : h[:column_def]),
+ :allow_null=>(h[:nullable] != 0),
+ :primary_key=>pks.include?(h[:column_name]),
+ :column_size=>h[:column_size],
+ :scale=>h[:decimal_digits]
+ }
+ if s[:primary_key]
+ s[:auto_increment] = h[:is_autoincrement] == "YES"
+ end
  s[:max_length] = s[:column_size] if s[:type] == :string
  if s[:db_type] =~ DECIMAL_TYPE_RE && s[:scale] == 0
  s[:type] = :integer
@@ -2,11 +2,6 @@ Sequel.require 'adapters/shared/mssql'

  module Sequel
  module JDBC
- class Database
- # Alias the generic JDBC version so it can be called directly later
- alias jdbc_schema_parse_table schema_parse_table
- end
-
  # Database and Dataset instance methods for MSSQL specific
  # support via JDBC.
  module MSSQL
@@ -31,12 +26,6 @@ module Sequel
  end
  end

- # Call the generic JDBC version instead of MSSQL version,
- # since the JDBC version handles primary keys.
- def schema_parse_table(table, opts=OPTS)
- jdbc_schema_parse_table(table, opts)
- end
-
  # Primary key indexes appear to start with pk__ on MSSQL
  def primary_key_index_re
  PRIMARY_KEY_INDEX_RE
@@ -304,6 +304,7 @@ module Sequel
  :allow_null => column.nullable?
  }
  h[:type] = oracle_column_type(h)
+ h[:auto_increment] = h[:type] == :integer if h[:primary_key]
  h[:max_length] = h[:char_size] if h[:type] == :string
  table_schema << [m.call(column.name), h]
  end
@@ -108,13 +108,24 @@ module Sequel
  # PGconn subclass for connection specific methods used with the
  # pg, postgres, or postgres-pr driver.
  class Adapter < ::PGconn
+ # The underlying exception classes to reraise as disconnect errors
+ # instead of regular database errors.
+ DISCONNECT_ERROR_CLASSES = [IOError, Errno::EPIPE, Errno::ECONNRESET]
+ if defined?(::PG::ConnectionBad)
+ DISCONNECT_ERROR_CLASSES << ::PG::ConnectionBad
+ end
+
  disconnect_errors = [
  'could not receive data from server',
  'no connection to the server',
  'connection not open',
  'terminating connection due to administrator command',
- 'PQconsumeInput() SSL SYSCALL error'
+ 'PQconsumeInput() '
  ]
+
+ # Since exception class based disconnect checking may not work,
+ # also trying parsing the exception message to look for disconnect
+ # errors.
  DISCONNECT_ERROR_RE = /\A#{Regexp.union(disconnect_errors)}/

  self.translate_results = false if respond_to?(:translate_results=)
@@ -124,11 +135,15 @@ module Sequel
  # are SQL strings.
  attr_reader(:prepared_statements) if SEQUEL_POSTGRES_USES_PG

- # Raise a Sequel::DatabaseDisconnectError if a PGError is raised and
- # the connection status cannot be determined or it is not OK.
+ # Raise a Sequel::DatabaseDisconnectError if a one of the disconnect
+ # error classes is raised, or a PGError is raised and the connection
+ # status cannot be determined or it is not OK.
  def check_disconnect_errors
  begin
  yield
+ rescue *DISCONNECT_ERROR_CLASSES => e
+ disconnect = true
+ raise(Sequel.convert_exception_class(e, Sequel::DatabaseDisconnectError))
  rescue PGError => e
  disconnect = false
  begin
@@ -140,9 +155,6 @@ module Sequel
  disconnect ||= !status_ok
  disconnect ||= e.message =~ DISCONNECT_ERROR_RE
  disconnect ? raise(Sequel.convert_exception_class(e, Sequel::DatabaseDisconnectError)) : raise
- rescue IOError, Errno::EPIPE, Errno::ECONNRESET => e
- disconnect = true
- raise(Sequel.convert_exception_class(e, Sequel::DatabaseDisconnectError))
  ensure
  block if status_ok && !disconnect
  end
@@ -40,7 +40,10 @@ module Sequel
  column[:db_type] << "(#{column[:longlength]},#{column[:scale]})"
  end
  column[:allow_null] = column.delete(:nulls) == 'Y'
- column[:primary_key] = column.delete(:identity) == 'Y' || !column[:keyseq].nil?
+ identity = column.delete(:identity) == 'Y'
+ if column[:primary_key] = identity || !column[:keyseq].nil?
+ column[:auto_increment] = identity
+ end
  column[:type] = schema_column_type(column[:db_type])
  column[:max_length] = column[:longlength] if column[:type] == :string
  [ m.call(column.delete(:name)), column]
@@ -428,11 +428,15 @@ module Sequel
  m = output_identifier_meth(opts[:dataset])
  m2 = input_identifier_meth(opts[:dataset])
  tn = m2.call(table_name.to_s)
- table_id = get{object_id(tn)}
+ table_id = get(Sequel.function(:object_id, tn))
  info_sch_sch = opts[:information_schema_schema]
  inf_sch_qual = lambda{|s| info_sch_sch ? Sequel.qualify(info_sch_sch, s) : Sequel.expr(s)}
  sys_qual = lambda{|s| info_sch_sch ? Sequel.qualify(info_sch_sch, Sequel.qualify(Sequel.lit(''), s)) : Sequel.expr(s)}

+ identity_cols = metadata_dataset.from(:sys__columns).
+ where(:object_id=>table_id, :is_identity=>true).
+ select_map(:name)
+
  pk_index_id = metadata_dataset.from(sys_qual.call(:sysindexes)).
  where(:id=>table_id, :indid=>1..254){{(status & 2048)=>2048}}.
  get(:indid)
@@ -440,16 +444,21 @@
  join(sys_qual.call(:syscolumns).as(:sc), :id=>:id, :colid=>:colid).
  where(:sik__id=>table_id, :sik__indid=>pk_index_id).
  select_order_map(:sc__name)
+
  ds = metadata_dataset.from(inf_sch_qual.call(:information_schema__tables).as(:t)).
  join(inf_sch_qual.call(:information_schema__columns).as(:c), :table_catalog=>:table_catalog,
  :table_schema => :table_schema, :table_name => :table_name).
  select(:column_name___column, :data_type___db_type, :character_maximum_length___max_chars, :column_default___default, :is_nullable___allow_null, :numeric_precision___column_size, :numeric_scale___scale).
  filter(:c__table_name=>tn)
+
  if schema = opts[:schema]
  ds.filter!(:c__table_schema=>schema)
  end
+
  ds.map do |row|
- row[:primary_key] = pk_cols.include?(row[:column])
+ if row[:primary_key] = pk_cols.include?(row[:column])
+ row[:auto_increment] = identity_cols.include?(row[:column])
+ end
  row[:allow_null] = row[:allow_null] == 'YES' ? true : false
  row[:default] = nil if blank_object?(row[:default])
  row[:type] = if row[:db_type] =~ DECIMAL_TYPE_RE && row[:scale] == 0
@@ -460,10 +460,12 @@ module Sequel
  table = SQL::Identifier.new(im.call(table_name))
  table = SQL::QualifiedIdentifier.new(im.call(opts[:schema]), table) if opts[:schema]
  metadata_dataset.with_sql("DESCRIBE ?", table).map do |row|
- row[:auto_increment] = true if row.delete(:Extra).to_s =~ /auto_increment/io
+ extra = row.delete(:Extra)
+ if row[:primary_key] = row.delete(:Key) == 'PRI'
+ row[:auto_increment] = !!(extra.to_s =~ /auto_increment/io)
+ end
  row[:allow_null] = row.delete(:Null) == 'YES'
  row[:default] = row.delete(:Default)
- row[:primary_key] = row.delete(:Key) == 'PRI'
  row[:db_type] = row.delete(:Type)
  row[:type] = schema_column_type(row[:db_type])
  [m.call(row.delete(:Field)), row]
@@ -1035,12 +1035,6 @@ module Sequel
  "ALTER TABLE #{quote_schema_table(name)} RENAME TO #{quote_identifier(schema_and_table(new_name).last)}"
  end

- # PostgreSQL's autoincrementing primary keys are of type integer or bigint
- # using a nextval function call as a default.
- def schema_autoincrementing_primary_key?(schema)
- super && schema[:default] =~ /\Anextval/io
- end
-
  # Recognize PostgreSQL interval type.
  def schema_column_type(db_type)
  case db_type
@@ -1086,6 +1080,9 @@ module Sequel
  row.delete(:db_base_type)
  end
  row[:type] = schema_column_type(row[:db_type])
+ if row[:primary_key]
+ row[:auto_increment] = !!(row[:default] =~ /\Anextval/io)
+ end
  [m.call(row.delete(:name)), row]
  end
  end
@@ -282,6 +282,11 @@ module Sequel
  end
  end

+ # SQLite does not need to specify AUTOINCREMENT, integer primary keys are automatically auto incrementing.
+ def auto_increment_sql
+ ''
+ end
+
  def begin_new_transaction(conn, opts)
  mode = opts[:mode] || @transaction_mode
  sql = TRANSACTION_MODE[mode] or raise Error, "transaction :mode must be one of: :deferred, :immediate, :exclusive, nil"
@@ -457,19 +462,16 @@ module Sequel
  row.delete(:cid)
  row[:allow_null] = row.delete(:notnull).to_i == 0
  row[:default] = row.delete(:dflt_value)
- row[:primary_key] = row.delete(:pk).to_i > 0
  row[:default] = nil if blank_object?(row[:default]) || row[:default] == 'NULL'
  row[:db_type] = row.delete(:type)
+ if row[:primary_key] = row.delete(:pk).to_i > 0
+ row[:auto_increment] = row[:db_type].downcase == 'integer'
+ end
  row[:type] = schema_column_type(row[:db_type])
  row
  end
  end

- # SQLite treats integer primary keys as autoincrementing (alias of rowid).
- def schema_autoincrementing_primary_key?(schema)
- super && schema[:db_type].downcase == 'integer'
- end
-
  # SQLite supports schema parsing using the table_info PRAGMA, so
  # parse the output of that into the format Sequel expects.
  def schema_parse_table(table_name, opts)
@@ -1,5 +1,13 @@
  module Sequel
  module EmulateOffsetWithRowNumber
+ # If the offset must be emulated with ROW_NUMBER, don't remove any ordering,
+ # because it can cause invalid queries to be issued if an offset is required
+ # when ordering.
+ def empty?
+ return super unless emulate_offset_with_row_number?
+ get(Sequel::SQL::AliasedExpression.new(1, :one)).nil?
+ end
+
  # Emulate OFFSET support with the ROW_NUMBER window function
  #
  # The implementation is ugly, cloning the current dataset and modifying
@@ -157,8 +157,20 @@ module Sequel

  cols = schema_parse_table(table_name, opts)
  raise(Error, 'schema parsing returned no columns, table probably doesn\'t exist') if cols.nil? || cols.empty?
+
+ primary_keys = 0
+ auto_increment_set = false
+ cols.all? do |_,c|
+ auto_increment_set = true if c.has_key?(:auto_increment)
+ primary_keys += 1 if c[:primary_key]
+ end
+
  cols.each do |_,c|
  c[:ruby_default] = column_schema_to_ruby_default(c[:default], c[:type])
+ if c[:primary_key] && !auto_increment_set
+ # If adapter didn't set it, assume that integer primary keys are auto incrementing
+ c[:auto_increment] = primary_keys == 1 && !!(c[:db_type] =~ /int/io)
+ end
  if !c[:max_length] && c[:type] == :string && (max_length = column_schema_max_length(c[:db_type]))
  c[:max_length] = max_length
  end
@@ -323,7 +323,7 @@ module Sequel
  #
  # add_constraint(:valid_name, Sequel.like(:name, 'A%'))
  # # ADD CONSTRAINT valid_name CHECK (name LIKE 'A%' ESCAPE '\')
- # add_constraint({:name=>:valid_name, :deferrable=>true}, :num=>1..5)
+ # add_constraint({:name=>:valid_name, :deferrable=>true}, Sequel.like(:name, 'A%'))
  # # ADD CONSTRAINT valid_name CHECK (name LIKE 'A%' ESCAPE '\') DEFERRABLE INITIALLY DEFERRED
  def add_constraint(name, *args, &block)
  opts = name.is_a?(Hash) ? name : {:name=>name}
@@ -837,7 +837,7 @@ module Sequel

  # Return true if the given column schema represents an autoincrementing primary key.
  def schema_autoincrementing_primary_key?(schema)
- !!(schema[:primary_key] && schema[:db_type] =~ /int/io)
+ !!(schema[:primary_key] && schema[:auto_increment])
  end

  # The dataset to use for proxying certain schema methods.
@@ -146,7 +146,8 @@ module Sequel
  # DB[:table].empty? # SELECT 1 AS one FROM table LIMIT 1
  # # => false
  def empty?
- get(Sequel::SQL::AliasedExpression.new(1, :one)).nil?
+ ds = @opts[:order] ? unordered : self
+ ds.get(Sequel::SQL::AliasedExpression.new(1, :one)).nil?
  end

  # If a integer argument is given, it is interpreted as a limit, and then returns all
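As a rough usage sketch of the empty? change above (using Sequel's mock adapter; the items table and position column are made up), any ORDER BY is now dropped before the probe query runs:

  db = Sequel.mock(:fetch=>{1=>1})
  db[:items].order(:position).empty?  # => false
  db.sqls.last  # => "SELECT 1 AS one FROM items LIMIT 1" (no ORDER BY)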
@@ -763,7 +764,7 @@ module Sequel
  # DB[:table].update(:x=>nil) # UPDATE table SET x = NULL
  # # => 10
  #
- # DB[:table].update(:x=>:x+1, :y=>0) # UPDATE table SET x = (x + 1), y = 0
+ # DB[:table].update(:x=>Sequel.expr(:x)+1, :y=>0) # UPDATE table SET x = (x + 1), y = 0
  # # => 10
  def update(values=OPTS, &block)
  sql = update_sql(values)
@@ -897,13 +898,14 @@ module Sequel

  # Set the server to use to :default unless it is already set in the passed opts
  def default_server_opts(opts)
- {:server=>@opts[:server] || :default}.merge(opts)
+ @db.sharded? ? {:server=>@opts[:server] || :default}.merge(opts) : opts
  end

  # Execute the given select SQL on the database using execute. Use the
  # :read_only server unless a specific server is set.
  def execute(sql, opts=OPTS, &block)
- @db.execute(sql, {:server=>@opts[:server] || :read_only}.merge(opts), &block)
+ db = @db
+ db.execute(sql, db.sharded? ? {:server=>@opts[:server] || :read_only}.merge(opts) : opts, &block)
  end

  # Execute the given SQL on the database using execute_ddl.
@@ -75,7 +75,7 @@ module Sequel
  ds = ds.order(*self[:order]) if self[:order]
  ds = ds.limit(*self[:limit]) if self[:limit]
  ds = ds.limit(1) if limit_to_single_row?
- ds = ds.eager(*self[:eager]) if self[:eager]
+ ds = ds.eager(self[:eager]) if self[:eager]
  ds = ds.distinct if self[:distinct]
  ds
  end
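For context, the :eager association option accepts the same arguments as Dataset#eager, including a hash for cascading; a hedged sketch of the case the fix above addresses (Band, albums, tracks, and album associations are assumed, not defined here):

  class Band < Sequel::Model
    one_to_many :albums, :eager=>{:tracks=>:album}
  end
  # Lazily loading band.albums now also eagerly loads each album's tracks
  # and each track's album, which previously failed when the value was a hash.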
@@ -1144,12 +1144,10 @@ module Sequel
  alias to_hash values

  # Creates new instance and passes the given values to set.
- # If a block is given, yield the instance to the block unless
- # from_db is true.
+ # If a block is given, yield the instance to the block.
  #
  # Arguments:
  # values :: should be a hash to pass to set.
- # from_db :: only for backwards compatibility, forget it exists.
  #
  # Artist.new(:name=>'Bob')
  #
@@ -114,9 +114,14 @@ module Sequel

  def _associated_dataset
  ds = associated_class
- reverse_edges.each{|t| ds = ds.join(t[:table], Array(t[:left]).zip(Array(t[:right])), :table_alias=>t[:alias], :qualify=>:deep)}
- ft = final_reverse_edge
- ds.join(ft[:table], Array(ft[:left]).zip(Array(ft[:right])), :table_alias=>ft[:alias], :qualify=>:deep)
+ (reverse_edges + [final_reverse_edge]).each do |t|
+ h = {:qualify=>:deep}
+ if t[:alias] != t[:table]
+ h[:table_alias] = t[:alias]
+ end
+ ds = ds.join(t[:table], Array(t[:left]).zip(Array(t[:right])), h)
+ end
+ ds
  end

  # Make sure to use unique table aliases when lazy loading or eager loading
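A hedged sketch of the schema qualified case this fixes, modeled on the spec added later in this diff (the Artist model and the myschema tables are assumed):

  Artist.many_through_many :tags,
    :through=>[[:myschema__albums_artists, :artist_id, :album_id],
               [:myschema__albums, :id, :id],
               [:myschema__albums_tags, :album_id, :tag_id]]
  # Join tables are now only aliased when the alias differs from the table,
  # so the generated SQL references myschema.albums_artists, myschema.albums,
  # and myschema.albums_tags correctly.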
@@ -3,7 +3,7 @@ module Sequel
  MAJOR = 4
  # The minor version of Sequel. Bumped for every non-patch level
  # release, generally around once a month.
- MINOR = 17
+ MINOR = 18
  # The tiny version of Sequel. Usually 0, only bumped for bugfix
  # releases that fix regressions from previous versions.
  TINY = 0
@@ -71,11 +71,11 @@ describe "MySQL", '#create_table' do

  specify "should include an :auto_increment schema attribute if auto incrementing" do
  @db.create_table(:dolls) do
+ primary_key :n4
  Integer :n2
  String :n3
- Integer :n4, :auto_increment=>true, :unique=>true
  end
- @db.schema(:dolls).map{|k, v| v[:auto_increment]}.should == [nil, nil, true]
+ @db.schema(:dolls).map{|k, v| v[:auto_increment]}.should == [true, nil, nil]
  end

  specify "should support collate with various other column options" do
@@ -105,9 +105,9 @@ if [:mysql, :mysql2].include?(DB.adapter_scheme)
  end

  specify "should consider tinyint(1) datatypes as boolean if set, but not larger tinyints" do
- @db.schema(:booltest, :reload=>true).should == [[:b, {:type=>:boolean, :allow_null=>true, :primary_key=>false, :default=>nil, :ruby_default=>nil, :db_type=>"tinyint(1)"}, ], [:i, {:type=>:integer, :allow_null=>true, :primary_key=>false, :default=>nil, :ruby_default=>nil, :db_type=>"tinyint(4)"}, ]]
+ @db.schema(:booltest, :reload=>true).map{|_, s| s[:type]}.should == [:boolean, :integer]
  @db.convert_tinyint_to_bool = false
- @db.schema(:booltest, :reload=>true).should == [[:b, {:type=>:integer, :allow_null=>true, :primary_key=>false, :default=>nil, :ruby_default=>nil, :db_type=>"tinyint(1)"}, ], [:i, {:type=>:integer, :allow_null=>true, :primary_key=>false, :default=>nil, :ruby_default=>nil, :db_type=>"tinyint(4)"}, ]]
+ @db.schema(:booltest, :reload=>true).map{|_, s| s[:type]}.should == [:integer, :integer]
  end

  specify "should return tinyint(1)s as bools and tinyint(4)s as integers when set" do
@@ -252,9 +252,7 @@ describe "A PostgreSQL database" do
  end

  specify "should correctly parse the schema" do
- @db.schema(:public__testfk, :reload=>true).should == [
- [:id, {:type=>:integer, :ruby_default=>nil, :db_type=>"integer", :default=>"nextval('testfk_id_seq'::regclass)", :oid=>23, :primary_key=>true, :allow_null=>false}],
- [:i, {:type=>:integer, :ruby_default=>nil, :db_type=>"integer", :default=>nil, :oid=>23, :primary_key=>false, :allow_null=>true}]]
+ @db.schema(:public__testfk, :reload=>true).map{|c,s| [c, s[:oid]]}.should == [[:id, 23], [:i, 23]]
  end

  specify "should parse foreign keys for tables in a schema" do
@@ -300,7 +298,7 @@ describe "A PostgreSQL database with domain types" do
  specify "should correctly parse the schema" do
  sch = @db.schema(:testfk, :reload=>true)
  sch.first.last.delete(:domain_oid).should be_a_kind_of(Integer)
- sch.should == [[:id, {:type=>:decimal, :ruby_default=>nil, :db_type=>"numeric(10,2)", :default=>nil, :oid=>1700, :primary_key=>true, :allow_null=>false, :db_domain_type=>'positive_number'}]]
+ sch.first.last[:db_domain_type].should == 'positive_number'
  end
  end

@@ -2104,9 +2104,9 @@ end
  describe "Database#schema_autoincrementing_primary_key?" do
  specify "should indicate whether the parsed schema row indicates a primary key" do
  m = Sequel::Database.new.method(:schema_autoincrementing_primary_key?)
- m.call(:primary_key=>true, :db_type=>'integer').should == true
- m.call(:primary_key=>true, :db_type=>'varchar(255)').should == false
- m.call(:primary_key=>false, :db_type=>'integer').should == false
+ m.call(:primary_key=>true, :auto_increment=>true).should == true
+ m.call(:primary_key=>true, :auto_increment=>false).should == false
+ m.call(:primary_key=>false).should == false
  end
  end

@@ -1952,6 +1952,15 @@ describe "Dataset#empty?" do
  db.from(:test).filter(false).should be_empty
  db.sqls.should == ["SELECT 1 AS one FROM test WHERE 'f' LIMIT 1"]
  end
+
+ specify "should ignore order" do
+ db = Sequel.mock(:fetch=>proc{|sql| {1=>1}})
+ db.from(:test).should_not be_empty
+ without_order = db.sqls
+ db.from(:test).order(:the_order_column).should_not be_empty
+ with_order = db.sqls
+ without_order.should == with_order
+ end
  end

  describe "Dataset#first_source_alias" do
@@ -1544,6 +1544,27 @@ describe "Schema Parser" do
  @sqls.should == ['x', 'x']
  end

+ specify "should set :auto_increment to true by default if unset and a single integer primary key is used" do
+ meta_def(@db, :schema_parse_table){|*| [[:a, {:primary_key=>true, :db_type=>'integer'}]]}
+ @db.schema(:x).first.last[:auto_increment].should == true
+ end
+
+ specify "should not set :auto_increment if already set" do
+ meta_def(@db, :schema_parse_table){|*| [[:a, {:primary_key=>true, :db_type=>'integer', :auto_increment=>false}]]}
+ @db.schema(:x).first.last[:auto_increment].should == false
+ end
+
+ specify "should set :auto_increment to false by default if unset and a single nonintegery primary key is used" do
+ meta_def(@db, :schema_parse_table){|*| [[:a, {:primary_key=>true, :db_type=>'varchar'}]]}
+ @db.schema(:x).first.last[:auto_increment].should == false
+ end
+
+ specify "should set :auto_increment to false by default if unset and a composite primary key" do
+ meta_def(@db, :schema_parse_table){|*| [[:a, {:primary_key=>true, :db_type=>'integer'}], [:b, {:primary_key=>true, :db_type=>'integer'}]]}
+ @db.schema(:x).first.last[:auto_increment].should == false
+ @db.schema(:x).last.last[:auto_increment].should == false
+ end
+
  specify "should convert various types of table name arguments" do
  meta_def(@db, :schema_parse_table) do |t, opts|
  [[t, opts]]
@@ -70,6 +70,19 @@ describe Sequel::Model, "many_through_many" do
  a.first.tags.should == [@c2.load(:id=>4)]
  end

+ it "should handle schema qualified tables" do
+ @c1.many_through_many :tags, :through=>[[:myschema__albums_artists, :artist_id, :album_id], [:myschema__albums, :id, :id], [:myschema__albums_tags, :album_id, :tag_id]]
+ @c1.load(:id=>1).tags_dataset.sql.should == "SELECT tags.* FROM tags INNER JOIN myschema.albums_tags ON (myschema.albums_tags.tag_id = tags.id) INNER JOIN myschema.albums ON (myschema.albums.id = myschema.albums_tags.album_id) INNER JOIN myschema.albums_artists ON (myschema.albums_artists.album_id = myschema.albums.id) WHERE (myschema.albums_artists.artist_id = 1)"
+
+ @c1.dataset._fetch = {:id=>1}
+ @c2.dataset._fetch = {:id=>4, :x_foreign_key_x=>1}
+ a = @c1.eager(:tags).all
+ a.should == [@c1.load(:id => 1)]
+ DB.sqls.should == ['SELECT * FROM artists', "SELECT tags.*, myschema.albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN myschema.albums_tags ON (myschema.albums_tags.tag_id = tags.id) INNER JOIN myschema.albums ON (myschema.albums.id = myschema.albums_tags.album_id) INNER JOIN myschema.albums_artists ON (myschema.albums_artists.album_id = myschema.albums.id) WHERE (myschema.albums_artists.artist_id IN (1))"]
+
+ @c1.eager_graph(:tags).sql.should == 'SELECT artists.id, tags.id AS tags_id, tags.h1, tags.h2 FROM artists LEFT OUTER JOIN myschema.albums_artists AS albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN myschema.albums AS albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN myschema.albums_tags AS albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id)'
+ end
+
  it "should default to associating to other models in the same scope" do
  begin
  class ::AssociationModuleTest
@@ -68,7 +68,7 @@ describe "Sequel::Database dump methods" do
  @d.meta_def(:schema) do |t, *o|
  case t
  when :t1, 't__t1', Sequel.identifier(:t__t1)
- [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>false}],
+ [[:c1, {:db_type=>'integer', :primary_key=>true, :auto_increment=>true, :allow_null=>false}],
  [:c2, {:db_type=>'varchar(20)', :allow_null=>true}]]
  when :t2
  [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>false}],
@@ -92,7 +92,7 @@ describe "Sequel::Database dump methods" do
  end

  it "should dump non-Integer primary key columns with explicit :type" do
- @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'bigint', :primary_key=>true, :allow_null=>true}]]}
+ @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'bigint', :primary_key=>true, :allow_null=>true, :auto_increment=>true}]]}
  @d.dump_table_schema(:t6).should == "create_table(:t6) do\n primary_key :c1, :type=>Bignum\nend"
  end

@@ -104,7 +104,7 @@ describe "Sequel::Database dump methods" do
  end

  it "should handle primary keys that are also foreign keys" do
- @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>true}]]}
+ @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>true, :auto_increment=>true}]]}
  @d.meta_def(:supports_foreign_key_parsing?){true}
  @d.meta_def(:foreign_key_list){|*s| [{:columns=>[:c1], :table=>:t2, :key=>[:c2]}]}
  s = @d.dump_table_schema(:t6)
@@ -126,7 +126,7 @@ describe "Sequel::Database dump methods" do
  end

  it "should handle foreign key options in the primary key" do
- @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>true}]]}
+ @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>true, :auto_increment=>true}]]}
  @d.meta_def(:supports_foreign_key_parsing?){true}
  @d.meta_def(:foreign_key_list){|*s| [{:columns=>[:c1], :table=>:t2, :key=>[:c2], :on_delete=>:restrict, :on_update=>:set_null, :deferrable=>true}]}
  s = @d.dump_table_schema(:t6)
@@ -151,7 +151,7 @@ describe "Sequel::Database dump methods" do
  end

  it "should omit foreign key options that are the same as defaults in the primary key" do
- @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>true}]]}
+ @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true, :allow_null=>true, :auto_increment=>true}]]}
  @d.meta_def(:supports_foreign_key_parsing?){true}
  @d.meta_def(:foreign_key_list){|*s| [{:columns=>[:c1], :table=>:t2, :key=>[:c2], :on_delete=>:no_action, :on_update=>:no_action, :deferrable=>false}]}
  s = @d.dump_table_schema(:t6)
@@ -232,7 +232,7 @@ END_MIG
  it "should sort table names topologically when dumping a migration with foreign keys" do
  @d.meta_def(:tables){|o| [:t1, :t2]}
  @d.meta_def(:schema) do |t|
- t == :t1 ? [[:c2, {:db_type=>'integer'}]] : [[:c1, {:db_type=>'integer', :primary_key=>true}]]
+ t == :t1 ? [[:c2, {:db_type=>'integer'}]] : [[:c1, {:db_type=>'integer', :primary_key=>true, :auto_increment=>true}]]
  end
  @d.meta_def(:supports_foreign_key_parsing?){true}
  @d.meta_def(:foreign_key_list) do |t|
@@ -284,7 +284,7 @@ END_MIG
  it "should sort topologically even if the database raises an error when trying to parse foreign keys for a non-existent table" do
  @d.meta_def(:tables){|o| [:t1, :t2]}
  @d.meta_def(:schema) do |t|
- t == :t1 ? [[:c2, {:db_type=>'integer'}]] : [[:c1, {:db_type=>'integer', :primary_key=>true}]]
+ t == :t1 ? [[:c2, {:db_type=>'integer'}]] : [[:c1, {:db_type=>'integer', :primary_key=>true, :auto_increment=>true}]]
  end
  @d.meta_def(:supports_foreign_key_parsing?){true}
  @d.meta_def(:foreign_key_list) do |t|
@@ -763,13 +763,13 @@ END_MIG
  end

  it "should use separate primary_key call with non autoincrementable types" do
- @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'varchar(8)', :primary_key=>true}]]}
+ @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'varchar(8)', :primary_key=>true, :auto_increment=>false}]]}
  @d.dump_table_schema(:t3).should == "create_table(:t3) do\n String :c1, :size=>8\n \n primary_key [:c1]\nend"
  @d.dump_table_schema(:t3, :same_db=>true).should == "create_table(:t3) do\n column :c1, \"varchar(8)\"\n \n primary_key [:c1]\nend"
  end

  it "should use explicit type for non integer foreign_key types" do
- @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'date', :primary_key=>true}]]}
+ @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'date', :primary_key=>true, :auto_increment=>false}]]}
  @d.meta_def(:supports_foreign_key_parsing?){true}
  @d.meta_def(:foreign_key_list){|t, *a| [{:columns=>[:c1], :table=>:t3, :key=>[:c1]}] if t == :t4}
  ["create_table(:t4) do\n foreign_key :c1, :t3, :type=>Date, :key=>[:c1]\n \n primary_key [:c1]\nend",
@@ -779,7 +779,7 @@ END_MIG
  end

  it "should correctly handing autoincrementing primary keys that are also foreign keys" do
- @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true}]]}
+ @d.meta_def(:schema){|*s| [[:c1, {:db_type=>'integer', :primary_key=>true, :auto_increment=>true}]]}
  @d.meta_def(:supports_foreign_key_parsing?){true}
  @d.meta_def(:foreign_key_list){|t, *a| [{:columns=>[:c1], :table=>:t3, :key=>[:c1]}] if t == :t4}
  ["create_table(:t4) do\n primary_key :c1, :table=>:t3, :key=>[:c1]\nend",
@@ -88,6 +88,15 @@ describe "Database schema parser" do
  DB.schema(:items).collect{|k,v| k if v[:primary_key]}.compact.should == [:number1, :number2]
  end

+ cspecify "should parse autoincrementing primary keys from the schema properly", :sqlite, :oracle, :fdbsql do
+ DB.create_table!(:items){Integer :number}
+ DB.schema(:items).collect{|k,v| k if v[:primary_key] && v[:auto_increment]}.compact.should == []
+ DB.create_table!(:items){primary_key :number}
+ DB.schema(:items).collect{|k,v| k if v[:primary_key] && v[:auto_increment]}.compact.should == [:number]
+ DB.create_table!(:items){Integer :number, :primary_key=>true}
+ DB.schema(:items).collect{|k,v| k if v[:primary_key] && v[:auto_increment]}.compact.should == []
+ end
+
  specify "should parse NULL/NOT NULL from the schema properly" do
  DB.create_table!(:items){Integer :number, :null=>true}
  DB.schema(:items).first.last[:allow_null].should == true
@@ -538,6 +538,21 @@ describe Sequel::Model, "#eager" do
  DB.sqls.should == []
  end

+ it "should respect :eager with cascaded hash when lazily loading an association" do
+ EagerBand.one_to_many :albums, :eager=>{:tracks=>:album}, :clone=>:albums
+ a = EagerBand.all
+ a.should == [EagerBand.load(:id=>2)]
+ DB.sqls.should == ['SELECT * FROM bands']
+ a = a.first.albums
+ DB.sqls.should == ['SELECT * FROM albums WHERE (albums.band_id = 2)',
+ 'SELECT * FROM tracks WHERE (tracks.album_id IN (1))',
+ 'SELECT * FROM albums WHERE (albums.id IN (1))']
+ a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+ a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id => 1)]
+ a.first.tracks.first.album.should == a.first
+ DB.sqls.should == []
+ end
+
  it "should cascade eagerly loading when the :eager_graph association option is used" do
  EagerAlbum.dataset._fetch = {:id=>1, :band_id=>2, :tracks_id=>3, :album_id=>1}
  a = EagerBand.eager(:graph_albums).all
metadata CHANGED
@@ -1,15 +1,71 @@
  --- !ruby/object:Gem::Specification
  name: sequel
  version: !ruby/object:Gem::Version
- version: 4.17.0
+ version: 4.18.0
  platform: ruby
  authors:
  - Jeremy Evans
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2014-12-01 00:00:00.000000000 Z
- dependencies: []
+ date: 2015-01-02 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+ name: rspec
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ - !ruby/object:Gem::Dependency
+ name: tzinfo
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ - !ruby/object:Gem::Dependency
+ name: activemodel
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ - !ruby/object:Gem::Dependency
+ name: nokogiri
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
  description: The Database Toolkit for Ruby
  email: code@jeremyevans.net
  executables:
@@ -134,6 +190,7 @@ extra_rdoc_files:
  - doc/release_notes/4.15.0.txt
  - doc/release_notes/4.16.0.txt
  - doc/release_notes/4.17.0.txt
+ - doc/release_notes/4.18.0.txt
  files:
  - CHANGELOG
  - MIT-LICENSE
@@ -238,6 +295,7 @@ files:
  - doc/release_notes/4.15.0.txt
  - doc/release_notes/4.16.0.txt
  - doc/release_notes/4.17.0.txt
+ - doc/release_notes/4.18.0.txt
  - doc/release_notes/4.2.0.txt
  - doc/release_notes/4.3.0.txt
  - doc/release_notes/4.4.0.txt
@@ -736,7 +794,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.4.4
+ rubygems_version: 2.4.5
  signing_key:
  specification_version: 4
  summary: The Database Toolkit for Ruby