sequel 5.6.0 → 5.7.0

Files changed (107)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG +30 -5099
  3. data/Rakefile +1 -1
  4. data/doc/opening_databases.rdoc +0 -2
  5. data/doc/postgresql.rdoc +31 -0
  6. data/doc/querying.rdoc +2 -2
  7. data/doc/release_notes/5.7.0.txt +108 -0
  8. data/doc/testing.rdoc +1 -0
  9. data/lib/sequel/adapters/jdbc/derby.rb +1 -1
  10. data/lib/sequel/adapters/jdbc/oracle.rb +11 -0
  11. data/lib/sequel/adapters/postgres.rb +1 -0
  12. data/lib/sequel/adapters/shared/postgres.rb +117 -13
  13. data/lib/sequel/connection_pool/sharded_threaded.rb +7 -6
  14. data/lib/sequel/connection_pool/threaded.rb +6 -6
  15. data/lib/sequel/core.rb +20 -0
  16. data/lib/sequel/database/logging.rb +3 -2
  17. data/lib/sequel/database/schema_generator.rb +1 -2
  18. data/lib/sequel/dataset/actions.rb +15 -5
  19. data/lib/sequel/extensions/connection_expiration.rb +3 -3
  20. data/lib/sequel/extensions/connection_validator.rb +3 -3
  21. data/lib/sequel/extensions/integer64.rb +30 -0
  22. data/lib/sequel/extensions/migration.rb +2 -3
  23. data/lib/sequel/plugins/pg_array_associations.rb +5 -3
  24. data/lib/sequel/plugins/validate_associated.rb +18 -0
  25. data/lib/sequel/version.rb +1 -1
  26. data/spec/adapters/mssql_spec.rb +6 -6
  27. data/spec/adapters/mysql_spec.rb +1 -1
  28. data/spec/adapters/oracle_spec.rb +15 -1
  29. data/spec/adapters/postgres_spec.rb +78 -1
  30. data/spec/adapters/spec_helper.rb +3 -1
  31. data/spec/bin_spec.rb +1 -0
  32. data/spec/core/dataset_spec.rb +10 -0
  33. data/spec/extensions/integer64_spec.rb +22 -0
  34. data/spec/extensions/pg_array_associations_spec.rb +14 -2
  35. data/spec/extensions/spec_helper.rb +1 -0
  36. data/spec/integration/associations_test.rb +4 -4
  37. data/spec/integration/dataset_test.rb +2 -0
  38. data/spec/integration/spec_helper.rb +5 -11
  39. data/spec/model/spec_helper.rb +1 -0
  40. metadata +35 -165
  41. data/doc/release_notes/1.0.txt +0 -38
  42. data/doc/release_notes/1.1.txt +0 -143
  43. data/doc/release_notes/1.3.txt +0 -101
  44. data/doc/release_notes/1.4.0.txt +0 -53
  45. data/doc/release_notes/1.5.0.txt +0 -155
  46. data/doc/release_notes/2.0.0.txt +0 -298
  47. data/doc/release_notes/2.1.0.txt +0 -271
  48. data/doc/release_notes/2.10.0.txt +0 -328
  49. data/doc/release_notes/2.11.0.txt +0 -215
  50. data/doc/release_notes/2.12.0.txt +0 -534
  51. data/doc/release_notes/2.2.0.txt +0 -253
  52. data/doc/release_notes/2.3.0.txt +0 -88
  53. data/doc/release_notes/2.4.0.txt +0 -106
  54. data/doc/release_notes/2.5.0.txt +0 -137
  55. data/doc/release_notes/2.6.0.txt +0 -157
  56. data/doc/release_notes/2.7.0.txt +0 -166
  57. data/doc/release_notes/2.8.0.txt +0 -171
  58. data/doc/release_notes/2.9.0.txt +0 -97
  59. data/doc/release_notes/3.0.0.txt +0 -221
  60. data/doc/release_notes/3.1.0.txt +0 -406
  61. data/doc/release_notes/3.10.0.txt +0 -286
  62. data/doc/release_notes/3.11.0.txt +0 -254
  63. data/doc/release_notes/3.12.0.txt +0 -304
  64. data/doc/release_notes/3.13.0.txt +0 -210
  65. data/doc/release_notes/3.14.0.txt +0 -118
  66. data/doc/release_notes/3.15.0.txt +0 -78
  67. data/doc/release_notes/3.16.0.txt +0 -45
  68. data/doc/release_notes/3.17.0.txt +0 -58
  69. data/doc/release_notes/3.18.0.txt +0 -120
  70. data/doc/release_notes/3.19.0.txt +0 -67
  71. data/doc/release_notes/3.2.0.txt +0 -268
  72. data/doc/release_notes/3.20.0.txt +0 -41
  73. data/doc/release_notes/3.21.0.txt +0 -87
  74. data/doc/release_notes/3.22.0.txt +0 -39
  75. data/doc/release_notes/3.23.0.txt +0 -172
  76. data/doc/release_notes/3.24.0.txt +0 -420
  77. data/doc/release_notes/3.25.0.txt +0 -88
  78. data/doc/release_notes/3.26.0.txt +0 -88
  79. data/doc/release_notes/3.27.0.txt +0 -82
  80. data/doc/release_notes/3.28.0.txt +0 -304
  81. data/doc/release_notes/3.29.0.txt +0 -459
  82. data/doc/release_notes/3.3.0.txt +0 -192
  83. data/doc/release_notes/3.30.0.txt +0 -135
  84. data/doc/release_notes/3.31.0.txt +0 -146
  85. data/doc/release_notes/3.32.0.txt +0 -202
  86. data/doc/release_notes/3.33.0.txt +0 -157
  87. data/doc/release_notes/3.34.0.txt +0 -671
  88. data/doc/release_notes/3.35.0.txt +0 -144
  89. data/doc/release_notes/3.36.0.txt +0 -245
  90. data/doc/release_notes/3.37.0.txt +0 -338
  91. data/doc/release_notes/3.38.0.txt +0 -234
  92. data/doc/release_notes/3.39.0.txt +0 -237
  93. data/doc/release_notes/3.4.0.txt +0 -325
  94. data/doc/release_notes/3.40.0.txt +0 -73
  95. data/doc/release_notes/3.41.0.txt +0 -155
  96. data/doc/release_notes/3.42.0.txt +0 -74
  97. data/doc/release_notes/3.43.0.txt +0 -105
  98. data/doc/release_notes/3.44.0.txt +0 -152
  99. data/doc/release_notes/3.45.0.txt +0 -179
  100. data/doc/release_notes/3.46.0.txt +0 -122
  101. data/doc/release_notes/3.47.0.txt +0 -270
  102. data/doc/release_notes/3.48.0.txt +0 -477
  103. data/doc/release_notes/3.5.0.txt +0 -510
  104. data/doc/release_notes/3.6.0.txt +0 -366
  105. data/doc/release_notes/3.7.0.txt +0 -179
  106. data/doc/release_notes/3.8.0.txt +0 -151
  107. data/doc/release_notes/3.9.0.txt +0 -233
data/doc/release_notes/3.24.0.txt
@@ -1,420 +0,0 @@
- = Prepared Statement Plugins
-
- * The prepared_statements plugin makes Sequel::Model classes use
-   prepared statements for creating, updating, and destroying model
-   instances, as well as looking up model objects by primary key.
-   With this plugin, all of the following will use prepared
-   statements:
-
-     Artist.plugin :prepared_statements
-     Artist.create(:name=>'Foo')
-     a = Artist[1]
-     a.update(:name=>'Bar')
-     a.destroy
-
- * The prepared_statements_safe plugin reduces the number of
-   prepared statements that can be created by doing two things. First,
-   it makes the INSERT statements used when creating instances use
-   as many columns as possible, setting specific values for all
-   columns with parseable default values. Second, it changes
-   save_changes to just use save, saving all columns instead of just
-   the changed ones.
-
-   The reason for this plugin is that Sequel's default behavior of
-   using only the values specifically set when creating instances
-   and having update only set changed columns by default can lead
-   to a large number of prepared statements being created.
-
-   For prepared statements to be used, each set of columns in the
-   insert and update statements needs to have its own prepared
-   statement. If you have a table with 1 primary key column and
-   4 other columns, you can have up to 2^4 = 16 prepared statements
-   created, one for each subset of the 4 columns. If you have 1
-   primary key column and 20 other columns, there are over a million
-   subsets, and you could hit your database limit for prepared
-   statements (a denial of service attack).
-
-   Using the prepared_statements_safe plugin mitigates this
-   issue by reducing the number of columns that may or may not be
-   present in the query, in many cases making sure that each model
-   will only have a single INSERT and a single UPDATE prepared
-   statement.
-
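As a hedged illustration of how this might look in practice (the Artist model and its columns are assumed here, not taken from the notes above):

    Artist.plugin :prepared_statements
    Artist.plugin :prepared_statements_safe

    # The INSERT lists every column with a parseable default, so most
    # creates can share a single prepared statement.
    Artist.create(:name=>'Foo')

    # save_changes acts like save, so updates also share one statement.
    a = Artist[1]
    a.name = 'Bar'
    a.save_changes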
- * The prepared_statements_associations plugin allows normal
-   association method calls to use prepared statements if possible.
-   For example:
-
-     Artist.plugin :prepared_statements_associations
-     Artist.one_to_many :albums
-     Artist[1].albums
-
-   Will use a prepared statement to return the albums for that artist.
-   This plugin works for all supported association types. There are
-   some associations (filtered and custom associations) for which Sequel
-   cannot currently use a prepared statement reliably; for those,
-   Sequel will use a regular query.
-
- * The prepared_statements_with_pk plugin allows the new
-   Dataset#with_pk method (explained below) to use prepared statements.
-   For example:
-
-     Artist.plugin :prepared_statements_with_pk
-     Artist.filter(...).with_pk(1)
-
-   Will use a prepared statement for this query. The most benefit
-   from prepared statements comes from queries that are expensive to
-   parse and plan but quick to execute, so using this plugin with
-   a complex filter can in certain cases yield significant performance
-   improvements.
-
-   However, this plugin should be considered unsafe as it is possible
-   that it will create an unbounded number of prepared statements. It
-   extracts parameters from the dataset using Dataset#unbind
-   (explained below), so if your code has conditions that vary per
-   query but that Dataset#unbind does not handle, an unbounded number
-   of prepared statements can be created. For example:
-
-     Artist.filter(:a=>params[:b].to_i).with_pk(1)
-     Artist.exclude{a > params[:b].to_i}.with_pk(1)
-
-   are safe, but:
-
-     Artist.filter(:a=>[1, params[:b].to_i]).with_pk(1)
-     Artist.exclude{a > params[:b].to_i + 2}.with_pk(1)
-
-   are not. For queries that are not safe, Dataset#with_pk should
-   not be used with this plugin; you should switch to looking up by
-   primary key manually (for a regular query):
-
-     Artist.filter(:a=>[1, params[:b].to_i])[:id=>1]
-
-   or using the prepared statement API to create a custom prepared
-   statement:
-
-     # PS = {}
-     PS[:name] ||= Artist.filter(:a=>[1, :$b], :id=>:$id).
-       prepare(:select, :name)
-     PS[:name].call(:b=>params[:b].to_i, :id=>1)
-
- = Other New Features
-
- * Filtering by associations got a lot more powerful. Sequel 3.23.0
-   introduced filtering by associations:
-
-     Album.filter(:artist=>artist)
-
-   This capability is much expanded in 3.24.0, allowing you to
-   exclude by associations:
-
-     Album.exclude(:artist=>artist)
-
-   This will match all albums not by that artist.
-
-   You can also filter or exclude by multiple associated objects:
-
-     Album.filter(:artist=>[artist1, artist2])
-     Album.exclude(:artist=>[artist1, artist2])
-
-   The filtered dataset will match all albums by either of those
-   two artists, and the excluded dataset will match all albums not
-   by either of those two artists.
-
-   You can also filter or exclude by using a model dataset:
-
-     Album.filter(:artist=>Artist.filter(:name.like('A%'))).all
-     Album.exclude(:artist=>Artist.filter(:name.like('A%'))).all
-
-   Here the filtered dataset will match all albums where the
-   associated artist has a name that begins with A, and the excluded
-   dataset will match all albums where the associated artist does not
-   have a name that begins with A.
-
-   All of these types of filtering and excluding work with all of the
-   association types that ship with Sequel, even the many_through_many
-   plugin.
-
- * Sequel now supports around hooks, which wrap the related before
-   hook, behavior, and after hook. Like other Sequel hooks, these
-   are implemented as instance methods. For example, if you wanted
-   to log DatabaseErrors raised during save:
-
-     class Artist < Sequel::Model
-       def around_save
-         super
-       rescue Sequel::DatabaseError => e
-         # log the error
-         raise
-       end
-     end
-
-   All around hooks should call super, not yield. If an around hook
-   doesn't call super or yield, it is treated as a hook failure,
-   similar to before hooks returning false.
-
-   For around_validation, the return value of super should be whether
-   the object is valid. For other around hooks, the return value of
-   super is currently true, but it's possible that will change in the
-   future.
-
- * Dataset#with_pk has been added to model datasets, allowing you
-   to find the object with the matching primary key:
-
-     Artist.filter(:name.like('A%')).with_pk(1)
-
-   This should make it easier to handle the common case where you want
-   to find a particular object that is associated with another object:
-
-     Artist[1].albums_dataset.with_pk(2)
-
-   Before, there was no way to do that without manually specifying
-   the primary key:
-
-     Artist[1].albums_dataset[:id=>2]
-
-   To use a composite primary key with with_pk, you have to provide
-   an array:
-
-     Artist[1].albums_dataset.with_pk([1, 2])
-
- * Dataset#[] for model datasets will now call with_pk if given a
-   single Integer argument. This makes the above case even easier:
-
-     Artist[1].albums_dataset[2]
-
-   Note that for backwards compatibility, this only works for
-   single integer primary keys. If you have a composite primary key
-   or a string/varchar primary key, you have to use with_pk.
-
- * Dataset#unbind has been added, which allows you to take a dataset
-   that uses static bound values and convert them to placeholders.
-   Currently, the only cases handled are SQL::ComplexExpression
-   objects that use a =, !=, <, >, <=, or >= operator where the first
-   argument is a Symbol, SQL::Identifier, or
-   SQL::QualifiedIdentifier, and the second argument is a Numeric,
-   String, Date, or Time. Dataset#unbind returns a two-element array,
-   where the first element is a modified copy of the receiver, and the
-   second element is a bound variable hash:
-
-     ds, bv = DB[:table].filter(:a=>1).unbind
-     ds # DB[:table].filter(:a=>:$a)
-     bv # {:a=>1}
-
-   The purpose of doing this is that you can then use prepare or call
-   on the returned dataset with the returned bound variables:
-
-     ds.call(:select, bv)
-     # SELECT * FROM table WHERE (a = ?); [1]
-
-     ps = ds.prepare(:select, :ps_name)
-     # PREPARE ps_name AS SELECT * FROM table WHERE (a = ?)
-     ps.call(bv)
-     # EXECUTE ps_name(1)
-
-   Basically, Dataset#unbind takes a specific statement and attempts
-   to turn it into a generic statement, along with the placeholder
-   values it extracted.
-
-   Unfortunately, Dataset#unbind cannot handle all cases. For
-   example:
-
-     DB[:table].filter{a + 1 > 10}.unbind
-
-   will not unbind any values. Also, if you have a query with
-   multiple different values for a variable, it will raise an
-   UnbindDuplicate exception:
-
-     DB[:table].filter(:a=>1).or(:a=>2).unbind
-
- * A defaults_setter plugin has been added that makes it easy to
-   automatically set default values when creating new objects. This
-   plugin makes Sequel::Model behave more like ActiveRecord in that
-   new model instances (before saving) will have default values
-   parsed from the database. Unlike ActiveRecord, only values with
-   non-NULL defaults are set. Also, Sequel allows you to easily
-   modify the default values used:
-
-     Album.plugin :defaults_setter
-     Album.new.values # {:copies_sold => 0}
-     Album.default_values[:copies_sold] = 42
-     Album.new.values # {:copies_sold => 42}
-
-   Before, this was commonly done in an after_initialize hook, but
-   that's slower as it is also called for model instances loaded from
-   the database.
-
- * A Database#views method has been added that returns an array
-   of symbols representing view names in the database. This works
-   just like Database#tables except it returns views.
-
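A quick sketch of the expected return values (the table and view names shown are hypothetical):

    DB.tables # [:artists, :albums]
    DB.views  # [:recent_albums, :best_sellers]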
- * A Sequel::ASTTransformer class was added that makes it easy to
-   write custom transformers of Sequel's internal abstract syntax
-   trees. Dataset#qualify now uses a subclass of ASTTransformer to do
-   its transformations, as does the new Dataset#unbind.
-
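A hedged sketch of how such a transformer might be written (it assumes, as the built-in subclasses do, that you override the private v method called for each node; the transformer itself is invented for illustration):

    # Qualifies every bare Symbol in an expression tree with the :t table.
    class QualifySymbols < Sequel::ASTTransformer
      private

      # v is called recursively for each node; return the replacement node.
      def v(o)
        o.is_a?(Symbol) ? Sequel::SQL::QualifiedIdentifier.new(:t, o) : super
      end
    end

    expr = Sequel::SQL::BooleanExpression.from_value_pairs(:a=>1)
    QualifySymbols.new.transform(expr) # expression tree for (t.a = 1)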
- = Other Improvements
-
- * Database#create_table? now uses a single query with IF NOT EXISTS
-   if the database supports such syntax. Previously, it issued a
-   SELECT query to determine table existence. Sequel currently
-   supports this syntax on MySQL, H2, and SQLite 3.3.0+.
-
-   The Database#supports_create_table_if_not_exists? method was added
-   to allow users to determine whether this syntax is supported.
-
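For example (table and column names are illustrative):

    DB.create_table?(:artists) do
      primary_key :id
      String :name
    end
    # CREATE TABLE IF NOT EXISTS artists ... (on MySQL, H2, SQLite 3.3.0+)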
- * Multiple column IN/NOT IN emulation now works correctly with
-   model datasets (or other datasets that use a row_proc).
-
- * You can now correctly invert SQL::Constant instances:
-
-     Sequel::NULL  # NULL
-     ~Sequel::NULL # NOT NULL
-     Sequel::TRUE  # TRUE
-     ~Sequel::TRUE # FALSE
-
- * A bug in the association_pks plugin has been fixed in the case
-   where the associated table had a different primary key column name
-   than the current table.
-
- * The emulated prepared statement support now supports nil and false
-   as bound values.
-
- * The to_dot extension was refactored for greater readability. The
-   only change was a small fix in the display for SQL::Subscript
-   instances.
-
- * The Dataset#supports_insert_select? method is now available to let
-   you know if the dataset supports insert_select. You should use
-   this method instead of respond_to? for checking for insert_select
-   support.
-
- * Prepared statements/bound variables can now use a new :insert_select
-   type for preparing a statement that will insert a row and return
-   the row inserted, if the dataset supports insert_select.
-
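A possible use, assuming a database that supports insert_select (the dataset and statement names are illustrative):

    if DB[:albums].supports_insert_select?
      ps = DB[:albums].prepare(:insert_select, :new_album, :name=>:$name)
      ps.call(:name=>'Foo') # returns the inserted row as a hash
    end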
- * The Model#initialize_set private method now exists for easier plugin
-   writing. It is only called for new model objects, with the hash
-   given to initialize. By default, it just calls set.
-
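A hedged sketch of overriding it in a model (the default value filled in here is invented for illustration):

    class Album < Sequel::Model
      private

      # Only called for new objects, with the hash passed to Album.new.
      def initialize_set(values)
        super({:status=>'draft'}.merge(values))
      end
    end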
- * A small bug when creating anonymous subclasses of Sequel::Model on
-   ruby 1.9 has been fixed.
-
- * If Thread#kill is used inside a transaction on ruby 1.8 or
-   rubinius, the transaction is rolled back. This situation is not
-   handled correctly on JRuby or ruby 1.9, and I'm not sure it's
-   possible to handle correctly on those implementations.
-
- * The postgres adapter now supports the
-   Sequel::Postgres::PG_NAMED_TYPES hash for associating conversion
-   procs for custom types that don't necessarily have the same type
-   oid on different databases. This hash uses symbol keys and
-   proc values:
-
-     Sequel::Postgres::PG_NAMED_TYPES[:interval] = proc{|v| ...}
-
-   The conversion procs now use a separate hash per Database object
-   instead of a hash shared across all Database objects. You
-   can now modify the types for a particular Database object, but
-   you have to use the type oid:
-
-     DB.conversion_procs[42] = proc{|v| ...}
-
- * On SQLite and MSSQL, literalization of true and false values given
-   directly to Dataset#filter has been fixed. So the following now
-   works correctly on those databases:
-
-     DB[:table].filter(true)
-     DB[:table].filter(false)
-
-   Unfortunately, because SQLite and MSSQL don't have a real boolean
-   type, these will not work:
-
-     DB[:table].filter{a & true}
-     DB[:table].filter{a & false}
-
-   You currently have to work around the issue by doing:
-
-     DB[:table].filter{a & Sequel::TRUE}
-     DB[:table].filter{a & Sequel::FALSE}
-
-   It is possible that a future version of Sequel will remove the need
-   for this workaround, but that requires having a separate
-   literalization method specific to filters.
-
- * The MySQL bit type is no longer treated as a boolean. On MySQL, the
-   bit type is a bitfield, which is very different from the MSSQL bit
-   type, which is the closest thing to a boolean on MSSQL.
-
- * The bool database type is now recognized as a boolean. Some SQLite
-   databases use bool, such as the ones used in Firefox.
-
- * SQL_AUTO_IS_NULL=0 is now set by default when connecting to MySQL
-   using the swift or jdbc adapters. Previously, it was only set by
-   default when using the mysql or mysql2 adapters.
-
- * Dataset#limit now works correctly on Access, using the TOP syntax.
-
- * Dataset#limit now works correctly on DB2, using the FETCH FIRST
-   syntax.
-
- * The jdbc mssql subadapter was split into separate subadapters for
-   sqlserver (using Microsoft's driver) and jtds (using the open
-   source JTDS driver).
-
- * The jdbc jtds subadapter now supports converting Java CLOB
-   objects to ruby strings.
-
- * Tables from the INFORMATION_SCHEMA are now ignored when parsing
-   schema on JDBC.
-
- * The informix adapter has been split into shared/specific parts, and
-   a jdbc informix subadapter has been added.
-
- * Dataset#insert_select now works correctly on MSSQL when the core
-   extensions are disabled.
-
- * The sqlite adapter now logs when preparing a statement.
-
- * You no longer need to be a PostgreSQL superuser to run the postgres
-   adapter specs.
-
- * The connection pool specs are now about 10 times faster and not
-   subject to race conditions due to using Queues instead of
-   sleeping.
-
- = Backwards Compatibility
-
- * Model#save no longer calls Model#valid?. It now calls the
-   Model#_valid? private method that Model#valid? also calls. To mark
-   a model instance invalid, you should override the Model#validate
-   method and add validation errors to the object.
-
- * The BeforeHookFailure exception class has been renamed to
-   HookFailure since hook failures can now be raised by around hooks
-   that don't call super. BeforeHookFailure is now an alias to
-   HookFailure, so no code should break, but you should update your
-   code to reflect the new name.
-
- * Any custom argument mappers used for prepared statements now need
-   to implement the prepared_arg? private instance method and have it
-   return true.
-
- * If your database uses bit as a boolean type and isn't MSSQL, it's
-   possible that those columns will no longer be treated as booleans.
-   Please report such an issue on the bugtracker.
-
- * It is possible that the filtering and excluding by association
-   datasets will break backwards compatibility in some apps. This can
-   only occur if you are using a symbol with the same name as an
-   association with a model dataset whose model is the same as the
-   associated class. As associations almost never have the same names
-   as columns, this would require either aliasing or joining to
-   another table. If for some reason this does break your app, you
-   can work around it by changing the symbol to an SQL::Identifier or
-   a literal string.
-
- * The Sequel::Postgres.use_iso_date_format= method now only affects
-   future Database objects.
-
- * On MySQL, Database#tables no longer returns view names; it only
-   returns table names. You have to use Database#views to get view
-   names now.
data/doc/release_notes/3.25.0.txt
@@ -1,88 +0,0 @@
- = New Features
-
- * drop_table, drop_view, drop_column, and drop_constraint all now
-   support a :cascade option for using CASCADE.
-
-     DB.drop_table(:tab, :cascade=>true)
-     # DROP TABLE tab CASCADE
-
-     DB.drop_column(:tab, :col, :cascade=>true)
-     # ALTER TABLE tab DROP COLUMN col CASCADE
-
-   A few databases support CASCADE for dropping tables and views,
-   but only PostgreSQL appears to support it for columns and
-   constraints. Using the :cascade option when the underlying
-   database doesn't support it will probably result in a
-   DatabaseError being raised.
-
- * You can now use datasets as expressions, allowing things such as:
-
-     DB[:table1].select(:column1) > DB[:table2].select(:column2)
-     # (SELECT column1 FROM table1) > (SELECT column2 FROM table2)
-
-     DB[:table1].select(:column1).cast(Integer)
-     # CAST((SELECT column1 FROM table1) AS integer)
-
- * Dataset#select_group has been added for grouping and selecting on
-   the same columns.
-
-     DB[:a].select_group(:b, :c)
-     # SELECT b, c FROM a GROUP BY b, c
-
- * Dataset#exclude_where and #exclude_having methods have been added,
-   allowing you to specify which clause to affect. #exclude's
-   behavior is still to add to the HAVING clause if one is present,
-   and use the WHERE clause otherwise.
-
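For example, following the style of the other examples (the SQL shown is what one would expect on a standard dialect):

    DB[:a].group(:b).having{c > 1}.exclude_where(:d=>2)
    # SELECT * FROM a WHERE (d != 2) GROUP BY b HAVING (c > 1)

    DB[:a].group(:b).having{c > 1}.exclude_having(:d=>2)
    # SELECT * FROM a GROUP BY b HAVING ((c > 1) AND (d != 2))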
- * Dataset#select_all now accepts optional arguments and will select
-   all columns from those arguments if present:
-
-     DB[:a].select_all(:a)
-     # SELECT a.* FROM a
-
-     DB.from(:a, :b).select_all(:a, :b)
-     # SELECT a.*, b.* FROM a, b
-
- * Dataset#group and #group_and_count now both accept virtual row
-   blocks:
-
-     DB[:a].select(:b).group{c(d)}
-     # SELECT b FROM a GROUP BY c(d)
-
- * If you use a LiteralString as a validation error message,
-   Errors#full_messages will no longer add the related column name to
-   the start of the error message.
-
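For example, in a model's validate method (the column and messages are hypothetical):

    def validate
      super
      errors.add(:name, 'is too short')
      # full message: "name is too short"
      errors.add(:name, Sequel::LiteralString.new('A longer name is required'))
      # full message: "A longer name is required"
    end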
- * Model.set_dataset now accepts SQL::Identifier,
-   SQL::QualifiedIdentifier, and SQL::AliasedExpression instances,
-   treating them like Symbols.
-
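For example (the schema-qualified table is hypothetical):

    Album.set_dataset(Sequel::SQL::QualifiedIdentifier.new(:music, :albums))
    # the model now uses the music.albums table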
- = Other Improvements
-
- * The association_pks plugin's setter method will now automatically
-   convert a given array of strings to an array of integers if the
-   primary key field is an integer field, which should make it easier
-   to use in web applications.
-
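For example, assuming a many_to_many association between Album and Tag with integer primary keys (the names are illustrative):

    Album.plugin :association_pks
    Album.many_to_many :tags

    album.tag_pks = ['1', '2', '3'] # values straight from a web form
    # treated as [1, 2, 3] because the tags primary key is an integer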
- * nil bound variable, prepared statement, and stored procedure
-   arguments are now handled correctly in the JDBC adapter.
-
- * On ruby 1.9, you can now load plugins even when ::ClassMethods,
-   ::InstanceMethods, or ::DatasetMethods is defined.
-
- = Backwards Compatibility
-
- * The tinytds adapter now only works with tiny_tds 0.4.5 and greater.
-   Also, if you were using the tinytds adapter with FreeTDS 0.91rc1,
-   you need to upgrade to FreeTDS 0.91rc2 for it to work. In addition,
-   if you were referencing an entry in the freetds.conf file, you now
-   need to specify it directly using the :dataserver option when
-   connecting; the adapter no longer copies the :host option to the
-   :dataserver option.
-
- * On PostgreSQL, Sequel no longer drops tables with CASCADE by
-   default. You now have to use the :cascade option to drop_table if
-   you want to use CASCADE.
-
- * The Database#drop_table_sql private method now takes an additional
-   options hash argument.