sequel 3.24.1 → 3.25.0

data/CHANGELOG CHANGED
@@ -1,3 +1,31 @@
1
+ === 3.25.0 (2011-07-01)
2
+
3
+ * Work with tiny_tds-0.4.5 in the tinytds adapter, older versions are no longer supported (jeremyevans)
4
+
5
+ * Make association_pks plugin typecast provided values to integer if the primary key column type is integer (jeremyevans)
6
+
7
+ * Model.set_dataset now accepts Identifier, QualifiedIdentifier, and AliasedExpression arguments (jeremyevans)
8
+
9
+ * Fix handling of nil values in bound variables and prepared statement and stored procedure arguments in the jdbc adapter (jeremyevans, wei)
10
+
11
+ * Allow treating Datasets as Expressions, e.g. DB[:table1].select(:column1) > DB[:table2].select(:column2) (jeremyevans)
12
+
13
+ * No longer use CASCADE by default when dropping tables on PostgreSQL (jeremyevans)
14
+
15
+ * Support :cascade option to #drop_table, #drop_view, #drop_column, and #drop_constraint for using CASCADE (jeremyevans)
16
+
17
+ * If validation error messages are LiteralStrings, don't add the column name to them in Errors#full_messages (jeremyevans)
18
+
19
+ * Fix bug loading plugins on 1.9 where ::ClassMethods, ::InstanceMethods, or ::DatasetMethods is defined (jeremyevans)
20
+
21
+ * Add Dataset#exclude_where and Dataset#exclude_having methods, so you can force use of having or where clause (jeremyevans)
22
+
23
+ * Allow Dataset#select_all to take table name arguments and select all columns from each given table (jeremyevans)
24
+
25
+ * Add Dataset#select_group method, for selecting and grouping on the same columns (jeremyevans)
26
+
27
+ * Allow Dataset#group and Dataset#group_and_count to accept a virtual row block (jeremyevans)
28
+
1
29
  === 3.24.1 (2011-06-03)
2
30
 
3
31
  * Ignore index creation errors if using create_table? with the IF NOT EXISTS syntax (jeremyevans) (#362)
@@ -247,7 +247,7 @@ example is a tree structure:
247
247
 
248
248
  class Node
249
249
  many_to_one :parent, :class=>self
250
- one_to_many :children :key=>:parent_id, :class=>self
250
+ one_to_many :children, :key=>:parent_id, :class=>self
251
251
  end
252
252
 
253
253
  For many_to_many self_referential associations, it's fairly similar. Here's
@@ -381,7 +381,7 @@ method, you have to pass a proc as an argument:
381
381
  == Filtering By Associations
382
382
 
383
383
  In addition to using the association method to get associated objects, you
384
- can also use associated objects in filters. For example, while to get
384
+ can also use associated objects in filters. For example, to get
385
385
  all albums for a given artist, you would usually do:
386
386
 
387
387
  @artist.albums
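As a rough illustration of the filtering this section describes (assuming an Album model with a many_to_one :artist association), the associated object can be passed directly to filter:

  Album.filter(:artist => @artist)
  # SELECT * FROM albums WHERE (artist_id = 1)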
@@ -479,7 +479,7 @@ would be bad association names.
479
479
 
480
480
  == Database Schema
481
481
 
482
- Creating an association, doesn't modify the database schema. Sequel
482
+ Creating an association doesn't modify the database schema. Sequel
483
483
  assumes your associations reflect the existing database schema. If not,
484
484
  you should modify your schema before creating the associations.
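For example, a one_to_many :albums association on Artist assumes the albums table already has an artist_id column, so a schema along these lines (a sketch, table and column names assumed) would need to exist before the association is defined:

  DB.create_table(:albums) do
    primary_key :id
    foreign_key :artist_id, :artists
    String :name
  end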
485
485
 
@@ -74,12 +74,12 @@ Most Dataset methods that users will use can be broken down into two types:
74
74
 
75
75
  Most dataset methods fall into this category, which can be further broken down by the clause they affect:
76
76
 
77
- SELECT:: select, select_all, select_append, select_more
77
+ SELECT:: select, select_all, select_append, select_group, select_more
78
78
  FROM:: from, from_self
79
79
  JOIN:: join, left_join, right_join, full_join, natural_join, natural_left_join, natural_right_join, natural_full_join, cross_join, inner_join, left_outer_join, right_outer_join, full_outer_join, join_table
80
- WHERE:: where, filter, exclude, and, or, grep, invert, unfiltered
81
- GROUP:: group, group_by, group_and_count, ungrouped
82
- HAVING:: having, filter, exclude, and, or, grep, invert, unfiltered
80
+ WHERE:: where, filter, exclude, exclude_where, and, or, grep, invert, unfiltered
81
+ GROUP:: group, group_by, group_and_count, select_group, ungrouped
82
+ HAVING:: having, filter, exclude, exclude_having, and, or, grep, invert, unfiltered
83
83
  ORDER:: order, order_by, order_append, order_prepend, order_more, reverse, reverse_order, unordered
84
84
  LIMIT:: limit, unlimited
85
85
  compounds:: union, intersect, except
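For instance, the newly listed select_group and exclude_having methods target the GROUP and HAVING clauses directly (a sketch using an assumed items table):

  DB[:items].select_group(:category).exclude_having{count(id) < 10}
  # SELECT category FROM items GROUP BY category HAVING (count(id) >= 10)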
data/doc/migration.rdoc CHANGED
@@ -117,7 +117,8 @@ This looks a little weird, but you need to be aware that inside an up or +down+
117
117
  self always refers to the <tt>Sequel::Database</tt> object that the migration is being applied to.
118
118
  Since <tt>Database#[]</tt> creates datasets, using <tt>self[:artists]</tt> inside the +up+ block creates
119
119
  a dataset on the database representing all columns in the +artists+ table, and updates it to set the
120
- +location+ column to <tt>'Sacramento'</tt>.
120
+ +location+ column to <tt>'Sacramento'</tt>. You should avoid referencing the <tt>Sequel::Database</tt>
121
+ object directly in your migration, and always use self to reference it; otherwise you may run into problems.
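The kind of migration being described here looks roughly like this (a sketch, not the exact file contents):

  Sequel.migration do
    up do
      self[:artists].update(:location => 'Sacramento')
    end
    down do
      self[:artists].update(:location => nil)
    end
  end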
121
122
 
122
123
  It is possible to use model classes inside migrations, as long as they are loaded into the ruby interpreter,
123
124
  but it's a bad habit as changes to your model classes can then break old migrations, and this breakage is
@@ -897,8 +898,8 @@ artist per album to multiple artists per album:
897
898
  # Insert one row in the albums_artists table
898
899
  # for each row in the albums table where there
899
900
  # is an associated artist
900
- DB[:albums_artists].insert([:album_id, :artist_id],
901
- DB[:albums].select(:id, :artist_id).exclude(:artist_id=>nil))
901
+ self[:albums_artists].insert([:album_id, :artist_id],
902
+ self[:albums].select(:id, :artist_id).exclude(:artist_id=>nil))
902
903
 
903
904
  # Drop the now unnecessary column from the albums table
904
905
  drop_column :albums, :artist_id
@@ -910,12 +911,12 @@ artist per album to multiple artists per album:
910
911
  # If possible, associate each album with one of the artists
911
912
  # it was associated with. This loses information, but
912
913
  # there's no way around that.
913
- DB[:albums_artists].
914
+ self[:albums_artists].
914
915
  group(:album_id).
915
916
  select{[album_id, max(artist_id).as(artist_id)]}.
916
917
  having{artist_id > 0}.
917
918
  all do |r|
918
- DB[:artists].
919
+ self[:artists].
919
920
  filter(:id=>r[:album_id]).
920
921
  update(:artist_id=>r[:artist_id])
921
922
  end
@@ -355,14 +355,17 @@ Examples:
355
355
 
356
356
  Because the underscore is not a valid character in a URI schema, the adapter
357
357
  is named tinytds instead of tiny_tds. The connection options are passed directly
358
- to tiny_tds, except that the tiny_tds :dataserver and :username options are set to
359
- the Sequel :host and :user options. The :host option should be an entry in the
360
- freetds.conf file, it's not currently possible to a host not present in the
361
- freetds.conf file. Some options that you may want to set are
362
- :login_timeout, :timeout, :appname, and :encoding, see the tiny_tds README for details.
358
+ to tiny_tds, except that the tiny_tds :username option is set to
359
+ the Sequel :user option. If you want to use an entry in the freetds.conf file, you
360
+ should specify the :dataserver option with that name as the value. Some other
361
+ options that you may want to set are :login_timeout, :timeout, :tds_version, :azure,
362
+ :appname, and :encoding, see the tiny_tds README for details.
363
+
363
364
  For highest performance, you should disable any identifier output method when
364
365
  using the tinytds adapter, which probably means disabling any identifier input method
365
366
  as well. The default for Microsoft SQL Server is to :downcase identifiers on output
366
367
  and :upcase them on input, so the highest performance will require changing the setting
367
368
  from the default.
368
369
 
370
+ The Sequel tinytds adapter requires tiny_tds >= 0.4.5, and if you are using FreeTDS
371
+ 0.91, you must at least be using 0.91rc2 (0.91rc1 does not work).
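Under the revised option handling, a connection might look roughly like this (host, credentials, and dataserver name are placeholders):

  DB = Sequel.connect(:adapter => 'tinytds', :host => 'mssqlhost',
                      :user => 'sa', :password => 'secret', :database => 'mydb')

  # To use an entry from freetds.conf, pass it via :dataserver instead of :host:
  DB = Sequel.connect(:adapter => 'tinytds', :dataserver => 'myfreetdsentry',
                      :user => 'sa', :password => 'secret', :database => 'mydb')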
@@ -0,0 +1,88 @@
1
+ = New Features
2
+
3
+ * drop_table, drop_view, drop_column, and drop_constraint all now
4
+ support a :cascade option for using CASCADE.
5
+
6
+ DB.drop_table(:tab, :cascade=>true)
7
+ # DROP TABLE tab CASCADE
8
+
9
+ DB.drop_column(:tab, :col, :cascade=>true)
10
+ # ALTER TABLE tab DROP COLUMN col CASCADE
11
+
12
+ A few databases support CASCADE for dropping tables and views,
13
+ but only PostgreSQL appears to support it for columns and
14
+ constraints. Using the :cascade option when the underlying
15
+ database doesn't support it will probably result in a
16
+ DatabaseError being raised.
17
+
18
+ * You can now use datasets as expressions, allowing things such as:
19
+
20
+ DB[:table1].select(:column1) > DB[:table2].select(:column2)
21
+ # (SELECT column1 FROM table1) > (SELECT column2 FROM table2)
22
+
23
+ DB[:table1].select(:column1).cast(Integer)
24
+ # CAST((SELECT column1 FROM table1) AS integer)
25
+
26
+ * Dataset#select_group has been added for grouping and selecting on
27
+ the same columns.
28
+
29
+ DB[:a].select_group(:b, :c)
30
+ # SELECT b, c FROM a GROUP BY b, c
31
+
32
+ * Dataset#exclude_where and #exclude_having methods have been added,
33
+ allowing you to specify which clause to affect. #exclude's
34
+ behavior is still to add to the HAVING clause if one is present,
35
+ and use the WHERE clause otherwise.
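For example, with an assumed items table (mirroring the method documentation added later in this diff):

  DB[:items].select_group(:name).
    exclude_having{count(name) < 2}.
    exclude_where(:category => 'software')
  # SELECT name FROM items WHERE (category != 'software')
  # GROUP BY name HAVING (count(name) >= 2)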
36
+
37
+ * Dataset#select_all now accepts optional arguments and will select
38
+ all columns from those arguments if present:
39
+
40
+ DB[:a].select_all(:a)
41
+ # SELECT a.* FROM a
42
+
43
+ DB.from(:a, :b).select_all(:a, :b)
44
+ # SELECT a.*, b.* FROM a, b
45
+
46
+ * Dataset#group and #group_and_count now both accept virtual row
47
+ blocks:
48
+
49
+ DB[:a].select(:b).group{c(d)}
50
+ # SELECT b FROM a GROUP BY c(d)
51
+
52
+ * If you use a LiteralString as a validation error message,
53
+ Errors#full_messages will now not add the related column name to
54
+ the start of the error message.
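A minimal sketch, assuming a model with a name column:

  class Album < Sequel::Model
    def validate
      super
      errors.add(:name, Sequel::LiteralString.new('Name must be present')) if name.nil?
    end
  end
  # Errors#full_messages would then include "Name must be present"
  # rather than "name Name must be present"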
55
+
56
+ * Model.set_dataset now accepts SQL::Identifier,
57
+ SQL::QualifiedIdentifier, and SQL::AliasedExpression instances,
58
+ treating them like Symbols.
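For example (class and table names assumed):

  class Album < Sequel::Model; end
  Album.set_dataset(Sequel::SQL::Identifier.new(:albums))
  Album.set_dataset(Sequel::SQL::QualifiedIdentifier.new(:music, :albums))  # music.albums
  Album.set_dataset(Sequel::SQL::AliasedExpression.new(:albums, :a))        # albums AS a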
59
+
60
+ = Other Improvements
61
+
62
+ * The association_pks plugin's setter method will now automatically
63
+ convert a given array of strings to an array of integers if the
64
+ primary key field is an integer field, which should make it easier
65
+ to use in web applications.
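A sketch, assuming an Album model with a many_to_many :artists association and an integer primary key on artists:

  class Album < Sequel::Model
    plugin :association_pks
    many_to_many :artists
  end
  album = Album[1]
  album.artist_pks = ["1", "2", "3"]  # strings from form params are typecast to [1, 2, 3]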
66
+
67
+ * nil bound variable, prepared statement, and stored procedure
68
+ arguments are now handled correctly in the JDBC adapter.
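For instance, a prepared insert with a nil argument (table and column names assumed):

  ps = DB[:items].prepare(:insert, :insert_item, :name => :$n, :price => :$p)
  ps.call(:n => 'Widget', :p => nil)  # nil is now bound correctly by the jdbc adapter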
69
+
70
+ * On 1.9, you can now load plugins even when ::ClassMethods,
71
+ ::InstanceMethods, or ::DatasetMethods is defined.
72
+
73
+ = Backwards Compatibility
74
+
75
+ * The tinytds adapter now only works with tiny_tds 0.4.5 and greater.
76
+ Also, if you were using the tinytds adapter with FreeTDS 0.91rc1,
77
+ you need to upgrade to FreeTDS 0.91rc2 for it to work. Also, if
78
+ you were referencing an entry in the freetds.conf file, you now
79
+ need to specify it directly using the :dataserver option when
80
+ connecting, the adapter no longer copies the :host option to the
81
+ :dataserver option.
82
+
83
+ * On postgresql, Sequel now no longer drops tables with CASCADE by
84
+ default. You now have to use the :cascade option to drop_table if
85
+ you want to use CASCADE.
86
+
87
+ * The Database#drop_table_sql private method now takes an additional
88
+ options hash argument.
@@ -422,8 +422,8 @@ module Sequel
422
422
  cps.setDouble(i, arg)
423
423
  when TrueClass, FalseClass
424
424
  cps.setBoolean(i, arg)
425
- when nil
426
- cps.setNull(i, JavaSQL::Types::NULL)
425
+ when NilClass
426
+ cps.setString(i, nil)
427
427
  when DateTime
428
428
  cps.setTimestamp(i, java_sql_datetime(arg))
429
429
  when Date
@@ -60,6 +60,12 @@ module Sequel
60
60
 
61
61
  private
62
62
 
63
+ # Use setNull for nil arguments as the default behavior of setString
64
+ # with nil doesn't appear to work correctly on PostgreSQL.
65
+ def set_ps_arg(cps, arg, i)
66
+ arg.nil? ? cps.setNull(i, JavaSQL::Types::NULL) : super
67
+ end
68
+
63
69
  # Extend the adapter with the JDBC PostgreSQL AdapterMethods
64
70
  def setup_connection(conn)
65
71
  conn = super(conn)
@@ -460,11 +460,6 @@ module Sequel
460
460
  "DROP LANGUAGE#{' IF EXISTS' if opts[:if_exists]} #{name}#{' CASCADE' if opts[:cascade]}"
461
461
  end
462
462
 
463
- # Always CASCADE the table drop
464
- def drop_table_sql(name)
465
- "DROP TABLE #{quote_schema_table(name)} CASCADE"
466
- end
467
-
468
463
  # SQL for dropping a trigger from the database.
469
464
  def drop_trigger_sql(table, name, opts={})
470
465
  "DROP TRIGGER#{' IF EXISTS' if opts[:if_exists]} #{name} ON #{quote_schema_table(table)}#{' CASCADE' if opts[:cascade]}"
@@ -11,7 +11,6 @@ module Sequel
11
11
  # :dataserver and :username options.
12
12
  def connect(server)
13
13
  opts = server_opts(server)
14
- opts[:dataserver] = opts[:host]
15
14
  opts[:username] = opts[:user]
16
15
  set_mssql_unicode_strings
17
16
  TinyTds::Client.new(opts)
@@ -39,7 +38,7 @@ module Sequel
39
38
  end
40
39
  yield(r) if block_given?
41
40
  rescue TinyTds::Error => e
42
- raise_error(e, :disconnect=>(c.closed? || (c.respond_to?(:dead?) && c.dead?)))
41
+ raise_error(e, :disconnect=>!c.active?)
43
42
  ensure
44
43
  r.cancel if r && c.sqlsent?
45
44
  end
@@ -329,15 +329,17 @@ module Sequel
329
329
  # Remove a column from the DDL for the table.
330
330
  #
331
331
  # drop_column(:artist_id) # DROP COLUMN artist_id
332
- def drop_column(name)
333
- @operations << {:op => :drop_column, :name => name}
332
+ # drop_column(:artist_id, :cascade=>true) # DROP COLUMN artist_id CASCADE
333
+ def drop_column(name, opts={})
334
+ @operations << {:op => :drop_column, :name => name}.merge(opts)
334
335
  end
335
336
 
336
337
  # Remove a constraint from the DDL for the table.
337
338
  #
338
339
  # drop_constraint(:unique_name) # DROP CONSTRAINT unique_name
339
- def drop_constraint(name)
340
- @operations << {:op => :drop_constraint, :name => name}
340
+ # drop_constraint(:unique_name, :cascade=>true) # DROP CONSTRAINT unique_name CASCADE
341
+ def drop_constraint(name, opts={})
342
+ @operations << {:op => :drop_constraint, :name => name}.merge(opts)
341
343
  end
342
344
 
343
345
  # Remove an index from the DDL for the table.
@@ -159,10 +159,13 @@ module Sequel
159
159
 
160
160
  # Drops one or more tables corresponding to the given names:
161
161
  #
162
+ # DB.drop_table(:posts)
162
163
  # DB.drop_table(:posts, :comments)
164
+ # DB.drop_table(:posts, :comments, :cascade=>true)
163
165
  def drop_table(*names)
166
+ options = names.last.is_a?(Hash) ? names.pop : {}
164
167
  names.each do |n|
165
- execute_ddl(drop_table_sql(n))
168
+ execute_ddl(drop_table_sql(n, options))
166
169
  remove_cached_schema(n)
167
170
  end
168
171
  nil
@@ -171,9 +174,12 @@ module Sequel
171
174
  # Drops one or more views corresponding to the given names:
172
175
  #
173
176
  # DB.drop_view(:cheap_items)
177
+ # DB.drop_view(:cheap_items, :pricey_items)
178
+ # DB.drop_view(:cheap_items, :pricey_items, :cascade=>true)
174
179
  def drop_view(*names)
180
+ options = names.last.is_a?(Hash) ? names.pop : {}
175
181
  names.each do |n|
176
- execute_ddl("DROP VIEW #{quote_schema_table(n)}")
182
+ execute_ddl(drop_view_sql(n, options))
177
183
  remove_cached_schema(n)
178
184
  end
179
185
  nil
@@ -228,7 +234,7 @@ module Sequel
228
234
  when :add_column
229
235
  "ADD COLUMN #{column_definition_sql(op)}"
230
236
  when :drop_column
231
- "DROP COLUMN #{quoted_name}"
237
+ "DROP COLUMN #{quoted_name}#{' CASCADE' if op[:cascade]}"
232
238
  when :rename_column
233
239
  "RENAME COLUMN #{quoted_name} TO #{quote_identifier(op[:new_name])}"
234
240
  when :set_column_type
@@ -244,7 +250,7 @@ module Sequel
244
250
  when :add_constraint
245
251
  "ADD #{constraint_definition_sql(op)}"
246
252
  when :drop_constraint
247
- "DROP CONSTRAINT #{quoted_name}"
253
+ "DROP CONSTRAINT #{quoted_name}#{' CASCADE' if op[:cascade]}"
248
254
  else
249
255
  raise Error, "Unsupported ALTER TABLE operation"
250
256
  end
@@ -391,10 +397,15 @@ module Sequel
391
397
  end
392
398
 
393
399
  # SQL DDL statement to drop the table with the given name.
394
- def drop_table_sql(name)
395
- "DROP TABLE #{quote_schema_table(name)}"
400
+ def drop_table_sql(name, options)
401
+ "DROP TABLE #{quote_schema_table(name)}#{' CASCADE' if options[:cascade]}"
396
402
  end
397
403
 
404
+ # SQL DDL statement to drop a view with the given name.
405
+ def drop_view_sql(name, options)
406
+ "DROP VIEW #{quote_schema_table(name)}#{' CASCADE' if options[:cascade]}"
407
+ end
408
+
398
409
  # Proxy the filter_expr call to the dataset, used for creating constraints.
399
410
  def filter_expr(*args, &block)
400
411
  schema_utility_dataset.literal(schema_utility_dataset.send(:filter_expr, *args, &block))
@@ -26,6 +26,14 @@ module Sequel
26
26
  extend Metaprogramming
27
27
  include Metaprogramming
28
28
  include Enumerable
29
+ include SQL::AliasMethods
30
+ include SQL::BooleanMethods
31
+ include SQL::CastMethods
32
+ include SQL::ComplexExpressionMethods
33
+ include SQL::InequalityMethods
34
+ include SQL::NumericMethods
35
+ include SQL::OrderMethods
36
+ include SQL::StringMethods
29
37
  end
30
38
 
31
39
  require(%w"query actions features graph prepared_statements misc mutation sql", 'dataset')
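With these SQL expression modules included, a dataset can itself be used as an expression, as the release notes above describe; for example:

  DB[:table1].select(:column1) > DB[:table2].select(:column2)
  # (SELECT column1 FROM table1) > (SELECT column2 FROM table2)

  DB[:table1].select(:column1).cast(Integer)
  # CAST((SELECT column1 FROM table1) AS integer)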
@@ -45,15 +45,6 @@ module Sequel
45
45
  self == o
46
46
  end
47
47
 
48
- # Return the dataset as an aliased expression with the given alias. You can
49
- # use this as a FROM or JOIN dataset, or as a column if this dataset
50
- # returns a single row and column.
51
- #
52
- # DB.from(DB[:table].as(:b)) # SELECT * FROM (SELECT * FROM table) AS b
53
- def as(aliaz)
54
- ::Sequel::SQL::AliasedExpression.new(self, aliaz)
55
- end
56
-
57
48
  # Yield a dataset for each server in the connection pool that is tied to that server.
58
49
  # Intended for use in sharded environments where all servers need to be modified
59
50
  # with the same data:
@@ -28,10 +28,10 @@ module Sequel
28
28
  JOIN_METHODS = (CONDITIONED_JOIN_TYPES + UNCONDITIONED_JOIN_TYPES).map{|x| "#{x}_join".to_sym} + [:join, :join_table]
29
29
 
30
30
  # Methods that return modified datasets
31
- QUERY_METHODS = %w'add_graph_aliases and distinct except exclude
31
+ QUERY_METHODS = %w'add_graph_aliases and distinct except exclude exclude_having exclude_where
32
32
  filter for_update from from_self graph grep group group_and_count group_by having intersect invert
33
33
  limit lock_style naked or order order_append order_by order_more order_prepend paginate qualify query
34
- reverse reverse_order select select_all select_append select_more server
34
+ reverse reverse_order select select_all select_append select_group select_more server
35
35
  set_defaults set_graph_aliases set_overrides unfiltered ungraphed ungrouped union
36
36
  unlimited unordered where with with_recursive with_sql'.collect{|x| x.to_sym} + JOIN_METHODS
37
37
 
@@ -103,12 +103,29 @@ module Sequel
103
103
  # DB[:items].exclude(:category => 'software', :id=>3)
104
104
  # # SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
105
105
  def exclude(*cond, &block)
106
- clause = (@opts[:having] ? :having : :where)
107
- cond = cond.first if cond.size == 1
108
- cond = filter_expr(cond, &block)
109
- cond = SQL::BooleanExpression.invert(cond)
110
- cond = SQL::BooleanExpression.new(:AND, @opts[clause], cond) if @opts[clause]
111
- clone(clause => cond)
106
+ _filter_or_exclude(true, @opts[:having] ? :having : :where, *cond, &block)
107
+ end
108
+
109
+ # Inverts the given conditions and adds them to the HAVING clause.
110
+ #
111
+ # DB[:items].select_group(:name).exclude_having{count(name) < 2}
112
+ # # SELECT name FROM items GROUP BY name HAVING (count(name) >= 2)
113
+ def exclude_having(*cond, &block)
114
+ _filter_or_exclude(true, :having, *cond, &block)
115
+ end
116
+
117
+ # Inverts the given conditions and adds them to the WHERE clause.
118
+ #
119
+ # DB[:items].select_group(:name).exclude_where(:category => 'software')
120
+ # # SELECT * FROM items WHERE (category != 'software')
121
+ #
122
+ # DB[:items].select_group(:name).
123
+ # exclude_having{count(name) < 2}.
124
+ # exclude_where(:category => 'software')
125
+ # # SELECT name FROM items WHERE (category != 'software')
126
+ # # GROUP BY name HAVING (count(name) >= 2)
127
+ def exclude_where(*cond, &block)
128
+ _filter_or_exclude(true, :where, *cond, &block)
112
129
  end
113
130
 
114
131
  # Returns a copy of the dataset with the given conditions imposed upon it.
@@ -270,21 +287,25 @@ module Sequel
270
287
  end
271
288
 
272
289
  # Returns a copy of the dataset with the results grouped by the value of
273
- # the given columns.
290
+ # the given columns. If a block is given, it is treated
291
+ # as a virtual row block, similar to +filter+.
274
292
  #
275
293
  # DB[:items].group(:id) # SELECT * FROM items GROUP BY id
276
294
  # DB[:items].group(:id, :name) # SELECT * FROM items GROUP BY id, name
277
- def group(*columns)
295
+ # DB[:items].group{[a, sum(b)]} # SELECT * FROM items GROUP BY a, sum(b)
296
+ def group(*columns, &block)
297
+ virtual_row_columns(columns, block)
278
298
  clone(:group => (columns.compact.empty? ? nil : columns))
279
299
  end
280
300
 
281
301
  # Alias of group
282
- def group_by(*columns)
283
- group(*columns)
302
+ def group_by(*columns, &block)
303
+ group(*columns, &block)
284
304
  end
285
305
 
286
306
  # Returns a dataset grouped by the given column with count by group.
287
307
  # Column aliases may be supplied, and will be included in the select clause.
308
+ # If a block is given, it is treated as a virtual row block, similar to +filter+.
288
309
  #
289
310
  # Examples:
290
311
  #
@@ -299,8 +320,12 @@ module Sequel
299
320
  # DB[:items].group_and_count(:first_name___name).all
300
321
  # # SELECT first_name AS name, count(*) AS count FROM items GROUP BY first_name
301
322
  # # => [{:name=>'a', :count=>1}, ...]
302
- def group_and_count(*columns)
303
- group(*columns.map{|c| unaliased_identifier(c)}).select(*(columns + [COUNT_OF_ALL_AS_COUNT]))
323
+ #
324
+ # DB[:items].group_and_count{substr(first_name, 1, 1).as(initial)}.all
325
+ # # SELECT substr(first_name, 1, 1) AS initial, count(*) AS count FROM items GROUP BY substr(first_name, 1, 1)
326
+ # # => [{:initial=>'a', :count=>1}, ...]
327
+ def group_and_count(*columns, &block)
328
+ select_group(*columns, &block).select_more(COUNT_OF_ALL_AS_COUNT)
304
329
  end
305
330
 
306
331
  # Returns a copy of the dataset with the HAVING conditions changed. See #filter for argument types.
@@ -532,7 +557,7 @@ module Sequel
532
557
  # DB[:items].order{sum(name).desc} # SELECT * FROM items ORDER BY sum(name) DESC
533
558
  # DB[:items].order(nil) # SELECT * FROM items
534
559
  def order(*columns, &block)
535
- columns += Array(Sequel.virtual_row(&block)) if block
560
+ virtual_row_columns(columns, block)
536
561
  clone(:order => (columns.compact.empty?) ? nil : columns)
537
562
  end
538
563
 
@@ -630,7 +655,7 @@ module Sequel
630
655
  # DB[:items].select(:a, :b) # SELECT a, b FROM items
631
656
  # DB[:items].select{[a, sum(b)]} # SELECT a, sum(b) FROM items
632
657
  def select(*columns, &block)
633
- columns += Array(Sequel.virtual_row(&block)) if block
658
+ virtual_row_columns(columns, block)
634
659
  m = []
635
660
  columns.each do |i|
636
661
  i.is_a?(Hash) ? m.concat(i.map{|k, v| SQL::AliasedExpression.new(k,v)}) : m << i
@@ -638,11 +663,19 @@ module Sequel
638
663
  clone(:select => m)
639
664
  end
640
665
 
641
- # Returns a copy of the dataset selecting the wildcard.
666
+ # Returns a copy of the dataset selecting the wildcard if no arguments
667
+ # are given. If arguments are given, treat them as tables and select
668
+ # all columns (using the wildcard) from each table.
642
669
  #
643
670
  # DB[:items].select(:a).select_all # SELECT * FROM items
644
- def select_all
645
- clone(:select => nil)
671
+ # DB[:items].select_all(:items) # SELECT items.* FROM items
672
+ # DB[:items].select_all(:items, :foo) # SELECT items.*, foo.* FROM items
673
+ def select_all(*tables)
674
+ if tables.empty?
675
+ clone(:select => nil)
676
+ else
677
+ select(*tables.map{|t| SQL::ColumnAll.new(t)})
678
+ end
646
679
  end
647
680
 
648
681
  # Returns a copy of the dataset with the given columns added
@@ -658,6 +691,20 @@ module Sequel
658
691
  select(*(cur_sel + columns), &block)
659
692
  end
660
693
 
694
+ # Set both the select and group clauses with the given +columns+.
695
+ # Column aliases may be supplied, and will be included in the select clause.
696
+ # This also takes a virtual row block similar to +filter+.
697
+ #
698
+ # DB[:items].select_group(:a, :b)
699
+ # # SELECT a, b FROM items GROUP BY a, b
700
+ #
701
+ # DB[:items].select_group(:c___a){f(c2)}
702
+ # # SELECT c AS a, f(c2) FROM items GROUP BY c, f(c2)
703
+ def select_group(*columns, &block)
704
+ virtual_row_columns(columns, block)
705
+ select(*columns).group(*columns.map{|c| unaliased_identifier(c)})
706
+ end
707
+
661
708
  # Returns a copy of the dataset with the given columns added
662
709
  # to the existing selected columns. If no columns are currently selected
663
710
  # it will just select the columns given.
@@ -840,17 +887,29 @@ module Sequel
840
887
 
841
888
  private
842
889
 
843
- # Internal filter method so it works on either the having or where clauses.
844
- def _filter(clause, *cond, &block)
890
+ # Internal filter/exclude method so it works on either the having or where clauses.
891
+ def _filter_or_exclude(invert, clause, *cond, &block)
845
892
  cond = cond.first if cond.size == 1
846
893
  if cond.respond_to?(:empty?) && cond.empty? && !block
847
894
  clone
848
895
  else
849
896
  cond = filter_expr(cond, &block)
897
+ cond = SQL::BooleanExpression.invert(cond) if invert
850
898
  cond = SQL::BooleanExpression.new(:AND, @opts[clause], cond) if @opts[clause]
851
899
  clone(clause => cond)
852
900
  end
853
901
  end
902
+
903
+ # Internal filter method so it works on either the having or where clauses.
904
+ def _filter(clause, *cond, &block)
905
+ _filter_or_exclude(false, clause, *cond, &block)
906
+ end
907
+
908
+ # Treat the +block+ as a virtual_row block if not +nil+ and
909
+ # add the resulting columns to the +columns+ array (modifies +columns+).
910
+ def virtual_row_columns(columns, block)
911
+ columns.concat(Array(Sequel.virtual_row(&block))) if block
912
+ end
854
913
 
855
914
  # Add the dataset to the list of compounds
856
915
  def compound_clone(type, dataset, opts)