sequel 0.1.7 → 0.1.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/CHANGELOG CHANGED
@@ -1,4 +1,26 @@
1
- *0.1.7*
1
+ === 0.1.8 (2007-07-10)
2
+
3
+ * Implemented Dataset#columns for retrieving the columns in the result set.
4
+
5
+ * Updated Model with changes to how model-associated datasets work.
6
+
7
+ * Beefed-up specs. Coverage is now at 95.0%.
8
+
9
+ * Added support for polymorphic datasets.
10
+
11
+ * The adapter dataset interface was simplified and standardized. Only four methods need be overridden: fetch_rows, update, insert and delete.
12
+
13
+ * The Dataset class was refactored. The bulk of the dataset code was moved into separate modules.
14
+
15
+ * Renamed Dataset#hash_column to Dataset#to_hash.
16
+
17
+ * Added some common pragmas to sqlite adapter.
18
+
19
+ * Added Postgres::Dataset#analyze for EXPLAIN ANALYZE queries.
20
+
21
+ * Fixed broken Postgres::Dataset#explain.
22
+
23
+ === 0.1.7
2
24
 
3
25
  * Removed db.synchronize wrapping calls in sqlite adapter.
4
26
 
@@ -20,7 +42,7 @@
20
42
 
21
43
  * Fixed Symbol#DESC to support qualified notation (thanks Pedro Gutierrez).
22
44
 
23
- *0.1.6*
45
+ === 0.1.6
24
46
 
25
47
  * Fixed Model#method_missing to raise for an invalid attribute.
26
48
 
@@ -42,11 +64,11 @@
42
64
 
43
65
  * Added Dataset#or, pretty nifty.
44
66
 
45
- *0.1.5*
67
+ === 0.1.5
46
68
 
47
69
  * Fixed Dataset#join to support multiple joins. Added #left_outer_join, #right_outer_join, #full_outer_join, #inner_join methods.
48
70
 
49
- *0.1.4*
71
+ === 0.1.4
50
72
 
51
73
  * Added String#split_sql.
52
74
 
@@ -61,13 +83,13 @@
61
83
 
62
84
  * Implemented ODBC adapter.
63
85
 
64
- *0.1.3*
86
+ === 0.1.3
65
87
 
66
88
  * Implemented DBI adapter.
67
89
 
68
90
  * Refactored database connection code. Now handled through Database#connect.
69
91
 
70
- *0.1.2*
92
+ === 0.1.2
71
93
 
72
94
 * The first opened database is automatically assigned to Model.db.
73
95
 
@@ -89,7 +111,7 @@
89
111
 
90
112
  * Refactored and removed deprecated code in postgres adapter.
91
113
 
92
- *0.1.1*
114
+ === 0.1.1
93
115
 
94
116
  * More documentation for Dataset.
95
117
 
@@ -105,7 +127,7 @@
105
127
 
106
128
  * Cleaned up Dataset API.
107
129
 
108
- *0.1.0*
130
+ === 0.1.0
109
131
 
110
132
  * Changed Database#create_table to only accept a block. Nobody's gonna use the other way.
111
133
 
@@ -127,7 +149,7 @@
127
149
 
128
150
  * Refactored literalization of Time objects.
129
151
 
130
- *0.0.20*
152
+ === 0.0.20
131
153
 
132
154
  * Refactored Dataset where clause construction to use expressions.
133
155
 
@@ -139,7 +161,7 @@
139
161
 
140
162
  * Specs for Database.
141
163
 
142
- *0.0.19*
164
+ === 0.0.19
143
165
 
144
166
  * More specs for Dataset.
145
167
 
@@ -157,7 +179,7 @@
157
179
 
158
180
  * Specs for ConnectionPool.
159
181
 
160
- *0.0.18*
182
+ === 0.0.18
161
183
 
162
184
 * Implemented SequelError and SequelConnectionError classes. ConnectionPool#hold now catches any connection errors and reraises them as SequelConnectionError.
163
185
 
@@ -167,7 +189,7 @@
167
189
 
168
190
  * Fixed Dataset#exclude to work correctly (patch and specs by Alex Bradbury.)
169
191
 
170
- *0.0.17*
192
+ === 0.0.17
171
193
 
172
194
  * Fixed Postgres::Database#tables to return table names as symbols (caused problem when using Database#table_exists?).
173
195
 
@@ -183,7 +205,7 @@
183
205
 
184
206
  * Added support for DISTINCT and OFFSET clauses (patches by Alex Bradbury.) Dataset#limit now accepts ranges. Added Dataset#uniq and distinct methods.
185
207
 
186
- *0.0.16*
208
+ === 0.0.16
187
209
 
188
210
  * More documentation.
189
211
 
@@ -197,7 +219,7 @@
197
219
 
198
220
  * Changed Dataset#destroy to return the number of deleted records.
199
221
 
200
- *0.0.15*
222
+ === 0.0.15
201
223
 
202
224
  * Improved Dataset#insert_sql to allow arrays as well as hashes.
203
225
 
@@ -205,7 +227,7 @@
205
227
 
206
228
 * Added Model#id to return the id column.
207
229
 
208
- *0.0.14*
230
+ === 0.0.14
209
231
 
210
232
  * Fixed Model's attribute accessors (hopefully for the last time).
211
233
 
@@ -213,13 +235,13 @@
213
235
 
214
236
  * Fixed bug in aggregate methods (max, min, etc) for datasets using record classes.
215
237
 
216
- *0.0.13*
238
+ === 0.0.13
217
239
 
218
240
  * Fixed Model#method_missing to do both find, filter and attribute accessors. duh.
219
241
 
220
242
  * Fixed bug in Dataset#literal when quoting arrays of strings (thanks Douglas Koszerek.)
221
243
 
222
- *0.0.12*
244
+ === 0.0.12
223
245
 
224
246
  * Model#save now correctly performs an INSERT for new objects.
225
247
 
@@ -231,7 +253,7 @@
231
253
 
232
254
  * Fixed filtering using nil values (e.g. dataset.filter(:parent_id => nil)).
233
255
 
234
- *0.0.11*
256
+ === 0.0.11
235
257
 
236
258
  * Renamed Model.schema to Model.set_schema and Model.get_schema to Model.schema.
237
259
 
@@ -239,13 +261,13 @@
239
261
 
240
262
  * Removed require 'postgres' in schema.rb (thanks Douglas Koszerek.)
241
263
 
242
- *0.0.10*
264
+ === 0.0.10
243
265
 
244
266
  * Added some examples.
245
267
 
246
268
  * Added Dataset#print method for pretty-printing tables.
247
269
 
248
- *0.0.9*
270
+ === 0.0.9
249
271
 
250
272
  * Fixed Postgres::Database#tables and #locks methods.
251
273
 
@@ -257,7 +279,7 @@
257
279
 
258
280
 * Refactored and DRY'd Dataset#literal and overrides thereof. Added support for subqueries in where clause.
259
281
 
260
- *0.0.8*
282
+ === 0.0.8
261
283
 
262
284
  * Fixed Dataset#reverse_order to provide chainability. This method can be called without arguments to invert the current order or with arguments to provide a descending order.
263
285
 
@@ -265,11 +287,11 @@
265
287
 
266
288
  * Refactored insert code in Postgres adapter (in preparation for fetching the last insert id for pre-8.1 versions).
267
289
 
268
- *0.0.7*
290
+ === 0.0.7
269
291
 
270
292
  * Fixed bug in Model.schema, duh!
271
293
 
272
- *0.0.6*
294
+ === 0.0.6
273
295
 
274
296
  * Added Dataset#sql as alias to Dataset#select_sql.
275
297
 
@@ -283,7 +305,7 @@
283
305
 
284
306
  * Implemented SQLite::Database#tables.
285
307
 
286
- *0.0.5*
308
+ === 0.0.5
287
309
 
288
310
  * Added Dataset#[] method. Refactored Model#find and Model#[].
289
311
 
@@ -291,13 +313,13 @@
291
313
 
292
314
  * Added automatic require 'sequel' to all adapters for convenience.
293
315
 
294
- *0.0.4*
316
+ === 0.0.4
295
317
 
296
318
  * Added preliminary MySQL support.
297
319
 
298
320
  * Code cleanup.
299
321
 
300
- *0.0.3*
322
+ === 0.0.3
301
323
 
302
324
  * Add Dataset#sum method.
303
325
 
@@ -307,7 +329,7 @@
307
329
 
308
330
  * Fixed small bug in Dataset#qualified_field_name for better join support.
309
331
 
310
- *0.0.2*
332
+ === 0.0.2
311
333
 
312
334
  * Added Sequel.open as alias to Sequel.connect.
313
335
 
@@ -320,7 +342,7 @@ method for saving them.
320
342
 
321
343
  * Refactored Dataset#first and Dataset#last code. These methods can now accept the number of records to fetch.
322
344
 
323
- *0.0.1*
345
+ === 0.0.1
324
346
 
325
347
  * More documentation for Dataset.
326
348
 
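The changelog above notes that the adapter dataset interface now needs only four overrides: fetch_rows, update, insert and delete. As a rough illustration of that shape (this toy class is not part of the diff and stores rows in an Array instead of executing SQL), an "adapter" reduces to:

```ruby
# Toy stand-in for an adapter Dataset subclass, showing the four-method
# contract. Real adapters would run SQL through a driver; here rows live
# in memory so the interface itself is the only thing on display.
class MemoryDataset
  def initialize
    @rows = []
  end

  # fetch_rows: iterate over the result set, yielding each record as a hash.
  def fetch_rows(_sql = nil, &block)
    @rows.each(&block)
  end

  # insert: add a record; return a "last insert id" (here, the row index).
  def insert(values)
    @rows << values
    @rows.size - 1
  end

  # update: merge new values into every row; return the affected row count.
  def update(values)
    @rows.each { |r| r.merge!(values) }
    @rows.size
  end

  # delete: remove all rows; return the number deleted.
  def delete
    n = @rows.size
    @rows.clear
    n
  end
end
```

The point of the 0.1.8 refactoring is that everything else (SQL generation, enumeration, convenience methods) lives in the shared Dataset modules, so an adapter only swaps out these four I/O points.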
data/README CHANGED
@@ -67,7 +67,7 @@ Sequel also offers convenience methods for extracting data from Datasets, such a
67
67
 
68
68
  Or getting results as a transposed hash, with one column as key and another as value:
69
69
 
70
- middle_east.hash_column(:name, :area) #=> {'Israel' => 20000, 'Greece' => 120000, ...}
70
+ middle_east.to_hash(:name, :area) #=> {'Israel' => 20000, 'Greece' => 120000, ...}
71
71
 
72
72
  Much of Sequel is still undocumented (especially the part relating to model classes). The following section provides examples of common usage. Feel free to explore...
73
73
 
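The renamed to_hash method documented above computes a transposed hash from the result set. What that transposition amounts to can be sketched with plain Ruby over an array of record hashes (illustrative only; Sequel runs the equivalent over query results):

```ruby
# Each record contributes one key/value pair: the first column becomes the
# key, the second the value, mirroring to_hash(:name, :area).
records = [
  { name: 'Israel', area: 20_000 },
  { name: 'Greece', area: 120_000 }
]

transposed = records.inject({}) do |h, r|
  h[r[:name]] = r[:area]
  h
end
# transposed => { 'Israel' => 20000, 'Greece' => 120000 }
```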
data/Rakefile CHANGED
@@ -6,7 +6,7 @@ require 'fileutils'
6
6
  include FileUtils
7
7
 
8
8
  NAME = "sequel"
9
- VERS = "0.1.7"
9
+ VERS = "0.1.8"
10
10
  CLEAN.include ['**/.*.sw?', 'pkg/*', '.config', 'doc/*', 'coverage/*']
11
11
  RDOC_OPTS = ['--quiet', '--title', "Sequel: Concise ORM for Ruby",
12
12
  "--opname", "index.html",
@@ -46,8 +46,7 @@ spec = Gem::Specification.new do |s|
46
46
  s.add_dependency('metaid')
47
47
  s.required_ruby_version = '>= 1.8.2'
48
48
 
49
- # s.files = %w(COPYING README Rakefile) + Dir.glob("{doc,spec,lib}/**/*")
50
- s.files = %w(COPYING README Rakefile) + Dir.glob("{bin,doc,lib}/**/*")
49
+ s.files = %w(COPYING README Rakefile) + Dir.glob("{bin,doc,spec,lib}/**/*")
51
50
 
52
51
  s.require_path = "lib"
53
52
  s.bindir = "bin"
@@ -81,6 +80,17 @@ Spec::Rake::SpecTask.new('spec') do |t|
81
80
  t.rcov = true
82
81
  end
83
82
 
83
+ desc "Run adapter specs without coverage"
84
+ Spec::Rake::SpecTask.new('spec_adapters') do |t|
85
+ t.spec_files = FileList['spec/adapters/*_spec.rb']
86
+ end
87
+
88
+ desc "Run all specs with coverage"
89
+ Spec::Rake::SpecTask.new('spec_all') do |t|
90
+ t.spec_files = FileList['spec/*_spec.rb', 'spec/adapters/*_spec.rb']
91
+ t.rcov = true
92
+ end
93
+
84
94
  ##############################################################################
85
95
  # Statistics
86
96
  ##############################################################################
@@ -24,7 +24,7 @@ module Sequel
24
24
  end
25
25
 
26
26
  def connect
27
- true # we can't return nil or false, because then pool will block forever
27
+ raise NotImplementedError, "#connect should be overridden by adapters"
28
28
  end
29
29
 
30
30
  def uri
@@ -47,7 +47,7 @@ module Sequel
47
47
 
48
48
  # Returns a blank dataset
49
49
  def dataset
50
- Dataset.new(self)
50
+ Sequel::Dataset.new(self)
51
51
  end
52
52
 
53
53
  # Returns a new dataset with the from method invoked.
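With the base Database#connect now raising NotImplementedError rather than returning true, a missing adapter override fails loudly instead of handing the connection pool a useless value. A hypothetical adapter (names here are invented for illustration) would supply the connection factory like so:

```ruby
# Fictional adapter sketch: the base class raises NotImplementedError from
# #connect, so each adapter must override it to return a live connection.
class FakeAdapterDatabase
  def initialize(uri)
    @uri = uri
  end

  # Returns a connection object for the pool to hand out. A real adapter
  # would open a driver handle; this stand-in returns a descriptive hash.
  def connect
    { uri: @uri, connected: true }
  end
end
```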
@@ -1,6 +1,9 @@
1
1
  require 'time'
2
2
  require 'date'
3
3
 
4
+ require File.join(File.dirname(__FILE__), 'dataset/dataset_sql')
5
+ require File.join(File.dirname(__FILE__), 'dataset/dataset_convenience')
6
+
4
7
  module Sequel
5
8
 # A Dataset represents a view of the data in a database, constrained by
6
9
  # specific parameters such as filtering conditions, order, etc. Datasets
@@ -22,11 +25,52 @@ module Sequel
22
25
  #
23
26
  # Datasets are Enumerable objects, so they can be manipulated using any
24
27
  # of the Enumerable methods, such as map, inject, etc.
28
+ #
29
+ # === The Dataset Adapter Interface
30
+ #
31
+ # Each adapter should define its own dataset class as a descendant of
32
+ # Sequel::Dataset. The following methods should be overridden by the adapter
33
+ # Dataset class (each method with the stock implementation):
34
+ #
35
+ # # Iterate over the results of the SQL query and call the supplied
36
+ # # block with each record (as a hash).
37
+ # def fetch_rows(sql, &block)
38
+ # @db.synchronize do
39
+ # r = @db.execute(sql)
40
+ # r.each(&block)
41
+ # end
42
+ # end
43
+ #
44
+ # # Insert records.
45
+ # def insert(*values)
46
+ # @db.synchronize do
47
+ # @db.execute(insert_sql(*values)).last_insert_id
48
+ # end
49
+ # end
50
+ #
51
+ # # Update records.
52
+ # def update(values, opts = nil)
53
+ # @db.synchronize do
54
+ # @db.execute(update_sql(values, opts)).affected_rows
55
+ # end
56
+ # end
57
+ #
58
+ # # Delete records.
59
+ # def delete(opts = nil)
60
+ # @db.synchronize do
61
+ # @db.execute(delete_sql(opts)).affected_rows
62
+ # end
63
+ # end
25
64
  class Dataset
26
65
  include Enumerable
66
+ include SQL # in dataset/dataset_sql.rb
67
+ include Convenience # in dataset/dataset_convenience.rb
68
+
69
+ attr_reader :db
70
+ attr_accessor :opts
27
71
 
28
- attr_reader :db, :opts
29
- attr_accessor :model_class
72
+ alias all to_a
73
+ alias size count
30
74
 
31
75
  # Constructs a new instance of a dataset with a database instance, initial
32
76
  # options and an optional record class. Datasets are usually constructed by
@@ -37,715 +81,177 @@ module Sequel
37
81
  #
38
82
  # Sequel::Dataset is an abstract class that is not useful by itself. Each
39
83
 # database adapter should provide a descendant class of Sequel::Dataset.
40
- def initialize(db, opts = nil, model_class = nil)
84
+ def initialize(db, opts = nil)
41
85
  @db = db
42
86
  @opts = opts || {}
43
- @model_class = model_class
44
87
  end
45
88
 
46
89
 # Returns a new instance of the dataset with the given options merged.
47
- def dup_merge(opts)
48
- self.class.new(@db, @opts.merge(opts), @model_class)
49
- end
50
-
51
- # Returns a dataset that fetches records as hashes (instead of model
52
- # objects). If no record class is defined for the dataset, self is
53
- # returned.
54
- def naked
55
- @model_class ? self.class.new(@db, opts || @opts.dup) : self
56
- end
57
-
58
- # Returns a valid SQL fieldname as a string. Field names specified as
59
- # symbols can include double underscores to denote a dot separator, e.g.
60
- # :posts__id will be converted into posts.id.
61
- def field_name(field)
62
- field.is_a?(Symbol) ? field.to_field_name : field
63
- end
64
-
65
- QUALIFIED_REGEXP = /(.*)\.(.*)/.freeze
66
-
67
- # Returns a qualified field name (including a table name) if the field
68
- # name isn't already qualified.
69
- def qualified_field_name(field, table)
70
- fn = field_name(field)
71
- fn =~ QUALIFIED_REGEXP ? fn : "#{table}.#{fn}"
72
- end
73
-
74
- WILDCARD = '*'.freeze
75
- COMMA_SEPARATOR = ", ".freeze
76
-
77
- # Converts an array of field names into a comma seperated string of
78
- # field names. If the array is empty, a wildcard (*) is returned.
79
- def field_list(fields)
80
- if fields.empty?
81
- WILDCARD
82
- else
83
- fields.map {|i| field_name(i)}.join(COMMA_SEPARATOR)
84
- end
85
- end
86
-
87
- # Converts an array of sources names into into a comma separated list.
88
- def source_list(source)
89
- if source.nil? || source.empty?
90
- raise SequelError, 'No source specified for query'
91
- end
92
- source.map {|i| i.is_a?(Dataset) ? i.to_table_reference : i}.
93
- join(COMMA_SEPARATOR)
94
- end
95
-
96
- NULL = "NULL".freeze
97
- TIMESTAMP_FORMAT = "TIMESTAMP '%Y-%m-%d %H:%M:%S'".freeze
98
- DATE_FORMAT = "DATE '%Y-%m-%d'".freeze
99
-
100
- # Returns a literal representation of a value to be used as part
101
- # of an SQL expression. The stock implementation supports literalization
102
- # of String (with proper escaping to prevent SQL injections), numbers,
103
- # Symbol (as field references), Array (as a list of literalized values),
104
- # Time (as an SQL TIMESTAMP), Date (as an SQL DATE), Dataset (as a
105
- # subquery) and nil (AS NULL).
106
- #
107
- # dataset.literal("abc'def") #=> "'abc''def'"
108
- # dataset.literal(:items__id) #=> "items.id"
109
- # dataset.literal([1, 2, 3]) => "(1, 2, 3)"
110
- # dataset.literal(DB[:items]) => "(SELECT * FROM items)"
111
- #
112
- # If an unsupported object is given, an exception is raised.
113
- def literal(v)
114
- case v
115
- when ExpressionString: v
116
- when String: "'#{v.gsub(/'/, "''")}'"
117
- when Integer, Float: v.to_s
118
- when NilClass: NULL
119
- when Symbol: v.to_field_name
120
- when Array: v.empty? ? NULL : v.map {|i| literal(i)}.join(COMMA_SEPARATOR)
121
- when Time: v.strftime(TIMESTAMP_FORMAT)
122
- when Date: v.strftime(DATE_FORMAT)
123
- when Dataset: "(#{v.sql})"
124
- else
125
- raise SequelError, "can't express #{v.inspect} as a SQL literal"
126
- end
127
- end
128
-
129
- AND_SEPARATOR = " AND ".freeze
130
-
131
- # Formats an equality expression involving a left value and a right value.
132
- # Equality expressions differ according to the class of the right value.
133
- # The stock implementation supports Range (inclusive and exclusive), Array
134
- # (as a list of values to compare against), Dataset (as a subquery to
135
- # compare against), or a regular value.
136
- #
137
- # dataset.format_eq_expression('id', 1..20) #=>
138
- # "(id >= 1 AND id <= 20)"
139
- # dataset.format_eq_expression('id', [3,6,10]) #=>
140
- # "(id IN (3, 6, 10))"
141
- # dataset.format_eq_expression('id', DB[:items].select(:id)) #=>
142
- # "(id IN (SELECT id FROM items))"
143
- # dataset.format_eq_expression('id', nil) #=>
144
- # "(id IS NULL)"
145
- # dataset.format_eq_expression('id', 3) #=>
146
- # "(id = 3)"
147
- def format_eq_expression(left, right)
148
- case right
149
- when Range:
150
- right.exclude_end? ? \
151
- "(#{left} >= #{right.begin} AND #{left} < #{right.end})" : \
152
- "(#{left} >= #{right.begin} AND #{left} <= #{right.end})"
153
- when Array:
154
- "(#{left} IN (#{literal(right)}))"
155
- when Dataset:
156
- "(#{left} IN (#{right.sql}))"
157
- when NilClass:
158
- "(#{left} IS NULL)"
159
- else
160
- "(#{left} = #{literal(right)})"
161
- end
162
- end
163
-
164
- # Formats an expression comprising a left value, a binary operator and a
165
- # right value. The supported operators are :eql (=), :not (!=), :lt (<),
166
- # :lte (<=), :gt (>), :gte (>=) and :like (LIKE operator). Examples:
167
- #
168
- # dataset.format_expression('price', :gte, 100) #=> "(price >= 100)"
169
- # dataset.format_expression('id', :not, 30) #=> "NOT (id = 30)"
170
- # dataset.format_expression('name', :like, 'abc%') #=>
171
- # "(name LIKE 'abc%')"
172
- #
173
- # If an unsupported operator is given, an exception is raised.
174
- def format_expression(left, op, right)
175
- left = field_name(left)
176
- case op
177
- when :eql:
178
- format_eq_expression(left, right)
179
- when :not:
180
- "NOT #{format_eq_expression(left, right)}"
181
- when :lt:
182
- "(#{left} < #{literal(right)})"
183
- when :lte:
184
- "(#{left} <= #{literal(right)})"
185
- when :gt:
186
- "(#{left} > #{literal(right)})"
187
- when :gte:
188
- "(#{left} >= #{literal(right)})"
189
- when :like:
190
- "(#{left} LIKE #{literal(right)})"
191
- else
192
- raise SequelError, "Invalid operator specified: #{op}"
193
- end
194
- end
195
-
196
- QUESTION_MARK = '?'.freeze
197
-
198
- # Formats a where clause. If parenthesize is true, then the whole
199
- # generated clause will be enclosed in a set of parentheses.
200
- def expression_list(where, parenthesize = false)
201
- case where
202
- when Hash:
203
- parenthesize = false if where.size == 1
204
- fmt = where.map {|i| format_expression(i[0], :eql, i[1])}.
205
- join(AND_SEPARATOR)
206
- when Array:
207
- fmt = where.shift.gsub(QUESTION_MARK) {literal(where.shift)}
208
- when Proc:
209
- fmt = where.to_expressions.map {|e| format_expression(e.left, e.op, e.right)}.
210
- join(AND_SEPARATOR)
211
- else
212
- # if the expression is compound, it should be parenthesized in order for
213
- # things to be predictable (when using #or and #and.)
214
- parenthesize |= where =~ /\).+\(/
215
- fmt = where
216
- end
217
- parenthesize ? "(#{fmt})" : fmt
218
- end
219
-
220
- # Returns a copy of the dataset with the source changed.
221
- def from(*source)
222
- dup_merge(:from => source)
223
- end
224
-
225
- # Returns a copy of the dataset with the selected fields changed.
226
- def select(*fields)
227
- dup_merge(:select => fields)
228
- end
229
-
230
- # Returns a copy of the dataset with the distinct option.
231
- def uniq
232
- dup_merge(:distinct => true)
233
- end
234
- alias distinct uniq
235
-
236
- # Returns a copy of the dataset with the order changed.
237
- def order(*order)
238
- dup_merge(:order => order)
90
+ def clone_merge(opts)
91
+ new_dataset = clone
92
+ new_dataset.set_options(@opts.merge(opts))
93
+ new_dataset
239
94
  end
240
95
 
241
- # Returns a copy of the dataset with the order reversed. If no order is
242
- # given, the existing order is inverted.
243
- def reverse_order(*order)
244
- order(invert_order(order.empty? ? @opts[:order] : order))
96
+ def set_options(opts)
97
+ @opts = opts
98
+ @columns = nil
245
99
  end
246
100
 
247
- DESC_ORDER_REGEXP = /(.*)\sDESC/.freeze
101
+ NOTIMPL_MSG = "This method must be overridden in Sequel adapters".freeze
248
102
 
249
- # Inverts the given order by breaking it into a list of field references
250
- # and inverting them.
251
- #
252
- # dataset.invert_order('id DESC') #=> "id"
253
- # dataset.invert_order('category, price DESC') #=>
254
- # "category DESC, price"
255
- def invert_order(order)
256
- new_order = []
257
- order.each do |f|
258
- f.to_s.split(',').map do |p|
259
- p.strip!
260
- new_order << (p =~ DESC_ORDER_REGEXP ? $1 : p.to_sym.DESC)
261
- end
262
- end
263
- new_order
264
- end
265
-
266
- # Returns a copy of the dataset with the results grouped by the value of
267
- # the given fields
268
- def group(*fields)
269
- dup_merge(:group => fields)
270
- end
271
-
272
- # Returns a copy of the dataset with the given conditions imposed upon it.
273
- # If the query has been grouped, then the conditions are imposed in the
274
- # HAVING clause. If not, then they are imposed in the WHERE clause. Filter
275
- # accepts a Hash (formated into a list of equality expressions), an Array
276
- # (formatted ala ActiveRecord conditions), a String (taken literally), or
277
- # a block that is converted into expressions.
278
- #
279
- # dataset.filter(:id => 3).sql #=>
280
- # "SELECT * FROM items WHERE (id = 3)"
281
- # dataset.filter('price < ?', 100).sql #=>
282
- # "SELECT * FROM items WHERE price < 100"
283
- # dataset.filter('price < 100').sql #=>
284
- # "SELECT * FROM items WHERE price < 100"
285
- # dataset.filter {price < 100}.sql #=>
286
- # "SELECT * FROM items WHERE (price < 100)"
287
- #
288
- # Multiple filter calls can be chained for scoping:
289
- #
290
- # software = dataset.filter(:category => 'software')
291
- # software.filter {price < 100}.sql #=>
292
- # "SELECT * FROM items WHERE (category = 'software') AND (price < 100)"
293
- def filter(*cond, &block)
294
- clause = (@opts[:group] ? :having : :where)
295
- cond = cond.first if cond.size == 1
296
- parenthesize = !(cond.is_a?(Hash) || cond.is_a?(Array))
297
- filter = cond.is_a?(Hash) && cond
298
- if @opts[clause]
299
- if filter && cond.is_a?(Hash)
300
- filter
301
- end
302
- filter =
303
- l = expression_list(@opts[clause])
304
- r = expression_list(block || cond, parenthesize)
305
- dup_merge(clause => "#{l} AND #{r}")
306
- else
307
- dup_merge(:filter => cond, clause => expression_list(block || cond))
308
- end
103
+ # Executes a select query and fetches records, passing each record to the
104
+ # supplied block. Adapters should override this method.
105
+ def fetch_rows(sql, &block)
106
+ # @db.synchronize do
107
+ # r = @db.execute(sql)
108
+ # r.each(&block)
109
+ # end
110
+ raise NotImplementedError, NOTIMPL_MSG
309
111
  end
310
-
311
- # Adds an alternate filter to an existing filter using OR. If no filter
312
- # exists an error is raised.
313
- def or(*cond, &block)
314
- clause = (@opts[:group] ? :having : :where)
315
- cond = cond.first if cond.size == 1
316
- parenthesize = !(cond.is_a?(Hash) || cond.is_a?(Array))
317
- if @opts[clause]
318
- l = expression_list(@opts[clause])
319
- r = expression_list(block || cond, parenthesize)
320
- dup_merge(clause => "#{l} OR #{r}")
321
- else
322
- raise SequelError, "No existing filter found."
323
- end
324
- end
325
-
326
- # Adds an further filter to an existing filter using AND. If no filter
327
- # exists an error is raised. This method is identical to #filter except
328
- # it expects an existing filter.
329
- def and(*cond, &block)
330
- clause = (@opts[:group] ? :having : :where)
331
- unless @opts[clause]
332
- raise SequelError, "No existing filter found."
333
- end
334
- filter(*cond, &block)
335
- end
336
-
337
- # Performs the inverse of Dataset#filter.
338
- #
339
- # dataset.exclude(:category => 'software').sql #=>
340
- # "SELECT * FROM items WHERE NOT (category = 'software')"
341
- def exclude(*cond, &block)
342
- clause = (@opts[:group] ? :having : :where)
343
- cond = cond.first if cond.size == 1
344
- parenthesize = !(cond.is_a?(Hash) || cond.is_a?(Array))
345
- if @opts[clause]
346
- l = expression_list(@opts[clause])
347
- r = expression_list(block || cond, parenthesize)
348
- cond = "#{l} AND NOT #{r}"
349
- else
350
- cond = "NOT #{expression_list(block || cond, true)}"
351
- end
352
- dup_merge(clause => cond)
353
- end
354
-
355
- # Returns a copy of the dataset with the where conditions changed. Raises
356
- # if the dataset has been grouped. See also #filter.
357
- def where(*cond, &block)
358
- if @opts[:group]
359
- raise SequelError, "Can't specify a WHERE clause once the dataset has been grouped"
360
- else
361
- filter(*cond, &block)
362
- end
363
- end
364
-
365
- # Returns a copy of the dataset with the having conditions changed. Raises
366
- # if the dataset has not been grouped. See also #filter
367
- def having(*cond, &block)
368
- unless @opts[:group]
369
- raise SequelError, "Can only specify a HAVING clause on a grouped dataset"
370
- else
371
- filter(*cond, &block)
372
- end
373
- end
374
-
375
- # Adds a UNION clause using a second dataset object. If all is true the
376
- # clause used is UNION ALL, which may return duplicate rows.
377
- def union(dataset, all = false)
378
- dup_merge(:union => dataset, :union_all => all)
112
+
113
+ # Inserts values into the associated table. Adapters should override this
114
+ # method.
115
+ def insert(*values)
116
+ # @db.synchronize do
117
+ # @db.execute(insert_sql(*values)).last_insert_id
118
+ # end
119
+ raise NotImplementedError, NOTIMPL_MSG
379
120
  end
380
-
381
- # Adds an INTERSECT clause using a second dataset object. If all is true
382
- # the clause used is INTERSECT ALL, which may return duplicate rows.
383
- def intersect(dataset, all = false)
384
- dup_merge(:intersect => dataset, :intersect_all => all)
121
+
122
+ # Updates values for the dataset. Adapters should override this method.
123
+ def update(values, opts = nil)
124
+ # @db.synchronize do
125
+ # @db.execute(update_sql(values, opts)).affected_rows
126
+ # end
127
+ raise NotImplementedError, NOTIMPL_MSG
385
128
  end
386
-
387
- # Adds an EXCEPT clause using a second dataset object. If all is true the
388
- # clause used is EXCEPT ALL, which may return duplicate rows.
389
- def except(dataset, all = false)
390
- dup_merge(:except => dataset, :except_all => all)
129
+
130
+ # Deletes the records in the dataset. Adapters should override this method.
131
+ def delete(opts = nil)
132
+ # @db.synchronize do
133
+ # @db.execute(delete_sql(opts)).affected_rows
134
+ # end
135
+ raise NotImplementedError, NOTIMPL_MSG
391
136
  end
392
137
 
393
- JOIN_TYPES = {
394
- :left_outer => 'LEFT OUTER JOIN'.freeze,
395
- :right_outer => 'RIGHT OUTER JOIN'.freeze,
396
- :full_outer => 'FULL OUTER JOIN'.freeze,
397
- :inner => 'INNER JOIN'.freeze
398
- }
399
-
400
- def join_expr(type, table, expr)
401
- join_type = JOIN_TYPES[type || :inner]
402
- unless join_type
403
- raise SequelError, "Invalid join type: #{type}"
404
- end
405
-
406
- join_expr = expr.map do |k, v|
407
- l = qualified_field_name(k, table)
408
- r = qualified_field_name(v, @opts[:last_joined_table] || @opts[:from])
409
- "(#{l} = #{r})"
410
- end.join(AND_SEPARATOR)
411
-
412
- " #{join_type} #{table} ON #{join_expr}"
138
+ # Returns the columns in the result set in their true order. The stock
139
+ # implementation returns the content of @columns. If @columns is nil,
140
+ # a query is performed. Adapters are expected to fill @columns with the
141
+ # column information when a query is performed.
142
+ def columns
143
+ first unless @columns
144
+ @columns || []
413
145
  end
414
146
 
415
- # Returns a joined dataset.
416
- def join_table(type, table, expr)
417
- unless expr.is_a?(Hash)
418
- expr = {expr => :id}
419
- end
420
- clause = join_expr(type, table, expr)
421
- join = @opts[:join] ? @opts[:join] + clause : clause
422
- dup_merge(:join => join, :last_joined_table => table)
147
+ def <<(*args)
148
+ insert(*args)
423
149
  end
424
150
 
425
- def left_outer_join(table, expr); join_table(:left_outer, table, expr); end
426
- def right_outer_join(table, expr); join_table(:right_outer, table, expr); end
427
- def full_outer_join(table, expr); join_table(:full_outer, table, expr); end
428
- def inner_join(table, expr); join_table(:inner, table, expr); end
429
- alias_method :join, :inner_join
430
-
431
- alias_method :all, :to_a
432
-
433
- # Maps field values for each record in the dataset (if a field name is
434
- # given), or performs the stock mapping functionality of Enumerable.
435
- def map(field_name = nil, &block)
436
- if field_name
437
- super() {|r| r[field_name]}
438
- else
439
- super(&block)
440
- end
151
+ # Iterates over the records in the dataset
152
+ def each(opts = nil, &block)
153
+ fetch_rows(select_sql(opts), &block)
441
154
  end
442
155
 
443
- # Returns a hash with one column used as key and another used as value.
444
- def hash_column(key_column, value_column)
445
- inject({}) do |m, r|
446
- m[r[key_column]] = r[value_column]
447
- m
448
- end
156
+ # Returns the model classes associated with the dataset as a hash.
157
+ def model_classes
158
+ @opts[:models]
449
159
  end
450
160
 
451
- # Inserts the given values into the table.
452
- def <<(values)
453
- insert(values)
161
+ # Returns the column name for the polymorphic key.
162
+ def polymorphic_key
163
+ @opts[:polymorphic_key]
454
164
  end
455
165
 
456
- # Inserts multiple values. If a block is given it is invoked for each
457
- # item in the given array before inserting it.
458
- def insert_multiple(array, &block)
459
- if block
460
- array.each {|i| insert(block[i])}
166
+ # Returns a naked dataset clone - i.e. a dataset that returns records as
167
+ # hashes rather than model objects.
168
+ def naked
169
+ d = clone_merge(:naked => true, :models => nil, :polymorphic_key => nil)
170
+ d.set_model(nil)
171
+ d
172
+ end
173
+
174
+ # Associates the dataset with a model. If no argument or nil is given, the dataset returns records as hashes; a single class argument associates all records with that class; a key and a hash of classes set up a polymorphic model.
175
+ def set_model(*args)
176
+ if args.empty? || (args.first == nil)
177
+        @opts.merge!(:naked => true, :models => nil, :polymorphic_key => nil)
+        extend_with_stock_each
+      elsif args.size == 1
+        c = args.first
+        @opts.merge!(:naked => nil, :models => {nil => c}, :polymorphic_key => nil)
+        extend_with_model(c)
+        extend_with_destroy
       else
-        array.each {|i| insert(i)}
-      end
-    end
-
-    EMPTY = ''.freeze
-    SPACE = ' '.freeze
-
-    # Formats a SELECT statement using the given options and the dataset
-    # options.
-    def select_sql(opts = nil)
-      opts = opts ? @opts.merge(opts) : @opts
-
-      fields = opts[:select]
-      select_fields = fields ? field_list(fields) : WILDCARD
-      select_source = source_list(opts[:from])
-      sql = opts[:distinct] ? \
-        "SELECT DISTINCT #{select_fields} FROM #{select_source}" : \
-        "SELECT #{select_fields} FROM #{select_source}"
-
-      if join = opts[:join]
-        sql << join
-      end
-
-      if where = opts[:where]
-        sql << " WHERE #{where}"
-      end
-
-      if group = opts[:group]
-        sql << " GROUP BY #{field_list(group)}"
-      end
-
-      if order = opts[:order]
-        sql << " ORDER BY #{field_list(order)}"
-      end
-
-      if having = opts[:having]
-        sql << " HAVING #{having}"
-      end
-
-      if limit = opts[:limit]
-        sql << " LIMIT #{limit}"
-        if offset = opts[:offset]
-          sql << " OFFSET #{offset}"
+        key, hash = args
+        @opts.merge!(:naked => true, :models => hash, :polymorphic_key => key)
+        extend_with_polymorphic_model(key, hash)
+        extend_with_destroy
+      end
+      self
+    end
+
+    private
+    # Overrides the each method to convert records to model instances.
+    def extend_with_model(c)
+      meta_def(:model_class) {c}
+      m = Module.new do
+        def each(opts = nil, &block)
+          c = model_class
+          if opts && opts[:naked]
+            fetch_rows(select_sql(opts), &block)
+          else
+            fetch_rows(select_sql(opts)) {|r| block.call(c.new(r))}
+          end
         end
       end
-
-      if union = opts[:union]
-        sql << (opts[:union_all] ? \
-          " UNION ALL #{union.sql}" : " UNION #{union.sql}")
-      elsif intersect = opts[:intersect]
-        sql << (opts[:intersect_all] ? \
-          " INTERSECT ALL #{intersect.sql}" : " INTERSECT #{intersect.sql}")
-      elsif except = opts[:except]
-        sql << (opts[:except_all] ? \
-          " EXCEPT ALL #{except.sql}" : " EXCEPT #{except.sql}")
-      end
-
-      sql
-    end
-    alias sql select_sql
-
-    # Formats an INSERT statement using the given values. If a hash is given,
-    # the resulting statement includes field names. If no values are given,
-    # the resulting statement includes a DEFAULT VALUES clause.
-    #
-    #   dataset.insert_sql() #=> 'INSERT INTO items DEFAULT VALUES'
-    #   dataset.insert_sql(1,2,3) #=> 'INSERT INTO items VALUES (1, 2, 3)'
-    #   dataset.insert_sql(:a => 1, :b => 2) #=>
-    #     'INSERT INTO items (a, b) VALUES (1, 2)'
-    def insert_sql(*values)
-      if values.empty?
-        "INSERT INTO #{@opts[:from]} DEFAULT VALUES"
-      elsif (values.size == 1) && values[0].is_a?(Hash)
-        field_list = []
-        value_list = []
-        values[0].each do |k, v|
-          field_list << k
-          value_list << literal(v)
+      extend(m)
+    end
+
+    # Overrides the each method to convert records to polymorphic model
+    # instances. The model class is determined according to the value in the
+    # key column.
+    def extend_with_polymorphic_model(key, hash)
+      meta_def(:model_class) {|r| hash[r[key]] || hash[nil]}
+      m = Module.new do
+        def each(opts = nil, &block)
+          if opts && opts[:naked]
+            fetch_rows(select_sql(opts), &block)
+          else
+            fetch_rows(select_sql(opts)) do |r|
+              c = model_class(r)
+              if c
+                block.call(c.new(r))
+              else
+                raise SequelError, "No matching model class for record (#{polymorphic_key} = #{r[polymorphic_key].inspect})"
+              end
+            end
+          end
         end
-        fl = field_list.join(COMMA_SEPARATOR)
-        vl = value_list.join(COMMA_SEPARATOR)
-        "INSERT INTO #{@opts[:from]} (#{fl}) VALUES (#{vl})"
-      else
-        "INSERT INTO #{@opts[:from]} VALUES (#{literal(values)})"
-      end
-    end
-
-    # Formats an UPDATE statement using the given values.
-    #
-    #   dataset.update_sql(:price => 100, :category => 'software') #=>
-    #     "UPDATE items SET price = 100, category = 'software'"
-    def update_sql(values, opts = nil)
-      opts = opts ? @opts.merge(opts) : @opts
-
-      if opts[:group]
-        raise SequelError, "Can't update a grouped dataset"
-      elsif (opts[:from].size > 1) or opts[:join]
-        raise SequelError, "Can't update a joined dataset"
-      end
-
-      set_list = values.map {|k, v| "#{k} = #{literal(v)}"}.
-        join(COMMA_SEPARATOR)
-      sql = "UPDATE #{@opts[:from]} SET #{set_list}"
-
-      if where = opts[:where]
-        sql << " WHERE #{where}"
-      end
-
-      sql
-    end
-
-    # Formats a DELETE statement using the given options and dataset options.
-    #
-    #   dataset.filter {price >= 100}.delete_sql #=>
-    #     "DELETE FROM items WHERE (price >= 100)"
-    def delete_sql(opts = nil)
-      opts = opts ? @opts.merge(opts) : @opts
-
-      if opts[:group]
-        raise SequelError, "Can't delete from a grouped dataset"
-      elsif opts[:from].is_a?(Array) && opts[:from].size > 1
-        raise SequelError, "Can't delete from a joined dataset"
-      end
-
-      sql = "DELETE FROM #{opts[:from]}"
-
-      if where = opts[:where]
-        sql << " WHERE #{where}"
-      end
-
-      sql
-    end
-
-    # Returns the first record in the dataset.
-    def single_record(opts = nil)
-      each(opts) {|r| return r}
-      nil
-    end
-
-    # Returns the first value of the first reecord in the dataset.
-    def single_value(opts = nil)
-      naked.each(opts) {|r| return r.values.first}
-    end
-
-    SELECT_COUNT = {:select => ["COUNT(*)"], :order => nil}.freeze
-
-    # Returns the number of records in the dataset.
-    def count
-      single_value(SELECT_COUNT).to_i
-    end
-    alias size count
-
-    # returns a paginated dataset. The resulting dataset also provides the
-    # total number of pages (Dataset#page_count) and the current page number
-    # (Dataset#current_page), as well as Dataset#prev_page and Dataset#next_page
-    # for implementing pagination controls.
-    def paginate(page_no, page_size)
-      total_pages = (count / page_size.to_f).ceil
-      paginated = limit(page_size, (page_no - 1) * page_size)
-      paginated.current_page = page_no
-      paginated.page_count = total_pages
-      paginated
-    end
-
-    attr_accessor :page_count, :current_page
-
-    # Returns the previous page number or nil if the current page is the first
-    def prev_page
-      current_page > 1 ? (current_page - 1) : nil
-    end
-
-    # Returns the next page number or nil if the current page is the last page
-    def next_page
-      current_page < page_count ? (current_page + 1) : nil
-    end
-
-    # Returns a table reference for use in the FROM clause. If the dataset has
-    # only a :from option refering to a single table, only the table name is
-    # returned. Otherwise a subquery is returned.
-    def to_table_reference
-      if opts.keys == [:from] && opts[:from].size == 1
-        opts[:from].first.to_s
-      else
-        "(#{sql})"
       end
+      extend(m)
     end
 
-    # Returns the minimum value for the given field.
-    def min(field)
-      single_value(:select => [field.MIN])
-    end
-
-    # Returns the maximum value for the given field.
-    def max(field)
-      single_value(:select => [field.MAX])
-    end
-
-    # Returns the sum for the given field.
-    def sum(field)
-      single_value(:select => [field.SUM])
-    end
-
-    # Returns the average value for the given field.
-    def avg(field)
-      single_value(:select => [field.AVG])
-    end
-
-    # Returns an EXISTS clause for the dataset.
-    #
-    #   dataset.exists #=> "EXISTS (SELECT 1 FROM items)"
-    def exists(opts = nil)
-      "EXISTS (#{sql({:select => [1]}.merge(opts || {}))})"
-    end
-
-    # If given an integer, the dataset will contain only the first l results.
-    # If given a range, it will contain only those at offsets within that
-    # range. If a second argument is given, it is used as an offset.
-    def limit(l, o = nil)
-      if l.is_a? Range
-        lim = (l.exclude_end? ? l.last - l.first : l.last + 1 - l.first)
-        dup_merge(:limit => lim, :offset=>l.first)
-      elsif o
-        dup_merge(:limit => l, :offset => o)
-      else
-        dup_merge(:limit => l)
-      end
-    end
-
-    # Returns the first record in the dataset. If the num argument is specified,
-    # an array is returned with the first <i>num</i> records.
-    def first(*args)
-      args = args.empty? ? 1 : (args.size == 1) ? args.first : args
-      case args
-      when 1: single_record(:limit => 1)
-      when Fixnum: limit(args).all
-      else
-        filter(args).single_record(:limit => 1)
+    # Extends the dataset with a destroy method, that calls destroy for each
+    # record in the dataset.
+    def extend_with_destroy
+      unless respond_to?(:destroy)
+        meta_def(:destroy) do
+          raise SequelError, 'Dataset not associated with model' unless @opts[:models]
+          count = 0
+          @db.transaction {each {|r| count += 1; r.destroy}}
+          count
+        end
       end
     end
 
-    # Returns the first record matching the condition.
-    def [](*conditions)
-      first(*conditions)
-    end
-
-    def []=(conditions, values)
-      filter(conditions).update(values)
-    end
-
-    # Returns the last records in the dataset by inverting the order. If no
-    # order is given, an exception is raised. If num is not given, the last
-    # record is returned. Otherwise an array is returned with the last
-    # <i>num</i> records.
-    def last(*args)
-      raise SequelError, 'No order specified' unless
-        @opts[:order] || (opts && opts[:order])
-
-      args = args.empty? ? 1 : (args.size == 1) ? args.first : args
-
-      case args
-      when Fixnum:
-        l = {:limit => args}
-        opts = {:order => invert_order(@opts[:order])}. \
-          merge(opts ? opts.merge(l) : l)
-        if args == 1
-          single_record(opts)
-        else
-          dup_merge(opts).all
+    # Restores the stock #each implementation.
+    def extend_with_stock_each
+      m = Module.new do
+        def each(opts = nil, &block)
+          fetch_rows(select_sql(opts), &block)
         end
-      else
-        filter(args).last(1)
       end
-    end
-
-    # Deletes all records in the dataset one at a time by invoking the destroy
-    # method of the associated model class.
-    def destroy
-      raise SequelError, 'Dataset not associated with model' unless @model_class
-
-      count = 0
-      @db.transaction {each {|r| count += 1; r.destroy}}
-      count
-    end
-
-    # Pretty prints the records in the dataset as plain-text table.
-    def print(*columns)
-      Sequel::PrettyTable.print(naked.all, columns.empty? ? nil : columns)
+      extend(m)
     end
   end
 end
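The polymorphic dispatch added above (`meta_def(:model_class) {|r| hash[r[key]] || hash[nil]}` plus the `each` override) can be exercised in isolation. The sketch below reproduces that lookup-and-instantiate logic outside of Sequel; the `Post`/`Comment` classes and the `:kind` column are illustrative names, not part of the diff.

```ruby
# Standalone sketch of the 0.1.8 polymorphic model lookup: the class for
# each row is chosen by the value in the key column, with the nil entry
# of the hash acting as a fallback. Names here are hypothetical.
Post    = Struct.new(:values)
Comment = Struct.new(:values)

MODELS = {'post' => Post, 'comment' => Comment}
POLYMORPHIC_KEY = :kind

# Mirrors meta_def(:model_class) {|r| hash[r[key]] || hash[nil]}.
def model_class(r)
  MODELS[r[POLYMORPHIC_KEY]] || MODELS[nil]
end

# Mirrors the overridden each: wrap the row in the matching class,
# or raise when no class (and no nil fallback) matches.
def instantiate(r)
  c = model_class(r)
  unless c
    raise "No matching model class for record " \
          "(#{POLYMORPHIC_KEY} = #{r[POLYMORPHIC_KEY].inspect})"
  end
  c.new(r)
end
```

A row such as `{:kind => 'post', :id => 1}` comes back wrapped in `Post`, while an unmapped kind raises, matching the `SequelError` branch in the diff; adding a `nil` key to the hash would turn that error into a default class instead.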