sequel_pg 1.3.0-x86-mingw32 → 1.5.0-x86-mingw32

data/CHANGELOG CHANGED
@@ -1,3 +1,13 @@
+ === 1.5.0 (2012-07-02)
+
+ * Add C-based PostgreSQL array parser, for major speedup in parsing arrays (Dan McClain, jeremyevans)
+
+ === 1.4.0 (2012-06-01)
+
+ * Add support for streaming on PostgreSQL 9.2 using PQsetRowProcessor (jeremyevans)
+
+ * Respect DEBUG environment variable when building (jeremyevans)
+
  === 1.3.0 (2012-04-02)
 
  * Build Windows version against PostgreSQL 9.1.1, ruby 1.8.7, and ruby 1.9.2 (previously 9.0.1, 1.8.6, and 1.9.1) (jeremyevans)
@@ -1,4 +1,4 @@
- Copyright (c) 2010-2011 Jeremy Evans
+ Copyright (c) 2010-2012 Jeremy Evans
 
  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to
@@ -17,3 +17,31 @@ THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
  IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
  CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 
+
+ The original array parsing code (parse_pg_array, read_array) was taken from
+ the pg_array_parser library (https://github.com/dockyard/pg_array_parser)
+ and has the following license:
+
+ Copyright (c) 2012 Dan McClain
+
+ MIT License
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
@@ -38,13 +38,13 @@ Here's an example that uses a modified version of swift's benchmarks
  sequel #select 0.090000 2.020000 2.110000 2.246688 46.54m
  sequel_pg #select 0.000000 0.250000 0.250000 0.361999 7.33m
 
- sequel_pg also has code to speed up the map, to_hash, select_hash,
- select_map, and select_order_map Dataset methods, which is on by
- default. It also has code to speed up the loading of model objects,
- which is off by default as it isn't fully compatible. It doesn't
- handle overriding Model.call, Model#set_values, or
- Model#after_initialize, which may cause problems with the
- following plugins that ship with Sequel:
+ sequel_pg also has code to speed up the map, to_hash, to_hash_groups,
+ select_hash, select_hash_groups, select_map, and select_order_map
+ Dataset methods, which is on by default. It also has code to speed
+ up the loading of model objects, which is off by default as it isn't
+ fully compatible. It doesn't handle overriding Model.call,
+ Model#set_values, or Model#after_initialize, which may cause problems
+ with the following plugins that ship with Sequel:
 
  * class_table_inheritance
  * force_encoding
@@ -63,6 +63,32 @@ enable the model optimization via:
  # Specific dataset
  Artist.dataset.optimize_model_load = true
 
+ == Streaming
+
+ If you are using PostgreSQL 9.2 or higher on the client, then sequel_pg
+ should enable streaming support. This allows you to stream returned
+ rows one at a time, instead of collecting the entire result set in
+ memory (which is how PostgreSQL works by default). You can check
+ if streaming is supported by:
+
+ Sequel::Postgres.supports_streaming?
+
+ If streaming is supported, you can load the streaming support into the
+ database:
+
+ require 'sequel_pg/streaming'
+ DB.extend Sequel::Postgres::Streaming
+
+ Then you can call the Dataset#stream method to have the dataset use
+ the streaming support:
+
+ DB[:table].stream.each{|row| ...}
+
+ If you want to enable streaming for all of a database's datasets, you
+ can do the following:
+
+ DB.extend_datasets Sequel::Postgres::Streaming::AllQueries
+
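The buffering difference the streaming section describes (whole result set in memory vs. rows yielded one at a time) can be pictured with plain Ruby enumerators. This is a conceptual sketch only; no database, libpq, or sequel_pg code is involved, and the row-producing loop merely stands in for the server:

```ruby
# Conceptual sketch: buffered loading builds the whole "result set"
# before iteration; streamed loading yields each row on demand.

def buffered_rows(n)
  (1..n).map { |i| { id: i } }  # entire array exists in memory first
end

def streamed_rows(n)
  Enumerator.new do |yielder|
    (1..n).each { |i| yielder << { id: i } }  # one row at a time
  end
end

# Taking only the first row from the stream produces only that row;
# no large intermediate array is ever built.
streamed_rows(1_000_000).first
```

Both produce the same rows; the difference is purely when (and whether) the full collection is materialized, which is the point of Dataset#stream.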
  == Installing the gem
 
  gem install sequel_pg
@@ -1,4 +1,5 @@
  require 'mkmf'
+ $CFLAGS << " -O0 -g -ggdb" if ENV['DEBUG']
  $CFLAGS << " -Wall " unless RUBY_PLATFORM =~ /solaris/
  dir_config('pg', ENV["POSTGRES_INCLUDE"] || (IO.popen("pg_config --includedir").readline.chomp rescue nil),
  ENV["POSTGRES_LIB"] || (IO.popen("pg_config --libdir").readline.chomp rescue nil))
@@ -13,6 +14,7 @@ if enable_config("static-build")
  end
 
  if (have_library('pq') || have_library('libpq') || have_library('ms/libpq')) && have_header('libpq-fe.h')
+ have_func 'PQsetRowProcessor'
  create_makefile("sequel_pg")
  else
  puts 'Could not find PostgreSQL build environment (libraries & headers): Makefile not created'
@@ -41,11 +41,23 @@
  #define SPG_YIELD_KMV_HASH_GROUPS 12
  #define SPG_YIELD_MKMV_HASH_GROUPS 13
 
+ struct spg_row_proc_info {
+ VALUE dataset;
+ VALUE block;
+ VALUE model;
+ VALUE colsyms[SPG_MAX_FIELDS];
+ VALUE colconvert[SPG_MAX_FIELDS];
+ #if SPG_ENCODING
+ int enc_index;
+ #endif
+ };
+
  static VALUE spg_Sequel;
  static VALUE spg_Blob;
  static VALUE spg_BigDecimal;
  static VALUE spg_Date;
  static VALUE spg_SQLTime;
+ static VALUE spg_PGError;
 
  static VALUE spg_sym_utc;
  static VALUE spg_sym_local;
@@ -102,6 +114,115 @@ static int enc_get_index(VALUE val)
  }
  #endif
 
+ static VALUE read_array(int *index, char *c_pg_array_string, int array_string_length, char *word, VALUE converter)
+ {
+ int word_index = 0;
+
+ /* The current character in the input string. */
+ char c;
+
+ /* 0: Currently outside a quoted string, current word never quoted
+ * 1: Currently inside a quoted string
+ * -1: Currently outside a quoted string, current word previously quoted */
+ int openQuote = 0;
+
+ /* Inside quoted input, this means the next character should be treated
+ * literally, instead of being treated as a metacharacter.
+ * Outside of quoted input, it means that the word shouldn't be pushed to the
+ * array, used when the last entry was a subarray (which adds to the array itself). */
+ int escapeNext = 0;
+
+ VALUE array = rb_ary_new();
+
+ /* Special case the empty array, so it doesn't need to be handled manually inside
+ * the loop. */
+ if(((*index) < array_string_length) && c_pg_array_string[(*index)] == '}')
+ {
+ return array;
+ }
+
+ for(;(*index) < array_string_length; ++(*index))
+ {
+ c = c_pg_array_string[*index];
+ if(openQuote < 1)
+ {
+ if(c == ',' || c == '}')
+ {
+ if(!escapeNext)
+ {
+ if(openQuote == 0 && word_index == 4 && !strncmp(word, "NULL", word_index))
+ {
+ rb_ary_push(array, Qnil);
+ }
+ else if (RTEST(converter))
+ {
+ rb_ary_push(array, rb_funcall(converter, spg_id_call, 1, rb_str_new(word, word_index)));
+ }
+ else
+ {
+ rb_ary_push(array, rb_str_new(word, word_index));
+ }
+ }
+ if(c == '}')
+ {
+ return array;
+ }
+ escapeNext = 0;
+ openQuote = 0;
+ word_index = 0;
+ }
+ else if(c == '"')
+ {
+ openQuote = 1;
+ }
+ else if(c == '{')
+ {
+ (*index)++;
+ rb_ary_push(array, read_array(index, c_pg_array_string, array_string_length, word, converter));
+ escapeNext = 1;
+ }
+ else
+ {
+ word[word_index] = c;
+ word_index++;
+ }
+ }
+ else if (escapeNext) {
+ word[word_index] = c;
+ word_index++;
+ escapeNext = 0;
+ }
+ else if (c == '\\')
+ {
+ escapeNext = 1;
+ }
+ else if (c == '"')
+ {
+ openQuote = -1;
+ }
+ else
+ {
+ word[word_index] = c;
+ word_index++;
+ }
+ }
+
+ return array;
+ }
+
+ static VALUE parse_pg_array(VALUE self, VALUE pg_array_string, VALUE converter) {
+
+ /* convert to c-string, create additional ruby string buffer of
+ * the same length, as that will be the worst case. */
+ char *c_pg_array_string = StringValueCStr(pg_array_string);
+ int array_string_length = RSTRING_LEN(pg_array_string);
+ VALUE buf = rb_str_buf_new(array_string_length);
+ char *word = RSTRING_PTR(buf);
+ int index = 1;
+
+ return read_array(&index, c_pg_array_string, array_string_length, word, converter);
+ }
+
  static VALUE spg_time(const char *s) {
  VALUE now;
  int hour, minute, second, tokens;
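For readers more comfortable in Ruby, the recursive algorithm that read_array implements above can be sketched as follows. This is an illustrative reimplementation only, not the C code or the pg_array_parser gem itself; the helper names are invented for the sketch:

```ruby
# Illustrative Ruby sketch of the recursive array-text parser above.
# pos is a one-element array so the current index survives recursive
# calls, mirroring the C code's int *index parameter.
def read_pg_array(s, pos, converter)
  array = []
  return array if s[pos[0]] == '}'  # special-case the empty array
  word = +''
  quoted = false      # currently inside a double-quoted element
  was_quoted = false  # current word was quoted, so "NULL" stays literal
  skip_push = false   # last entry was a subarray, which pushed itself
  while pos[0] < s.length
    c = s[pos[0]]
    if quoted
      if c == '\\'              # backslash escapes the next character
        pos[0] += 1
        word << s[pos[0]]
      elsif c == '"'
        quoted = false
        was_quoted = true
      else
        word << c
      end
    elsif c == ',' || c == '}'  # end of the current element
      unless skip_push
        value = (!was_quoted && word == 'NULL') ? nil : word
        value = converter.call(value) if converter && !value.nil?
        array << value
      end
      return array if c == '}'
      word = +''
      was_quoted = skip_push = false
    elsif c == '"'
      quoted = true
    elsif c == '{'              # nested array: recurse past the '{'
      pos[0] += 1
      array << read_pg_array(s, pos, converter)
      skip_push = true
    else
      word << c
    end
    pos[0] += 1
  end
  array
end

def parse_pg_array(string, converter = nil)
  read_pg_array(string, [1], converter)  # start at 1, past the opening '{'
end
```

As in the C version, an optional converter proc is applied to each non-NULL element, so `parse_pg_array("{1,2}", ->(v) { Integer(v) })` yields integers instead of strings.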
@@ -399,32 +520,13 @@ static VALUE spg__field_ids(VALUE v, VALUE *colsyms, long nfields) {
  return pg_columns;
  }
 
- static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
- PGresult *res;
- VALUE colsyms[SPG_MAX_FIELDS];
- VALUE colconvert[SPG_MAX_FIELDS];
- long ntuples;
- long nfields;
+ static void spg_set_column_info(VALUE self, PGresult *res, VALUE *colsyms, VALUE *colconvert) {
  long i;
  long j;
- VALUE h;
+ long nfields;
  VALUE conv_procs = 0;
- VALUE opts;
- VALUE pg_type;
- VALUE pg_value;
- char type = SPG_YIELD_NORMAL;
-
- #ifdef SPG_ENCODING
- int enc_index;
- enc_index = enc_get_index(rres);
- #endif
 
- Data_Get_Struct(rres, PGresult, res);
- ntuples = PQntuples(res);
  nfields = PQnfields(res);
- if (nfields > SPG_MAX_FIELDS) {
- rb_raise(rb_eRangeError, "more than %d columns in query", SPG_MAX_FIELDS);
- }
 
  for(j=0; j<nfields; j++) {
  colsyms[j] = rb_funcall(self, spg_id_output_identifier, 1, rb_str_new2(PQfname(res, j)));
@@ -459,6 +561,36 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
  break;
  }
  }
+ }
+
+ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
+ PGresult *res;
+ VALUE colsyms[SPG_MAX_FIELDS];
+ VALUE colconvert[SPG_MAX_FIELDS];
+ long ntuples;
+ long nfields;
+ long i;
+ long j;
+ VALUE h;
+ VALUE opts;
+ VALUE pg_type;
+ VALUE pg_value;
+ char type = SPG_YIELD_NORMAL;
+
+ #ifdef SPG_ENCODING
+ int enc_index;
+ enc_index = enc_get_index(rres);
+ #endif
+
+ Data_Get_Struct(rres, PGresult, res);
+ ntuples = PQntuples(res);
+ nfields = PQnfields(res);
+ if (nfields > SPG_MAX_FIELDS) {
+ rb_raise(rb_eRangeError, "more than %d columns in query", SPG_MAX_FIELDS);
+ }
+
+ spg_set_column_info(self, res, colsyms, colconvert);
+
  rb_ivar_set(self, spg_id_columns, rb_ary_new4(nfields, colsyms));
 
  opts = rb_funcall(self, spg_id_opts, 0);
@@ -675,6 +807,199 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
  return self;
  }
 
+ static VALUE spg_supports_streaming_p(VALUE self) {
+ return
+ #if HAVE_PQSETROWPROCESSOR
+ Qtrue;
+ #else
+ Qfalse;
+ #endif
+ }
+
+ #if HAVE_PQSETROWPROCESSOR
+ static VALUE spg__rp_value(VALUE self, PGresult* res, const PGdataValue* dvs, int j, VALUE* colconvert
+ #ifdef SPG_ENCODING
+ , int enc_index
+ #endif
+ ) {
+ const char *v;
+ PGdataValue dv = dvs[j];
+ VALUE rv;
+ size_t l;
+ int len = dv.len;
+
+ if(len < 0) {
+ rv = Qnil;
+ } else {
+ v = dv.value;
+
+ switch(PQftype(res, j)) {
+ case 16: /* boolean */
+ rv = *v == 't' ? Qtrue : Qfalse;
+ break;
+ case 17: /* bytea */
+ v = PQunescapeBytea((unsigned char*)v, &l);
+ rv = rb_funcall(spg_Blob, spg_id_new, 1, rb_str_new(v, l));
+ PQfreemem((char *)v);
+ break;
+ case 20: /* integer */
+ case 21:
+ case 22:
+ case 23:
+ case 26:
+ rv = rb_str2inum(rb_str_new(v, len), 10);
+ break;
+ case 700: /* float */
+ case 701:
+ if (strncmp("NaN", v, 3) == 0) {
+ rv = spg_nan;
+ } else if (strncmp("Infinity", v, 8) == 0) {
+ rv = spg_pos_inf;
+ } else if (strncmp("-Infinity", v, 9) == 0) {
+ rv = spg_neg_inf;
+ } else {
+ rv = rb_float_new(rb_str_to_dbl(rb_str_new(v, len), Qfalse));
+ }
+ break;
+ case 790: /* numeric */
+ case 1700:
+ rv = rb_funcall(spg_BigDecimal, spg_id_new, 1, rb_str_new(v, len));
+ break;
+ case 1082: /* date */
+ rv = rb_str_new(v, len);
+ rv = spg_date(StringValuePtr(rv));
+ break;
+ case 1083: /* time */
+ case 1266:
+ rv = rb_str_new(v, len);
+ rv = spg_time(StringValuePtr(rv));
+ break;
+ case 1114: /* timestamp */
+ case 1184:
+ rv = rb_str_new(v, len);
+ rv = spg_timestamp(StringValuePtr(rv), self);
+ break;
+ case 18: /* char */
+ case 25: /* text */
+ case 1043: /* varchar*/
+ rv = rb_tainted_str_new(v, len);
+ #ifdef SPG_ENCODING
+ rb_enc_associate_index(rv, enc_index);
+ #endif
+ break;
+ default:
+ rv = rb_tainted_str_new(v, len);
+ #ifdef SPG_ENCODING
+ rb_enc_associate_index(rv, enc_index);
+ #endif
+ if (colconvert[j] != Qnil) {
+ rv = rb_funcall(colconvert[j], spg_id_call, 1, rv);
+ }
+ }
+ }
+ return rv;
+ }
+
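The type-OID switch in spg__rp_value above can be pictured as a lookup from PostgreSQL type OIDs to conversion procs. The following is a hypothetical, simplified Ruby model of that idea, covering only a handful of the OIDs handled above; it is not how the C code is organized and the names are invented for the sketch:

```ruby
# Hypothetical sketch: map a few PostgreSQL type OIDs to procs that
# convert the text wire format into Ruby values, defaulting to the
# raw string for unrecognized types (as the C code's default case does).
require 'bigdecimal'

OID_CONVERTERS = {
  16   => ->(v) { v == 't' },        # boolean: 't' or 'f'
  20   => ->(v) { Integer(v, 10) },  # int8
  21   => ->(v) { Integer(v, 10) },  # int2
  23   => ->(v) { Integer(v, 10) },  # int4
  701  => ->(v) { Float(v) },        # float8 (NaN/Infinity omitted here)
  1700 => ->(v) { BigDecimal(v) },   # numeric
}
OID_CONVERTERS.default = ->(v) { v } # unknown types stay as strings

def convert_value(oid, text)
  return nil if text.nil?            # SQL NULL (len < 0 in the C code)
  OID_CONVERTERS[oid].call(text)
end
```

The real code also handles bytea unescaping, date/time parsing, and per-column converter procs, which are elided here.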
+ static int spg_row_processor(PGresult *res, const PGdataValue *columns, const char **errmsgp, void *param) {
+ long nfields;
+ struct spg_row_proc_info *info;
+ info = (struct spg_row_proc_info *)param;
+ VALUE *colsyms = info->colsyms;
+ VALUE *colconvert = info->colconvert;
+ VALUE self = info->dataset;
+
+ switch (PQresultStatus(res))
+ {
+ case PGRES_TUPLES_OK:
+ case PGRES_COPY_OUT:
+ case PGRES_COPY_IN:
+ #ifdef HAVE_CONST_PGRES_COPY_BOTH
+ case PGRES_COPY_BOTH:
+ #endif
+ case PGRES_EMPTY_QUERY:
+ case PGRES_COMMAND_OK:
+ break;
+ case PGRES_BAD_RESPONSE:
+ case PGRES_FATAL_ERROR:
+ case PGRES_NONFATAL_ERROR:
+ rb_raise(spg_PGError, "error while streaming results");
+ default:
+ rb_raise(spg_PGError, "unexpected result status while streaming results");
+ }
+
+ nfields = PQnfields(res);
+ if(columns == NULL) {
+ spg_set_column_info(self, res, colsyms, colconvert);
+ rb_ivar_set(self, spg_id_columns, rb_ary_new4(nfields, colsyms));
+ } else {
+ long j;
+ VALUE h, m;
+ h = rb_hash_new();
+
+ for(j=0; j<nfields; j++) {
+ rb_hash_aset(h, colsyms[j], spg__rp_value(self, res, columns, j, colconvert
+ #ifdef SPG_ENCODING
+ , info->enc_index
+ #endif
+ ));
+ }
+
+ /* optimize_model_load used, return model instance */
+ if ((m = info->model)) {
+ m = rb_obj_alloc(m);
+ rb_ivar_set(m, spg_id_values, h);
+ h = m;
+ }
+
+ rb_funcall(info->block, spg_id_call, 1, h);
+ }
+ return 1;
+ }
+
+ static VALUE spg_unset_row_processor(VALUE rconn) {
+ PGconn *conn;
+ Data_Get_Struct(rconn, PGconn, conn);
+ if ((PQskipResult(conn)) != NULL) {
+ /* Results remaining when row processor finished,
+ * either because an exception was raised or the iterator
+ * exited early, so skip all remaining rows. */
+ while(PQgetResult(conn) != NULL) {
+ /* Use a separate while loop as PQgetResult is faster than
+ * PQskipResult. */
+ }
+ }
+ PQsetRowProcessor(conn, NULL, NULL);
+ return Qnil;
+ }
+
+ static VALUE spg_with_row_processor(VALUE self, VALUE rconn, VALUE dataset, VALUE block) {
+ struct spg_row_proc_info info;
+ PGconn *conn;
+ Data_Get_Struct(rconn, PGconn, conn);
+ bzero(&info, sizeof(info));
+
+ info.dataset = dataset;
+ info.block = block;
+ info.model = 0;
+ #if SPG_ENCODING
+ info.enc_index = enc_get_index(rconn);
+ #endif
+
+ /* Abuse local variable, detect if optimize_model_load used */
+ block = rb_funcall(dataset, spg_id_opts, 0);
+ if (rb_type(block) == T_HASH && rb_hash_aref(block, spg_sym__sequel_pg_type) == spg_sym_model) {
+ block = rb_hash_aref(block, spg_sym__sequel_pg_value);
+ if (rb_type(block) == T_CLASS) {
+ info.model = block;
+ }
+ }
+
+ PQsetRowProcessor(conn, spg_row_processor, (void*)&info);
+ rb_ensure(rb_yield, Qnil, spg_unset_row_processor, rconn);
+ return Qnil;
+ }
+ #endif
+
  void Init_sequel_pg(void) {
  VALUE c, spg_Postgres;
  ID cg;
@@ -725,6 +1050,7 @@ void Init_sequel_pg(void) {
  spg_BigDecimal = rb_funcall(rb_cObject, cg, 1, rb_str_new2("BigDecimal"));
  spg_Date = rb_funcall(rb_cObject, cg, 1, rb_str_new2("Date"));
  spg_Postgres = rb_funcall(spg_Sequel, cg, 1, rb_str_new2("Postgres"));
+ spg_PGError = rb_funcall(rb_cObject, cg, 1, rb_str_new2("PGError"));
 
  spg_nan = rb_eval_string("0.0/0.0");
  spg_pos_inf = rb_eval_string("1.0/0.0");
@@ -749,5 +1075,14 @@ void Init_sequel_pg(void) {
  rb_define_private_method(c, "yield_hash_rows", spg_yield_hash_rows, 2);
  rb_define_private_method(c, "fetch_rows_set_cols", spg_fetch_rows_set_cols, 1);
 
+ rb_define_singleton_method(spg_Postgres, "supports_streaming?", spg_supports_streaming_p, 0);
+
+ #if HAVE_PQSETROWPROCESSOR
+ c = rb_funcall(spg_Postgres, cg, 1, rb_str_new2("Database"));
+ rb_define_private_method(c, "with_row_processor", spg_with_row_processor, 3);
+ #endif
+
+ rb_define_singleton_method(spg_Postgres, "parse_pg_array", parse_pg_array, 2);
+
  rb_require("sequel_pg/sequel_pg");
  }
@@ -83,3 +83,17 @@ class Sequel::Postgres::Dataset
  (rp = row_proc).is_a?(Class) && (rp < Sequel::Model) && optimize_model_load && !opts[:use_cursor] && !opts[:graph]
  end
  end
+
+ if defined?(Sequel::Postgres::PGArray)
+ # pg_array extension previously loaded
+
+ class Sequel::Postgres::PGArray::Creator
+ # Override Creator to use sequel_pg's C-based parser instead of the pure ruby parser.
+ def call(string)
+ Sequel::Postgres::PGArray.new(Sequel::Postgres.parse_pg_array(string, @converter), @type)
+ end
+ end
+
+ # Remove the pure-ruby parser, no longer needed.
+ Sequel::Postgres::PGArray.send(:remove_const, :Parser)
+ end
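The load-time swap performed above (reopen Creator to redefine #call, then remove the now-unused pure-Ruby Parser constant) can be illustrated with stand-in classes. The module and method names below are invented for the sketch; none of this is the real Sequel or sequel_pg code:

```ruby
# Stand-in module illustrating the swap pattern used above: the slow
# component is replaced by redefining the method that dispatches to it,
# and the obsolete constant is removed so it cannot be used by mistake.
module ArrayishLib
  class Parser                    # stand-in for the pure-Ruby parser
    def parse(string)
      string.split(',')
    end
  end

  class Creator
    def call(string)
      Parser.new.parse(string)    # slow path
    end
  end
end

# Reopen Creator and swap in the "fast" implementation.
class ArrayishLib::Creator
  def call(string)
    string.split(',').map(&:strip)  # stand-in for the C-based fast path
  end
end

# Drop the no-longer-needed implementation.
ArrayishLib.send(:remove_const, :Parser)
```

Because the override happens at require time, callers holding a Creator keep working; only the internal implementation changes.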
@@ -0,0 +1,82 @@
+ unless Sequel::Postgres.respond_to?(:supports_streaming?)
+ raise LoadError, "either sequel_pg not loaded, or an old version of sequel_pg loaded"
+ end
+ unless Sequel::Postgres.supports_streaming?
+ raise LoadError, "streaming is not supported by the version of libpq in use"
+ end
+
+ # Database methods necessary to support streaming. You should extend your
+ # Database object with this:
+ #
+ # DB.extend Sequel::Postgres::Streaming
+ #
+ # Then you can call #stream on your datasets to use the streaming support:
+ #
+ # DB[:table].stream.each{|row| ...}
+ module Sequel::Postgres::Streaming
+ # Also extend the database's datasets to support streaming
+ def self.extended(db)
+ db.extend_datasets(DatasetMethods)
+ end
+
+ private
+
+ # If streaming is requested, set a row processor while executing
+ # the query.
+ def _execute(conn, sql, opts={})
+ if stream = opts[:stream]
+ with_row_processor(conn, *stream){super}
+ else
+ super
+ end
+ end
+
+ # Dataset methods used to implement streaming.
+ module DatasetMethods
+ # If streaming has been requested and the current dataset
+ # can be streamed, request the database use streaming when
+ # executing this query.
+ def fetch_rows(sql, &block)
+ if stream_results?
+ execute(sql, :stream=>[self, block])
+ else
+ super
+ end
+ end
+
+ # Return a clone of the dataset that will use streaming to load
+ # rows.
+ def stream
+ clone(:stream=>true)
+ end
+
+ private
+
+ # Only stream results if streaming has been specifically requested
+ # and the query is streamable.
+ def stream_results?
+ @opts[:stream] && streamable?
+ end
+
+ # Queries using cursors are not streamable, and queries that use
+ # the map/select_map/to_hash/to_hash_groups optimizations are not
+ # streamable, but other queries are streamable.
+ def streamable?
+ spgt = (o = @opts)[:_sequel_pg_type]
+ (spgt.nil? || spgt == :model) && !o[:cursor]
+ end
+ end
+
+ # Extend a database's datasets with this module to enable streaming
+ # on all streamable queries:
+ #
+ # DB.extend_datasets(Sequel::Postgres::Streaming::AllQueries)
+ module AllQueries
+ private
+
+ # Always stream results if the query is streamable.
+ def stream_results?
+ streamable?
+ end
+ end
+ end
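The opts-based design above (Dataset#stream returns a clone with :stream set, which stream_results? and streamable? later consult) can be sketched with a minimal stand-in class. This is illustrative only; MiniDataset is an invented name and not the real Sequel::Dataset:

```ruby
# Minimal stand-in for the design above: a dataset is an immutable
# options holder, #stream returns a clone with :stream set, and
# #stream_results? consults the opts plus a streamability check.
class MiniDataset
  attr_reader :opts

  def initialize(opts = {})
    @opts = opts
  end

  def clone(new_opts)
    MiniDataset.new(@opts.merge(new_opts))
  end

  def stream
    clone(stream: true)  # original dataset is left untouched
  end

  def stream_results?
    !!(@opts[:stream] && streamable?)
  end

  private

  # Mirrors the rule above: cursor-based queries are not streamable.
  def streamable?
    !@opts[:cursor]
  end
end
```

The AllQueries variant corresponds to dropping the `@opts[:stream]` check so that every streamable query streams.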
metadata CHANGED
@@ -1,13 +1,13 @@
  --- !ruby/object:Gem::Specification
  name: sequel_pg
  version: !ruby/object:Gem::Version
- hash: 27
+ hash: 3
  prerelease:
  segments:
  - 1
- - 3
+ - 5
  - 0
- version: 1.3.0
+ version: 1.5.0
  platform: x86-mingw32
  authors:
  - Jeremy Evans
@@ -15,7 +15,7 @@ autorequire:
  bindir: bin
  cert_chain: []
 
- date: 2012-03-24 00:00:00 Z
+ date: 2012-04-07 00:00:00 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: pg
@@ -41,12 +41,12 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- hash: 143
+ hash: 151
  segments:
  - 3
- - 34
+ - 36
  - 0
- version: 3.34.0
+ version: 3.36.0
  type: :runtime
  version_requirements: *id002
  description: |
@@ -72,6 +72,7 @@ files:
  - ext/sequel_pg/extconf.rb
  - ext/sequel_pg/sequel_pg.c
  - lib/sequel_pg/sequel_pg.rb
+ - lib/sequel_pg/streaming.rb
  - lib/1.8/sequel_pg.so
  - lib/1.9/sequel_pg.so
  homepage: http://github.com/jeremyevans/sequel_pg