sequel_pg 1.11.0 → 1.17.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: c587158ff9ea71f6a0ce019384c317b03b8c1a9c6027f363502e98f148b4a958
- data.tar.gz: ad74cf362d21fe64409be4b29536709c5d6d6dbabe3a248a3cad5da5f282124f
+ metadata.gz: 61a49fada0595f92d8452ff231ff0d44ffadf59a887671c9d86178d957f95f44
+ data.tar.gz: 8969b6f0b9492e2f9edb087b57ef7b13993d61eeaabb9a0291478c1a227448ff
  SHA512:
- metadata.gz: 39fb59374fe67630cea51820851b380f9537ef6769599ee43581855eb99e70c5251d03c70134a0512ad6ce36bc127704dcf152a2113de1461deb2839e19830c5
- data.tar.gz: 470e31d71b43e3cad932cad0838805047d8d69013795a28d29b0099460a7a285ae5caae6a3e3950d61b80e30d2668cbf398fdc0dce1e16a1f970226e48c69cdb
+ metadata.gz: 7d5f69fd6e5f2a7dd3b067ca423c85ed2d9fc2f3af9ae191a19507ac3635e654699e5dac6c90ba1f910d5619cf8242a039e07752bfccbfed8c2a57a280fe0bb3
+ data.tar.gz: 899259457171fafd01c689f234da55255a06a4408ab7e58bcbdf8df94a409884a46ce3e613a2faabe9fbec350b65188d3d8a3e1821832b74d7ff9d145bbb16ef
data/CHANGELOG CHANGED
@@ -1,3 +1,79 @@
+ === 1.17.2 (2025-03-14)
+
+ * Add explicit arguments to PQfreemem casts to avoid compilation issues when using the C23 standard (jeremyevans) (#59)
+
+ === 1.17.1 (2023-01-05)
+
+ * Modify LDFLAGS when building on MacOS to allow undefined functions (delphaber) (#53)
+
+ === 1.17.0 (2022-10-05)
+
+ * Do not use pgresult_stream_any when using pg <1.4.4, to avoid double free in certain cases (larskanis) (#50)
+
+ * Support new pgresult_stream_any API in pg 1.4.4 (larskanis) (#50)
+
+ === 1.16.0 (2022-08-16)
+
+ * Fix memory leak when using streaming with pg 1.3.4+ (jeremyevans) (#48)
+
+ * Modify LDFLAGS when building on MacOS arm64 to allow undefined functions (maxsz) (#46)
+
+ * Adjust many internal C types to fix compilation warnings (jeremyevans)
+
+ === 1.15.0 (2022-03-16)
+
+ * Avoid deprecation warning in the pg_streaming extension on pg 1.3+ when streaming a query with bound parameters (jeremyevans)
+
+ * Use pgresult_stream_any when using pg 1.3.4+ for faster streaming (jeremyevans)
+
+ * Do not use streaming by default for Dataset#paged_each in the pg_streaming extension (jeremyevans)
+
+ * Avoid verbose warning if loading sequel_pg after Sequel pg_array extension (jeremyevans)
+
+ === 1.14.0 (2020-09-22)
+
+ * Reduce stack memory usage for result sets with 64 or fewer columns (jeremyevans)
+
+ * Support result sets with more than 256 columns by default (jeremyevans) (#39)
+
+ === 1.13.0 (2020-04-13)
+
+ * Allow overriding of inet/cidr type conversion using conversion procs (beanieboi, jeremyevans) (#36, #37)
+
+ === 1.12.5 (2020-03-23)
+
+ * Fix offset calculation for timestamptz types when datetime_class is DateTime and using local application timezone (jeremyevans)
+
+ * Fix wrong method call when parsing timestamptz types when datetime_class is Time and using utc database timezone and local application timezone (jeremyevans)
+
+ === 1.12.4 (2020-01-02)
+
+ * Work with pg 1.2.1+ (jeremyevans)
+
+ === 1.12.3 (2020-01-02)
+
+ * Warn and do not load sequel_pg if pg >1.2 is used (jeremyevans)
+
+ * Avoid verbose warnings on Ruby 2.7 due to tainting (jeremyevans)
+
+ === 1.12.2 (2019-06-06)
+
+ * Avoid use of pkg_config as it breaks compilation in some environments (jeremyevans) (#33)
+
+ === 1.12.1 (2019-05-31)
+
+ * Avoid using Proc.new without block, fixing deprecation warning on ruby 2.7+ (jeremyevans)
+
+ * Use rb_gc_register_mark_object instead of rb_global_variable (jeremyevans)
+
+ * Use pkg_config instead of pg_config for configuration (jeremyevans) (#31)
+
+ === 1.12.0 (2019-03-01)
+
+ * Allow Dataset#paged_each to be called without a block when using streaming (jeremyevans)
+
+ * Freeze ruby strings used as temporary buffers for parsing arrays (jeremyevans)
+
  === 1.11.0 (2018-07-09)
 
  * Set encoding correctly for hash symbol keys (jeremyevans)
data/MIT-LICENSE CHANGED
@@ -1,4 +1,4 @@
- Copyright (c) 2010-2018 Jeremy Evans
+ Copyright (c) 2010-2020 Jeremy Evans
 
  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to
data/README.rdoc CHANGED
@@ -70,12 +70,6 @@ can do the following:
 
  gem install sequel_pg
 
- Note that by default sequel_pg only supports result sets with up to
- 256 columns. If you will have a result set with more than 256 columns,
- you should modify the maximum supported number of columns via:
-
- gem install sequel_pg -- --with-cflags=\"-DSPG_MAX_FIELDS=512\"
-
  Make sure the pg_config binary is in your PATH so the installation
  can find the PostgreSQL shared library and header files. Alternatively,
  you can use the POSTGRES_LIB and POSTGRES_INCLUDE environment
@@ -83,9 +77,16 @@ variables to specify the shared library and header directories.
 
  == Running the specs
 
- sequel_pg doesn't ship with it's own specs. It's designed to
- replace a part of Sequel, so it just uses Sequel's specs.
- Specifically, the spec_postgres rake task from Sequel.
+ sequel_pg is designed to replace a part of Sequel, so it should be tested
+ using Sequel's specs (the spec_postgres rake task). There is a spec_cov
+ task that assumes you have Sequel checked out at ../sequel, and uses a
+ small spec suite for parts of sequel_pg not covered by Sequel's specs.
+ It sets the SEQUEL_PG_STREAM environment variable when running Sequel's
+ specs, make sure that spec/spec_config.rb in Sequel is set to connect
+ to PostgreSQL and use the following additional settings:
+
+ DB.extension(:pg_streaming)
+ DB.stream_all_queries = true
 
  == Reporting issues/bugs
 
@@ -118,19 +119,6 @@ requirements:
 
  rake build
 
- == Platforms Supported
-
- sequel_pg has been tested on the following:
-
- * ruby 1.9.3
- * ruby 2.0
- * ruby 2.1
- * ruby 2.2
- * ruby 2.3
- * ruby 2.4
- * ruby 2.5
- * ruby 2.6
-
  == Known Issues
 
  * You must be using the ISO PostgreSQL date format (which is the
data/Rakefile CHANGED
@@ -1,7 +1,6 @@
- require "rake"
  require "rake/clean"
 
- CLEAN.include %w'**.rbc rdoc'
+ CLEAN.include %w'**.rbc rdoc coverage'
 
  desc "Do a full cleaning"
  task :distclean do
@@ -19,3 +18,25 @@ begin
  Rake::ExtensionTask.new('sequel_pg')
  rescue LoadError
  end
+
+ # This assumes you have sequel checked out in ../sequel, and that
+ # spec_postgres is setup to run Sequel's PostgreSQL specs.
+ desc "Run tests with coverage"
+ task :spec_cov=>:compile do
+ ENV['RUBYLIB'] = "#{__dir__}/lib:#{ENV['RUBYLIB']}"
+ ENV['RUBYOPT'] = "-r #{__dir__}/spec/coverage_helper.rb #{ENV['RUBYOPT']}"
+ ENV['SIMPLECOV_COMMAND_NAME'] = "sequel_pg"
+ sh %'#{FileUtils::RUBY} -I ../sequel/lib spec/sequel_pg_spec.rb'
+
+ ENV['RUBYOPT'] = "-I lib -r sequel -r sequel/extensions/pg_array #{ENV['RUBYOPT']}"
+ ENV['SEQUEL_PG_STREAM'] = "1"
+ ENV['SIMPLECOV_COMMAND_NAME'] = "sequel"
+ sh %'cd ../sequel && #{FileUtils::RUBY} spec/adapter_spec.rb postgres'
+ end
+
+ desc "Run CI tests"
+ task :spec_ci=>:compile do
+ ENV['SEQUEL_PG_SPEC_URL'] = ENV['SEQUEL_POSTGRES_URL'] = "postgres://localhost/?user=postgres&password=postgres"
+ sh %'#{FileUtils::RUBY} -I lib -I sequel/lib spec/sequel_pg_spec.rb'
+ sh %'cd sequel && #{FileUtils::RUBY} -I lib -I ../lib spec/adapter_spec.rb postgres'
+ end
data/ext/sequel_pg/extconf.rb CHANGED
@@ -1,6 +1,8 @@
  require 'mkmf'
  $CFLAGS << " -O0 -g" if ENV['DEBUG']
+ $CFLAGS << " -Drb_tainted_str_new=rb_str_new -DNO_TAINT" if RUBY_VERSION >= '2.7'
  $CFLAGS << " -Wall " unless RUBY_PLATFORM =~ /solaris/
+ $LDFLAGS += " -Wl,-U,_pg_get_pgconn -Wl,-U,_pg_get_result_enc_idx -Wl,-U,_pgresult_get -Wl,-U,_pgresult_stream_any " if RUBY_PLATFORM =~ /darwin/
  dir_config('pg', ENV["POSTGRES_INCLUDE"] || (IO.popen("pg_config --includedir").readline.chomp rescue nil),
  ENV["POSTGRES_LIB"] || (IO.popen("pg_config --libdir").readline.chomp rescue nil))
 
data/ext/sequel_pg/sequel_pg.c CHANGED
@@ -1,4 +1,4 @@
- #define SEQUEL_PG_VERSION_INTEGER 11100
+ #define SEQUEL_PG_VERSION_INTEGER 11702
 
  #include <string.h>
  #include <stdio.h>
@@ -15,9 +15,6 @@
  #include <ruby/version.h>
  #include <ruby/encoding.h>
 
- #ifndef SPG_MAX_FIELDS
- #define SPG_MAX_FIELDS 256
- #endif
  #define SPG_MINUTES_PER_DAY 1440.0
  #define SPG_SECONDS_PER_DAY 86400.0
 
@@ -72,8 +69,12 @@
  /* External functions defined by ruby-pg */
  PGconn* pg_get_pgconn(VALUE);
  PGresult* pgresult_get(VALUE);
+ int pg_get_result_enc_idx(VALUE);
+ VALUE pgresult_stream_any(VALUE self, int (*yielder)(VALUE, int, int, void*), void* data);
 
  static int spg_use_ipaddr_alloc;
+ static int spg_use_pg_get_result_enc_idx;
+ static int spg_use_pg_stream_any;
 
  static VALUE spg_Sequel;
  static VALUE spg_PGArray;
@@ -198,10 +199,10 @@ static int enc_get_index(VALUE val) {
  } while(0)
 
  static VALUE
- pg_text_dec_integer(char *val, int len)
+ pg_text_dec_integer(char *val, size_t len)
  {
  long i;
- int max_len;
+ size_t max_len;
 
  if( sizeof(i) >= 8 && FIXNUM_MAX >= 1000000000000000000LL ){
  /* 64 bit system can safely handle all numbers up to 18 digits as Fixnum */
@@ -256,7 +257,7 @@ pg_text_dec_integer(char *val, int len)
 
  static VALUE spg__array_col_value(char *v, size_t length, VALUE converter, int enc_index, int oid, VALUE db);
 
- static VALUE read_array(int *index, char *c_pg_array_string, int array_string_length, VALUE buf, VALUE converter, int enc_index, int oid, VALUE db) {
+ static VALUE read_array(int *index, char *c_pg_array_string, long array_string_length, VALUE buf, VALUE converter, int enc_index, int oid, VALUE db) {
  int word_index = 0;
  char *word = RSTRING_PTR(buf);
 
@@ -352,7 +353,7 @@ static VALUE read_array(int *index, char *c_pg_array_string, int array_string_le
  return array;
  }
 
- static VALUE check_pg_array(int* index, char *c_pg_array_string, int array_string_length) {
+ static VALUE check_pg_array(int* index, char *c_pg_array_string, long array_string_length) {
  if (array_string_length == 0) {
  rb_raise(rb_eArgError, "unexpected PostgreSQL array format, empty");
  } else if (array_string_length == 2 && c_pg_array_string[0] == '{' && c_pg_array_string[0] == '}') {
@@ -383,7 +384,7 @@ static VALUE parse_pg_array(VALUE self, VALUE pg_array_string, VALUE converter)
  /* convert to c-string, create additional ruby string buffer of
  * the same length, as that will be the worst case. */
  char *c_pg_array_string = StringValueCStr(pg_array_string);
- int array_string_length = RSTRING_LEN(pg_array_string);
+ long array_string_length = RSTRING_LEN(pg_array_string);
  int index = 1;
  VALUE ary;
 
@@ -391,10 +392,14 @@ static VALUE parse_pg_array(VALUE self, VALUE pg_array_string, VALUE converter)
  return ary;
  }
 
+ ary = rb_str_buf_new(array_string_length);
+ rb_str_set_len(ary, array_string_length);
+ rb_obj_freeze(ary);
+
  return read_array(&index,
  c_pg_array_string,
  array_string_length,
- rb_str_buf_new(array_string_length),
+ ary,
  converter,
  enc_get_index(pg_array_string),
  0,
@@ -405,7 +410,7 @@ static VALUE spg_timestamp_error(const char *s, VALUE self, const char *error_ms
  self = rb_funcall(self, spg_id_db, 0);
  if(RTEST(rb_funcall(self, spg_id_convert_infinite_timestamps, 0))) {
  if((strcmp(s, "infinity") == 0) || (strcmp(s, "-infinity") == 0)) {
- return rb_funcall(self, spg_id_infinite_timestamp_value, 1, rb_tainted_str_new2(s));
+ return rb_funcall(self, spg_id_infinite_timestamp_value, 1, rb_tainted_str_new(s, strlen(s)));
  }
  }
  rb_raise(rb_eArgError, "%s", error_msg);
@@ -529,7 +534,7 @@ static VALUE spg_timestamp(const char *s, VALUE self, size_t length, int tz) {
  }
 
  if (remaining < 19) {
- return spg_timestamp_error(s, self, "unexpected timetamp format, too short");
+ return spg_timestamp_error(s, self, "unexpected timestamp format, too short");
  }
 
  year = parse_year(&p, &remaining);
@@ -625,7 +630,7 @@ static VALUE spg_timestamp(const char *s, VALUE self, size_t length, int tz) {
  if (tz & SPG_APP_UTC) {
  dt = rb_funcall(dt, spg_id_utc, 0);
  } else if (tz & SPG_APP_LOCAL) {
- dt = rb_funcall(dt, spg_id_local, 0);
+ dt = rb_funcall(dt, spg_id_localtime, 0);
  }
 
  return dt;
@@ -697,8 +702,8 @@ static VALUE spg_timestamp(const char *s, VALUE self, size_t length, int tz) {
  SPG_DT_ADD_USEC
 
  if (tz & SPG_APP_LOCAL) {
- utc_offset = NUM2INT(rb_funcall(rb_funcall(rb_cTime, spg_id_new, 0), spg_id_utc_offset, 0))/SPG_SECONDS_PER_DAY;
- dt = rb_funcall(dt, spg_id_new_offset, 1, rb_float_new(utc_offset));
+ offset_fraction = NUM2INT(rb_funcall(rb_funcall(rb_cTime, spg_id_local, 6, INT2NUM(year), INT2NUM(month), INT2NUM(day), INT2NUM(hour), INT2NUM(min), INT2NUM(sec)), spg_id_utc_offset, 0))/SPG_SECONDS_PER_DAY;
+ dt = rb_funcall(dt, spg_id_new_offset, 1, rb_float_new(offset_fraction));
  } else if (tz & SPG_APP_UTC) {
  dt = rb_funcall(dt, spg_id_new_offset, 1, INT2NUM(0));
  }
@@ -853,7 +858,11 @@ static VALUE spg_create_Blob(VALUE v) {
  if (bi->blob_string == NULL) {
  rb_raise(rb_eNoMemError, "PQunescapeBytea failure: probably not enough memory");
  }
- return rb_obj_taint(rb_str_new_with_class(spg_Blob_instance, bi->blob_string, bi->length));
+ v = rb_str_new_with_class(spg_Blob_instance, bi->blob_string, bi->length);
+ #ifndef NO_TAINT
+ rb_obj_taint(v);
+ #endif
+ return v;
  }
 
  static VALUE spg_fetch_rows_set_cols(VALUE self, VALUE ignore) {
@@ -870,7 +879,7 @@ static VALUE spg__array_col_value(char *v, size_t length, VALUE converter, int e
  break;
  case 17: /* bytea */
  bi.blob_string = (char *)PQunescapeBytea((unsigned char*)v, &bi.length);
- rv = rb_ensure(spg_create_Blob, (VALUE)&bi, (VALUE(*)())PQfreemem, (VALUE)bi.blob_string);
+ rv = rb_ensure(spg_create_Blob, (VALUE)&bi, (VALUE(*)(VALUE))PQfreemem, (VALUE)bi.blob_string);
  break;
  case 20: /* integer */
  case 21:
@@ -921,8 +930,10 @@ static VALUE spg__array_col_value(char *v, size_t length, VALUE converter, int e
  break;
  case 869: /* inet */
  case 650: /* cidr */
- rv = spg_inet(v, length);
- break;
+ if (!RTEST(converter)) {
+ rv = spg_inet(v, length);
+ break;
+ }
  default:
  rv = rb_tainted_str_new(v, length);
  PG_ENCODING_SET_NOCHECK(rv, enc_index);
@@ -936,6 +947,7 @@ static VALUE spg__array_col_value(char *v, size_t length, VALUE converter, int e
 
  static VALUE spg_array_value(char *c_pg_array_string, int array_string_length, VALUE converter, int enc_index, int oid, VALUE self, VALUE array_type) {
  int index = 1;
+ VALUE buf;
  VALUE args[2];
  args[1] = array_type;
 
@@ -943,7 +955,10 @@ static VALUE spg_array_value(char *c_pg_array_string, int array_string_length, V
  return rb_class_new_instance(2, args, spg_PGArray);
  }
 
- args[0] = read_array(&index, c_pg_array_string, array_string_length, rb_str_buf_new(array_string_length), converter, enc_index, oid, self);
+ buf = rb_str_buf_new(array_string_length);
+ rb_str_set_len(buf, array_string_length);
+ rb_obj_freeze(buf);
+ args[0] = read_array(&index, c_pg_array_string, array_string_length, buf, converter, enc_index, oid, self);
  return rb_class_new_instance(2, args, spg_PGArray);
  }
 
@@ -997,12 +1012,12 @@ static int spg_timestamp_info_bitmask(VALUE self) {
  return tz;
  }
 
- static VALUE spg__col_value(VALUE self, PGresult *res, long i, long j, VALUE* colconvert, int enc_index) {
+ static VALUE spg__col_value(VALUE self, PGresult *res, int i, int j, VALUE* colconvert, int enc_index) {
  char *v;
  VALUE rv;
  int ftype = PQftype(res, j);
  VALUE array_type;
- VALUE scalar_oid;
+ int scalar_oid;
  struct spg_blob_initialization bi;
 
  if(PQgetisnull(res, i, j)) {
@@ -1016,7 +1031,7 @@
  break;
  case 17: /* bytea */
  bi.blob_string = (char *)PQunescapeBytea((unsigned char*)v, &bi.length);
- rv = rb_ensure(spg_create_Blob, (VALUE)&bi, (VALUE(*)())PQfreemem, (VALUE)bi.blob_string);
+ rv = rb_ensure(spg_create_Blob, (VALUE)&bi, (VALUE(*)(VALUE))PQfreemem, (VALUE)bi.blob_string);
  break;
  case 20: /* integer */
  case 21:
@@ -1065,10 +1080,6 @@ static VALUE spg__col_value(VALUE self, PGresult *res, long i, long j, VALUE* co
  rv = rb_tainted_str_new(v, PQgetlength(res, i, j));
  PG_ENCODING_SET_NOCHECK(rv, enc_index);
  break;
- case 869: /* inet */
- case 650: /* cidr */
- rv = spg_inet(v, PQgetlength(res, i, j));
- break;
  /* array types */
  case 1009:
  case 1014:
@@ -1206,17 +1217,30 @@ static VALUE spg__col_value(VALUE self, PGresult *res, long i, long j, VALUE* co
  scalar_oid = 22;
  break;
  case 1041:
+ if (RTEST(colconvert[j])) {
+ goto default_cond;
+ }
  array_type = spg_sym_inet;
  scalar_oid = 869;
  break;
  case 651:
+ if (RTEST(colconvert[j])) {
+ goto default_cond;
+ }
  array_type = spg_sym_cidr;
  scalar_oid = 650;
  break;
  }
  rv = spg_array_value(v, PQgetlength(res, i, j), colconvert[j], enc_index, scalar_oid, self, array_type);
  break;
+ case 869: /* inet */
+ case 650: /* cidr */
+ if (colconvert[j] == Qnil) {
+ rv = spg_inet(v, PQgetlength(res, i, j));
+ break;
+ }
  default:
+ default_cond:
  rv = rb_tainted_str_new(v, PQgetlength(res, i, j));
  PG_ENCODING_SET_NOCHECK(rv, enc_index);
  if (colconvert[j] != Qnil) {
@@ -1227,20 +1251,20 @@ static VALUE spg__col_value(VALUE self, PGresult *res, long i, long j, VALUE* co
  return rv;
  }
 
- static VALUE spg__col_values(VALUE self, VALUE v, VALUE *colsyms, long nfields, PGresult *res, long i, VALUE *colconvert, int enc_index) {
+ static VALUE spg__col_values(VALUE self, VALUE v, VALUE *colsyms, long nfields, PGresult *res, int i, VALUE *colconvert, int enc_index) {
  long j;
  VALUE cur;
  long len = RARRAY_LEN(v);
  VALUE a = rb_ary_new2(len);
  for (j=0; j<len; j++) {
  cur = rb_ary_entry(v, j);
- rb_ary_store(a, j, cur == Qnil ? Qnil : spg__col_value(self, res, i, NUM2LONG(cur), colconvert, enc_index));
+ rb_ary_store(a, j, cur == Qnil ? Qnil : spg__col_value(self, res, i, NUM2INT(cur), colconvert, enc_index));
  }
  return a;
  }
 
- static long spg__field_id(VALUE v, VALUE *colsyms, long nfields) {
- long j;
+ static int spg__field_id(VALUE v, VALUE *colsyms, long nfields) {
+ int j;
  for (j=0; j<nfields; j++) {
  if (colsyms[j] == v) {
  return j;
@@ -1251,7 +1275,7 @@ static long spg__field_id(VALUE v, VALUE *colsyms, long nfields) {
 
  static VALUE spg__field_ids(VALUE v, VALUE *colsyms, long nfields) {
  long i;
- long j;
+ int j;
  VALUE cur;
  long len = RARRAY_LEN(v);
  VALUE pg_columns = rb_ary_new2(len);
@@ -1264,9 +1288,9 @@ static VALUE spg__field_ids(VALUE v, VALUE *colsyms, long nfields) {
  }
 
  static void spg_set_column_info(VALUE self, PGresult *res, VALUE *colsyms, VALUE *colconvert, int enc_index) {
- long i;
- long j;
- long nfields;
+ int i;
+ int j;
+ int nfields;
  int timestamp_info = 0;
  int time_info = 0;
  VALUE conv_procs = 0;
@@ -1355,34 +1379,19 @@ static void spg_set_column_info(VALUE self, PGresult *res, VALUE *colsyms, VALUE
  rb_funcall(self, spg_id_columns_equal, 1, rb_ary_new4(nfields, colsyms));
  }
 
- static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
- PGresult *res;
- VALUE colsyms[SPG_MAX_FIELDS];
- VALUE colconvert[SPG_MAX_FIELDS];
- long ntuples;
- long nfields;
- long i;
- long j;
+ static VALUE spg_yield_hash_rows_internal(VALUE self, PGresult *res, int enc_index, VALUE* colsyms, VALUE* colconvert) {
+ int ntuples;
+ int nfields;
+ int i;
+ int j;
  VALUE h;
  VALUE opts;
  VALUE pg_type;
  VALUE pg_value;
  char type = SPG_YIELD_NORMAL;
- int enc_index;
-
- if (!RTEST(rres)) {
- return self;
- }
- res = pgresult_get(rres);
-
- enc_index = enc_get_index(rres);
 
  ntuples = PQntuples(res);
  nfields = PQnfields(res);
- if (nfields > SPG_MAX_FIELDS) {
- rb_raise(rb_eRangeError, "more than %d columns in query (%ld columns detected)", SPG_MAX_FIELDS, nfields);
- }
-
  spg_set_column_info(self, res, colsyms, colconvert, enc_index);
 
  opts = rb_funcall(self, spg_id_opts, 0);
@@ -1474,7 +1483,7 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
  case SPG_YIELD_KV_HASH_GROUPS:
  /* Hash with single key and single value */
  {
- VALUE k, v;
+ int k, v;
  h = rb_hash_new();
  k = spg__field_id(rb_ary_entry(pg_value, 0), colsyms, nfields);
  v = spg__field_id(rb_ary_entry(pg_value, 1), colsyms, nfields);
@@ -1502,7 +1511,8 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
  case SPG_YIELD_MKV_HASH_GROUPS:
  /* Hash with array of keys and single value */
  {
- VALUE k, v;
+ VALUE k;
+ int v;
  h = rb_hash_new();
  k = spg__field_ids(rb_ary_entry(pg_value, 0), colsyms, nfields);
  v = spg__field_id(rb_ary_entry(pg_value, 1), colsyms, nfields);
@@ -1530,7 +1540,8 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
  case SPG_YIELD_KMV_HASH_GROUPS:
  /* Hash with single keys and array of values */
  {
- VALUE k, v;
+ VALUE v;
+ int k;
  h = rb_hash_new();
  k = spg__field_id(rb_ary_entry(pg_value, 0), colsyms, nfields);
  v = spg__field_ids(rb_ary_entry(pg_value, 1), colsyms, nfields);
@@ -1599,6 +1610,40 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
  return self;
  }
 
+ #define def_spg_yield_hash_rows(max_fields) static VALUE spg_yield_hash_rows_ ## max_fields(VALUE self, PGresult *res, int enc_index) { \
+ VALUE colsyms[max_fields]; \
+ VALUE colconvert[max_fields]; \
+ return spg_yield_hash_rows_internal(self, res, enc_index, colsyms, colconvert); \
+ }
+
+ def_spg_yield_hash_rows(16)
+ def_spg_yield_hash_rows(64)
+ def_spg_yield_hash_rows(256)
+ def_spg_yield_hash_rows(1664)
+
+ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
+ PGresult *res;
+ int nfields;
+ int enc_index;
+
+ if (!RTEST(rres)) {
+ return self;
+ }
+ res = pgresult_get(rres);
+
+ enc_index = spg_use_pg_get_result_enc_idx ? pg_get_result_enc_idx(rres) : enc_get_index(rres);
+
+ nfields = PQnfields(res);
+ if (nfields <= 16) return spg_yield_hash_rows_16(self, res, enc_index);
+ else if (nfields <= 64) return spg_yield_hash_rows_64(self, res, enc_index);
+ else if (nfields <= 256) return spg_yield_hash_rows_256(self, res, enc_index);
+ else if (nfields <= 1664) return spg_yield_hash_rows_1664(self, res, enc_index);
+ else rb_raise(rb_eRangeError, "more than 1664 columns in query (%d columns detected)", nfields);
+
+ /* UNREACHABLE */
+ return self;
+ }
+
  static VALUE spg_supports_streaming_p(VALUE self) {
  return
#if HAVE_PQSETSINGLEROWMODE
@@ -1618,32 +1663,50 @@ static VALUE spg_set_single_row_mode(VALUE self) {
  return Qnil;
  }
 
- static VALUE spg__yield_each_row(VALUE self) {
- PGresult *res;
- VALUE rres;
- VALUE rconn;
- VALUE colsyms[SPG_MAX_FIELDS];
- VALUE colconvert[SPG_MAX_FIELDS];
- long nfields;
- long j;
+ struct spg__yield_each_row_stream_data {
+ VALUE self;
+ VALUE *colsyms;
+ VALUE *colconvert;
+ VALUE pg_value;
+ int enc_index;
+ char type;
+ };
+
+ static int spg__yield_each_row_stream(VALUE rres, int ntuples, int nfields, void *rdata) {
+ struct spg__yield_each_row_stream_data* data = (struct spg__yield_each_row_stream_data *)rdata;
+ VALUE h = rb_hash_new();
+ VALUE self = data->self;
+ VALUE *colsyms = data->colsyms;
+ VALUE *colconvert= data->colconvert;
+ PGresult *res = pgresult_get(rres);
+ int enc_index = data->enc_index;
+ int j;
+
+ for(j=0; j<nfields; j++) {
+ rb_hash_aset(h, colsyms[j], spg__col_value(self, res, 0, j, colconvert , enc_index));
+ }
+
+ if(data->type == SPG_YIELD_MODEL) {
+ VALUE model = rb_obj_alloc(data->pg_value);
+ rb_ivar_set(model, spg_id_values, h);
+ rb_yield(model);
+ } else {
+ rb_yield(h);
+ }
+ return 1; /* clear the result */
+ }
+
+ static VALUE spg__yield_each_row_internal(VALUE self, VALUE rconn, VALUE rres, PGresult *res, int enc_index, VALUE *colsyms, VALUE *colconvert) {
+ int nfields;
+ int j;
  VALUE h;
  VALUE opts;
  VALUE pg_type;
  VALUE pg_value = Qnil;
  char type = SPG_YIELD_NORMAL;
- int enc_index;
-
- rconn = rb_ary_entry(self, 1);
- self = rb_ary_entry(self, 0);
-
- rres = rb_funcall(rconn, spg_id_get_result, 0);
- if (rres == Qnil) {
- goto end_yield_each_row;
- }
- rb_funcall(rres, spg_id_check, 0);
- res = pgresult_get(rres);
+ struct spg__yield_each_row_stream_data data;
 
- enc_index = enc_get_index(rres);
+ nfields = PQnfields(res);
 
  /* Only handle regular and model types. All other types require compiling all
  * of the results at once, which is not a use case for streaming. The streaming
@@ -1657,14 +1720,20 @@ static VALUE spg__yield_each_row(VALUE self) {
  }
  }
 
- nfields = PQnfields(res);
- if (nfields > SPG_MAX_FIELDS) {
- rb_funcall(rres, spg_id_clear, 0);
- rb_raise(rb_eRangeError, "more than %d columns in query", SPG_MAX_FIELDS);
- }
-
  spg_set_column_info(self, res, colsyms, colconvert, enc_index);
 
+ if (spg_use_pg_stream_any) {
+ data.self = self;
+ data.colsyms = colsyms;
+ data.colconvert = colconvert;
+ data.pg_value = pg_value;
+ data.enc_index = enc_index;
+ data.type = type;
+
+ pgresult_stream_any(rres, spg__yield_each_row_stream, &data);
+ return self;
+ }
+
  while (PQntuples(res) != 0) {
  h = rb_hash_new();
  for(j=0; j<nfields; j++) {
@@ -1684,14 +1753,57 @@ static VALUE spg__yield_each_row(VALUE self) {
 
  rres = rb_funcall(rconn, spg_id_get_result, 0);
  if (rres == Qnil) {
- goto end_yield_each_row;
+ return self;
  }
  rb_funcall(rres, spg_id_check, 0);
  res = pgresult_get(rres);
  }
  rb_funcall(rres, spg_id_clear, 0);
 
- end_yield_each_row:
+ return self;
+ }
+
+ #define def_spg__yield_each_row(max_fields) static VALUE spg__yield_each_row_ ## max_fields(VALUE self, VALUE rconn, VALUE rres, PGresult *res, int enc_index) { \
+ VALUE colsyms[max_fields]; \
+ VALUE colconvert[max_fields]; \
+ return spg__yield_each_row_internal(self, rconn, rres, res, enc_index, colsyms, colconvert); \
+ }
+
+ def_spg__yield_each_row(16)
+ def_spg__yield_each_row(64)
+ def_spg__yield_each_row(256)
+ def_spg__yield_each_row(1664)
+
+ static VALUE spg__yield_each_row(VALUE self) {
+ PGresult *res;
+ VALUE rres;
+ VALUE rconn;
+ int enc_index;
+ int nfields;
+
+ rconn = rb_ary_entry(self, 1);
+ self = rb_ary_entry(self, 0);
+
+ rres = rb_funcall(rconn, spg_id_get_result, 0);
+ if (rres == Qnil) {
+ return self;
+ }
+ rb_funcall(rres, spg_id_check, 0);
+ res = pgresult_get(rres);
+
+ enc_index = spg_use_pg_get_result_enc_idx ? pg_get_result_enc_idx(rres) : enc_get_index(rres);
+
+ nfields = PQnfields(res);
+ if (nfields <= 16) return spg__yield_each_row_16(self, rconn, rres, res, enc_index);
+ else if (nfields <= 64) return spg__yield_each_row_64(self, rconn, rres, res, enc_index);
+ else if (nfields <= 256) return spg__yield_each_row_256(self, rconn, rres, res, enc_index);
+ else if (nfields <= 1664) return spg__yield_each_row_1664(self, rconn, rres, res, enc_index);
+ else {
+ rb_funcall(rres, spg_id_clear, 0);
+ rb_raise(rb_eRangeError, "more than 1664 columns in query (%d columns detected)", nfields);
+ }
+
+ /* UNREACHABLE */
  return self;
  }
 
@@ -1737,7 +1849,7 @@ void Init_sequel_pg(void) {
   VALUE c, spg_Postgres;
 
   spg_Sequel = rb_const_get(rb_cObject, rb_intern("Sequel"));
-  rb_global_variable(&spg_Sequel);
+  rb_gc_register_mark_object(spg_Sequel);
   spg_Postgres = rb_const_get(spg_Sequel, rb_intern("Postgres"));
 
   if(rb_obj_respond_to(spg_Postgres, rb_intern("sequel_pg_version_supported?"), 0)) {
@@ -1746,7 +1858,22 @@ void Init_sequel_pg(void) {
       return;
     }
   }
-
+
+  c = rb_eval_string("defined?(PG::VERSION) && PG::VERSION.split('.').map(&:to_i)");
+  if (RB_TYPE_P(c, T_ARRAY) && RARRAY_LEN(c) >= 3) {
+    if (FIX2INT(RARRAY_AREF(c, 0)) > 1) {
+      spg_use_pg_get_result_enc_idx = 1;
+      spg_use_pg_stream_any = 1;
+    } else if (FIX2INT(RARRAY_AREF(c, 0)) == 1) {
+      if (FIX2INT(RARRAY_AREF(c, 1)) >= 2) {
+        spg_use_pg_get_result_enc_idx = 1;
+      }
+      if (FIX2INT(RARRAY_AREF(c, 1)) > 4 || (FIX2INT(RARRAY_AREF(c, 1)) == 4 && FIX2INT(RARRAY_AREF(c, 2)) >= 4)) {
+        spg_use_pg_stream_any = 1;
+      }
+    }
+  }
+
   rb_const_set(spg_Postgres, rb_intern("SEQUEL_PG_VERSION_INTEGER"), INT2FIX(SEQUEL_PG_VERSION_INTEGER));
 
   spg_id_BigDecimal = rb_intern("BigDecimal");
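The version gate added in `Init_sequel_pg` parses `PG::VERSION` and enables two capabilities: the result-encoding API available in pg 1.2+, and `pgresult_stream_any` from pg 1.4.4+ (per the CHANGELOG, earlier pg versions could double-free when streaming). The same logic rendered in plain Ruby (`use_pg_feature_flags` is a hypothetical helper for illustration; the real check runs in C):

```ruby
# Translate the FIX2INT comparisons above into Ruby: returns which
# pg-gem features sequel_pg may rely on for a given version string.
def use_pg_feature_flags(version_string)
  major, minor, tiny = version_string.split('.').map(&:to_i)
  enc_idx    = major > 1 || (major == 1 && minor >= 2)
  stream_any = major > 1 || (major == 1 && (minor > 4 || (minor == 4 && tiny >= 4)))
  { enc_idx: enc_idx, stream_any: stream_any }
end
```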
@@ -1829,35 +1956,35 @@ void Init_sequel_pg(void) {
   spg_sym_cidr = ID2SYM(rb_intern("cidr"));
 
   spg_Blob = rb_const_get(rb_const_get(spg_Sequel, rb_intern("SQL")), rb_intern("Blob"));
-  rb_global_variable(&spg_Blob);
+  rb_gc_register_mark_object(spg_Blob);
   spg_Blob_instance = rb_obj_freeze(rb_funcall(spg_Blob, spg_id_new, 0));
-  rb_global_variable(&spg_Blob_instance);
+  rb_gc_register_mark_object(spg_Blob_instance);
   spg_SQLTime = rb_const_get(spg_Sequel, rb_intern("SQLTime"));
-  rb_global_variable(&spg_SQLTime);
+  rb_gc_register_mark_object(spg_SQLTime);
   spg_Date = rb_const_get(rb_cObject, rb_intern("Date"));
-  rb_global_variable(&spg_Date);
+  rb_gc_register_mark_object(spg_Date);
   spg_DateTime = rb_const_get(rb_cObject, rb_intern("DateTime"));
-  rb_global_variable(&spg_DateTime);
+  rb_gc_register_mark_object(spg_DateTime);
   spg_PGError = rb_const_get(rb_const_get(rb_cObject, rb_intern("PG")), rb_intern("Error"));
-  rb_global_variable(&spg_PGError);
+  rb_gc_register_mark_object(spg_PGError);
 
   spg_nan = rb_eval_string("0.0/0.0");
-  rb_global_variable(&spg_nan);
+  rb_gc_register_mark_object(spg_nan);
   spg_pos_inf = rb_eval_string("1.0/0.0");
-  rb_global_variable(&spg_pos_inf);
+  rb_gc_register_mark_object(spg_pos_inf);
   spg_neg_inf = rb_eval_string("-1.0/0.0");
-  rb_global_variable(&spg_neg_inf);
+  rb_gc_register_mark_object(spg_neg_inf);
   spg_usec_per_day = ULL2NUM(86400000000ULL);
-  rb_global_variable(&spg_usec_per_day);
+  rb_gc_register_mark_object(spg_usec_per_day);
 
   rb_require("ipaddr");
   spg_IPAddr = rb_const_get(rb_cObject, rb_intern("IPAddr"));
-  rb_global_variable(&spg_IPAddr);
+  rb_gc_register_mark_object(spg_IPAddr);
   spg_use_ipaddr_alloc = RTEST(rb_eval_string("IPAddr.new.instance_variables.sort == [:@addr, :@family, :@mask_addr]"));
   spg_vmasks4 = rb_eval_string("a = [0]*33; a[0] = 0; a[32] = 0xffffffff; 31.downto(1){|i| a[i] = a[i+1] - (1 << (31 - i))}; a.freeze");
-  rb_global_variable(&spg_vmasks4);
+  rb_gc_register_mark_object(spg_vmasks4);
   spg_vmasks6 = rb_eval_string("a = [0]*129; a[0] = 0; a[128] = 0xffffffffffffffffffffffffffffffff; 127.downto(1){|i| a[i] = a[i+1] - (1 << (127 - i))}; a.freeze");
-  rb_global_variable(&spg_vmasks6);
+  rb_gc_register_mark_object(spg_vmasks6);
 
   c = rb_const_get(spg_Postgres, rb_intern("Dataset"));
   rb_undef_method(c, "yield_hash_rows");
@@ -1883,5 +2010,5 @@ void Init_sequel_pg(void) {
 
   rb_require("sequel/extensions/pg_array");
   spg_PGArray = rb_const_get(spg_Postgres, rb_intern("PGArray"));
-  rb_global_variable(&spg_PGArray);
+  rb_gc_register_mark_object(spg_PGArray);
 }
@@ -1,9 +1,11 @@
+# :nocov:
 unless Sequel::Postgres.respond_to?(:supports_streaming?)
   raise LoadError, "either sequel_pg not loaded, or an old version of sequel_pg loaded"
 end
 unless Sequel::Postgres.supports_streaming?
   raise LoadError, "streaming is not supported by the version of libpq in use"
 end
+# :nocov:
 
 # Database methods necessary to support streaming. You should load this extension
 # into your database object:
@@ -73,12 +75,20 @@ module Sequel::Postgres::Streaming
 
   private
 
+  # :nocov:
+  unless Sequel::Postgres::Adapter.method_defined?(:send_query_params)
+    def send_query_params(*args)
+      send_query(*args)
+    end
+  end
+  # :nocov:
+
   if Sequel::Database.instance_methods.map(&:to_s).include?('log_connection_yield')
     # If using single row mode, send the query instead of executing it.
     def execute_query(sql, args)
       if @single_row_mode
         @single_row_mode = false
-        @db.log_connection_yield(sql, self, args){args ? send_query(sql, args) : send_query(sql)}
+        @db.log_connection_yield(sql, self, args){args ? send_query_params(sql, args) : send_query(sql)}
         set_single_row_mode
         block
         self
@@ -87,6 +97,7 @@ module Sequel::Postgres::Streaming
       end
     end
   else
+    # :nocov:
     def execute_query(sql, args)
       if @single_row_mode
         @single_row_mode = false
@@ -98,6 +109,7 @@ module Sequel::Postgres::Streaming
         super
       end
     end
+    # :nocov:
   end
 end
 
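The `send_query_params` shim above follows a common compatibility pattern: define a fallback only when the adapter class lacks the method, so pg versions that ship a native `send_query_params` keep theirs. A self-contained sketch of the pattern (the `DemoAdapter` class is invented for illustration):

```ruby
# Only define the fallback when the method is absent, so a native
# implementation, if present, is never shadowed.
class DemoAdapter
  def send_query(sql, *args)
    "send_query: #{sql}"
  end

  unless method_defined?(:send_query_params)
    def send_query_params(*args)
      send_query(*args)
    end
  end
end
```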
@@ -119,7 +131,15 @@ module Sequel::Postgres::Streaming
 
     # Use streaming to implement paging.
     def paged_each(opts=Sequel::OPTS, &block)
-      stream.each(&block)
+      unless block_given?
+        return enum_for(:paged_each, opts)
+      end
+
+      if stream_results?
+        each(&block)
+      else
+        super
+      end
     end
 
     # Return a clone of the dataset that will use streaming to load
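The reworked `paged_each` adopts the standard `block_given?` / `enum_for` idiom: without a block it returns an `Enumerator` over the same arguments, matching core Sequel's behavior. The idiom on a made-up collection class:

```ruby
# Calling paged_each without a block yields an Enumerator that
# re-invokes paged_each with the same opts when iterated.
class DemoPager
  def initialize(rows)
    @rows = rows
  end

  def paged_each(opts = {}, &block)
    return enum_for(:paged_each, opts) unless block_given?
    @rows.each(&block)
  end
end
```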
@@ -53,11 +53,13 @@ class Sequel::Postgres::Dataset
     end
   end
 
+  # :nocov:
   unless Sequel::Dataset.method_defined?(:as_hash)
     # Handle previous versions of Sequel that use to_hash instead of as_hash
     alias to_hash as_hash
     remove_method :as_hash
   end
+  # :nocov:
 
   # In the case where both arguments given, use an optimized version.
   def to_hash_groups(key_column, value_column = nil, opts = Sequel::OPTS)
@@ -74,9 +76,9 @@ class Sequel::Postgres::Dataset
   if defined?(Sequel::Model::ClassMethods)
     # If model loads are being optimized and this is a model load, use the optimized
     # version.
-    def each
+    def each(&block)
       if optimize_model_load?
-        clone(:_sequel_pg_type=>:model, :_sequel_pg_value=>row_proc).fetch_rows(sql, &Proc.new)
+        clone(:_sequel_pg_type=>:model, :_sequel_pg_value=>row_proc).fetch_rows(sql, &block)
       else
         super
       end
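The `each` change above replaces `&Proc.new` (which, called with no arguments, implicitly captured the caller's block — a form deprecated in Ruby 2.7) with an explicit `&block` parameter. The surviving form, shown on a trivial stand-in method (`each_doubled` is invented for illustration):

```ruby
# Explicit block capture: the block arrives as a Proc in `block`
# and is forwarded with block.call, with no implicit Proc.new.
def each_doubled(values, &block)
  values.each { |v| block.call(v * 2) }
end
```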
@@ -120,6 +122,11 @@ if defined?(Sequel::Postgres::PGArray)
   # pg_array extension previously loaded
 
   class Sequel::Postgres::PGArray::Creator
+    # :nocov:
+    # Avoid method redefined verbose warning
+    alias call call if method_defined?(:call)
+    # :nocov:
+
     # Override Creator to use sequel_pg's C-based parser instead of the pure ruby parser.
     def call(string)
       Sequel::Postgres::PGArray.new(Sequel::Postgres.parse_pg_array(string, @converter), @type)
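The `alias call call if method_defined?(:call)` line added above is a small trick: re-aliasing a method to itself just before redefining it suppresses Ruby's "method redefined" warning under `-w`, while the new definition still wins. A sketch of the trick on an invented class:

```ruby
# Re-alias before redefining to keep verbose mode quiet; the
# second definition replaces the first.
class DemoCreator
  def call(string)
    "pure-ruby: #{string}"
  end

  alias call call if method_defined?(:call)

  def call(string)
    "optimized: #{string}"
  end
end
```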
metadata CHANGED
@@ -1,14 +1,13 @@
 --- !ruby/object:Gem::Specification
 name: sequel_pg
 version: !ruby/object:Gem::Version
-  version: 1.11.0
+  version: 1.17.2
 platform: ruby
 authors:
 - Jeremy Evans
-autorequire:
 bindir: bin
 cert_chain: []
-date: 2018-07-09 00:00:00.000000000 Z
+date: 2025-03-14 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: pg
@@ -17,6 +16,9 @@ dependencies:
     - - ">="
       - !ruby/object:Gem::Version
         version: 0.18.0
+    - - "!="
+      - !ruby/object:Gem::Version
+        version: 1.2.0
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
@@ -24,6 +26,9 @@ dependencies:
     - - ">="
       - !ruby/object:Gem::Version
         version: 0.18.0
+    - - "!="
+      - !ruby/object:Gem::Version
+        version: 1.2.0
 - !ruby/object:Gem::Dependency
   name: sequel
   requirement: !ruby/object:Gem::Requirement
@@ -67,8 +72,12 @@ files:
 homepage: http://github.com/jeremyevans/sequel_pg
 licenses:
 - MIT
-metadata: {}
-post_install_message:
+metadata:
+  bug_tracker_uri: https://github.com/jeremyevans/sequel_pg/issues
+  changelog_uri: https://github.com/jeremyevans/sequel_pg/blob/master/CHANGELOG
+  documentation_uri: https://github.com/jeremyevans/sequel_pg/blob/master/README.rdoc
+  mailing_list_uri: https://github.com/jeremyevans/sequel_pg/discussions
+  source_code_uri: https://github.com/jeremyevans/sequel_pg
 rdoc_options:
 - "--quiet"
 - "--line-numbers"
@@ -90,9 +99,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubyforge_project:
-rubygems_version: 2.7.6
-signing_key:
+rubygems_version: 3.6.2
 specification_version: 4
 summary: Faster SELECTs when using Sequel with pg
 test_files: []