sequel_pg 1.13.0 → 1.16.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 9ec2808e8924aa1b5ab101973ff6d3505f8199e21c6e919bfeab3bca809ac61c
4
- data.tar.gz: 41f30cc22ea16c6700cab68e2d7fbdd3fca950a975c004ffa12d21423608f198
3
+ metadata.gz: e7bcc241985851b36df5d4e199e1edc08626fa8dcd23065aefd13b4cafd135a9
4
+ data.tar.gz: 8816059db9824f1396cb9d734c1b12f1c4ccb38c13ebc4a9c953a78cfff90fdd
5
5
  SHA512:
6
- metadata.gz: 904f2d4068265c5df0c584fd0da9bffc2521d1f048e55294c8cebba96232e2900a1424f2473dae222770cd2c79c50196d6c3ae424ac1ef9f772eb785e7f40453
7
- data.tar.gz: a0d3748e9b150ecb55e53cb9ccd423f507f12e196a1686f60ba2804f8f4319303844dd828b21229fbf7871531256f8da39eafcaefe229f90da62b795415da744
6
+ metadata.gz: 2220c53c58d8bd549f66849529584dcf836495f0f55dc5819bd01841131e213f24cdcda089ae63a9703ae08c6c5ae5514160b1ce1f181e1277d099765d2f683b
7
+ data.tar.gz: 6ac01c39dddb568d29100e9bcd3a326d8bcc1b0cd4a54a4c507793fd0817d0709bfabbe6252873c7b422e8ed68f8a6b6d8f77f13b69d1012965b314611734a9c
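These digests can be checked locally after downloading the release; a minimal sketch using Ruby's stdlib Digest (the file paths are hypothetical — metadata.gz and data.tar.gz live inside the downloaded .gem archive):

```ruby
require "digest"
require "tempfile"

# Compute the SHA256/SHA512 hexdigests recorded in checksums.yaml
# for one component file of a .gem archive.
def gem_part_checksums(path)
  {
    "SHA256" => Digest::SHA256.file(path).hexdigest,
    "SHA512" => Digest::SHA512.file(path).hexdigest,
  }
end

# Demonstrated on a throwaway file; for a real verification, point it
# at the metadata.gz / data.tar.gz extracted from the .gem tarball.
Tempfile.create("metadata.gz") do |f|
  f.write("example contents")
  f.flush
  puts gem_part_checksums(f.path)["SHA256"]
end
```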
data/CHANGELOG CHANGED
@@ -1,3 +1,27 @@
1
+ === 1.16.0 (2022-08-16)
2
+
3
+ * Fix memory leak when using streaming with 1.3.4+ (jeremyevans) (#48)
4
+
5
+ * Modify LDFLAGS when building on MacOS arm64 to allow undefined functions (maxsz) (#46)
6
+
7
+ * Adjust many internal C types to fix compilation warnings (jeremyevans)
8
+
9
+ === 1.15.0 (2022-03-16)
10
+
11
+ * Avoid deprecation warning in the pg_streaming extension on pg 1.3+ when streaming a query with bound parameters (jeremyevans)
12
+
13
+ * Use pgresult_stream_any when using pg 1.3.4+ for faster streaming (jeremyevans)
14
+
15
+ * Do not use streaming by default for Dataset#paged_each in the pg_streaming extension (jeremyevans)
16
+
17
+ * Avoid verbose warning if loading sequel_pg after Sequel pg_array extension (jeremyevans)
18
+
19
+ === 1.14.0 (2020-09-22)
20
+
21
+ * Reduce stack memory usage for result sets with 64 or fewer columns (jeremyevans)
22
+
23
+ * Support result sets with more than 256 columns by default (jeremyevans) (#39)
24
+
1
25
  === 1.13.0 (2020-04-13)
2
26
 
3
27
  * Allow overriding of inet/cidr type conversion using conversion procs (beanieboi, jeremyevans) (#36, #37)
data/README.rdoc CHANGED
@@ -70,12 +70,6 @@ can do the following:
70
70
 
71
71
  gem install sequel_pg
72
72
 
73
- Note that by default sequel_pg only supports result sets with up to
74
- 256 columns. If you will have a result set with more than 256 columns,
75
- you should modify the maximum supported number of columns via:
76
-
77
- gem install sequel_pg -- --with-cflags=\"-DSPG_MAX_FIELDS=512\"
78
-
79
73
  Make sure the pg_config binary is in your PATH so the installation
80
74
  can find the PostgreSQL shared library and header files. Alternatively,
81
75
  you can use the POSTGRES_LIB and POSTGRES_INCLUDE environment
@@ -83,9 +77,16 @@ variables to specify the shared library and header directories.
83
77
 
84
78
  == Running the specs
85
79
 
86
- sequel_pg doesn't ship with it's own specs. It's designed to
87
- replace a part of Sequel, so it just uses Sequel's specs.
88
- Specifically, the spec_postgres rake task from Sequel.
80
+ sequel_pg is designed to replace a part of Sequel, so it should be tested
81
+ using Sequel's specs (the spec_postgres rake task). There is a spec_cov
82
+ task that assumes you have Sequel checked out at ../sequel, and uses a
83
+ small spec suite for parts of sequel_pg not covered by Sequel's specs.
84
+ It sets the SEQUEL_PG_STREAM environment variable when running Sequel's
85
+ specs; make sure that spec/spec_config.rb in Sequel is set to connect
86
+ to PostgreSQL and use the following additional settings:
87
+
88
+ DB.extension(:pg_streaming)
89
+ DB.stream_all_queries = true
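A minimal sketch of what the corresponding ../sequel/spec/spec_config.rb could look like (the connection URL is an assumption; only the last two lines are prescribed above):

```ruby
# Hypothetical spec/spec_config.rb in the Sequel checkout, connecting
# the spec suite to PostgreSQL with streaming enabled for sequel_pg.
require "sequel"

DB = Sequel.connect(ENV["SEQUEL_POSTGRES_URL"] || "postgres://localhost/sequel_test")

DB.extension(:pg_streaming)
DB.stream_all_queries = true
```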
89
90
 
90
91
  == Reporting issues/bugs
91
92
 
@@ -118,19 +119,6 @@ requirements:
118
119
 
119
120
  rake build
120
121
 
121
- == Platforms Supported
122
-
123
- sequel_pg has been tested on the following:
124
-
125
- * ruby 1.9.3
126
- * ruby 2.0
127
- * ruby 2.1
128
- * ruby 2.2
129
- * ruby 2.3
130
- * ruby 2.4
131
- * ruby 2.5
132
- * ruby 2.6
133
-
134
122
  == Known Issues
135
123
 
136
124
  * You must be using the ISO PostgreSQL date format (which is the
data/Rakefile CHANGED
@@ -1,7 +1,6 @@
1
- require "rake"
2
1
  require "rake/clean"
3
2
 
4
- CLEAN.include %w'**.rbc rdoc'
3
+ CLEAN.include %w'**.rbc rdoc coverage'
5
4
 
6
5
  desc "Do a full cleaning"
7
6
  task :distclean do
@@ -19,3 +18,25 @@ begin
19
18
  Rake::ExtensionTask.new('sequel_pg')
20
19
  rescue LoadError
21
20
  end
21
+
22
+ # This assumes you have sequel checked out in ../sequel, and that
23
+ # spec_postgres is set up to run Sequel's PostgreSQL specs.
24
+ desc "Run tests with coverage"
25
+ task :spec_cov=>:compile do
26
+ ENV['RUBYLIB'] = "#{__dir__}/lib:#{ENV['RUBYLIB']}"
27
+ ENV['RUBYOPT'] = "-r #{__dir__}/spec/coverage_helper.rb #{ENV['RUBYOPT']}"
28
+ ENV['SIMPLECOV_COMMAND_NAME'] = "sequel_pg"
29
+ sh %'#{FileUtils::RUBY} -I ../sequel/lib spec/sequel_pg_spec.rb'
30
+
31
+ ENV['RUBYOPT'] = "-I lib -r sequel -r sequel/extensions/pg_array #{ENV['RUBYOPT']}"
32
+ ENV['SEQUEL_PG_STREAM'] = "1"
33
+ ENV['SIMPLECOV_COMMAND_NAME'] = "sequel"
34
+ sh %'cd ../sequel && #{FileUtils::RUBY} spec/adapter_spec.rb postgres'
35
+ end
36
+
37
+ desc "Run CI tests"
38
+ task :spec_ci=>:compile do
39
+ ENV['SEQUEL_PG_SPEC_URL'] = ENV['SEQUEL_POSTGRES_URL'] = "postgres://localhost/?user=postgres&password=postgres"
40
+ sh %'#{FileUtils::RUBY} -I lib -I sequel/lib spec/sequel_pg_spec.rb'
41
+ sh %'cd sequel && #{FileUtils::RUBY} -I lib -I ../lib spec/adapter_spec.rb postgres'
42
+ end
@@ -2,6 +2,7 @@ require 'mkmf'
2
2
  $CFLAGS << " -O0 -g" if ENV['DEBUG']
3
3
  $CFLAGS << " -Drb_tainted_str_new=rb_str_new -DNO_TAINT" if RUBY_VERSION >= '2.7'
4
4
  $CFLAGS << " -Wall " unless RUBY_PLATFORM =~ /solaris/
5
+ $LDFLAGS += " -Wl,-U,_pg_get_pgconn -Wl,-U,_pg_get_result_enc_idx -Wl,-U,_pgresult_get -Wl,-U,_pgresult_stream_any " if RUBY_PLATFORM =~ /arm64-darwin/
5
6
  dir_config('pg', ENV["POSTGRES_INCLUDE"] || (IO.popen("pg_config --includedir").readline.chomp rescue nil),
6
7
  ENV["POSTGRES_LIB"] || (IO.popen("pg_config --libdir").readline.chomp rescue nil))
7
8
 
@@ -1,4 +1,4 @@
1
- #define SEQUEL_PG_VERSION_INTEGER 11300
1
+ #define SEQUEL_PG_VERSION_INTEGER 11600
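The bump from 11300 to 11600 suggests the integer encodes the version as major*10000 + minor*100 + tiny (an inference from the two values shown, not documented here):

```ruby
# Hypothetical helper mirroring the apparent SEQUEL_PG_VERSION_INTEGER
# encoding: 1.13.0 -> 11300, 1.16.0 -> 11600.
def spg_version_integer(version)
  major, minor, tiny = version.split(".").map(&:to_i)
  major * 10_000 + minor * 100 + tiny
end

puts spg_version_integer("1.16.0") # => 11600
```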
2
2
 
3
3
  #include <string.h>
4
4
  #include <stdio.h>
@@ -15,9 +15,6 @@
15
15
  #include <ruby/version.h>
16
16
  #include <ruby/encoding.h>
17
17
 
18
- #ifndef SPG_MAX_FIELDS
19
- #define SPG_MAX_FIELDS 256
20
- #endif
21
18
  #define SPG_MINUTES_PER_DAY 1440.0
22
19
  #define SPG_SECONDS_PER_DAY 86400.0
23
20
 
@@ -73,9 +70,11 @@
73
70
  PGconn* pg_get_pgconn(VALUE);
74
71
  PGresult* pgresult_get(VALUE);
75
72
  int pg_get_result_enc_idx(VALUE);
73
+ VALUE pgresult_stream_any(VALUE self, void (*yielder)(VALUE, int, int, void*), void* data);
76
74
 
77
75
  static int spg_use_ipaddr_alloc;
78
76
  static int spg_use_pg_get_result_enc_idx;
77
+ static int spg_use_pg_stream_any;
79
78
 
80
79
  static VALUE spg_Sequel;
81
80
  static VALUE spg_PGArray;
@@ -200,10 +199,10 @@ static int enc_get_index(VALUE val) {
200
199
  } while(0)
201
200
 
202
201
  static VALUE
203
- pg_text_dec_integer(char *val, int len)
202
+ pg_text_dec_integer(char *val, size_t len)
204
203
  {
205
204
  long i;
206
- int max_len;
205
+ size_t max_len;
207
206
 
208
207
  if( sizeof(i) >= 8 && FIXNUM_MAX >= 1000000000000000000LL ){
209
208
  /* 64 bit system can safely handle all numbers up to 18 digits as Fixnum */
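The 18-digit cutoff in the comment above can be sanity-checked: the largest 18-digit value fits under the Fixnum ceiling on 64-bit builds (2**62 - 1), while some 19-digit values do not.

```ruby
# On 64-bit Ruby, immediate integers (Fixnums) top out at 2**62 - 1.
fixnum_max = 2**62 - 1

largest_18_digit = 10**18 - 1   # 999_999_999_999_999_999
largest_19_digit = 10**19 - 1

puts largest_18_digit <= fixnum_max  # => true: every 18-digit value is a Fixnum
puts largest_19_digit <= fixnum_max  # => false: 19-digit values can overflow
```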
@@ -258,7 +257,7 @@ pg_text_dec_integer(char *val, int len)
258
257
 
259
258
  static VALUE spg__array_col_value(char *v, size_t length, VALUE converter, int enc_index, int oid, VALUE db);
260
259
 
261
- static VALUE read_array(int *index, char *c_pg_array_string, int array_string_length, VALUE buf, VALUE converter, int enc_index, int oid, VALUE db) {
260
+ static VALUE read_array(int *index, char *c_pg_array_string, long array_string_length, VALUE buf, VALUE converter, int enc_index, int oid, VALUE db) {
262
261
  int word_index = 0;
263
262
  char *word = RSTRING_PTR(buf);
264
263
 
@@ -354,7 +353,7 @@ static VALUE read_array(int *index, char *c_pg_array_string, int array_string_le
354
353
  return array;
355
354
  }
356
355
 
357
- static VALUE check_pg_array(int* index, char *c_pg_array_string, int array_string_length) {
356
+ static VALUE check_pg_array(int* index, char *c_pg_array_string, long array_string_length) {
358
357
  if (array_string_length == 0) {
359
358
  rb_raise(rb_eArgError, "unexpected PostgreSQL array format, empty");
360
359
  } else if (array_string_length == 2 && c_pg_array_string[0] == '{' && c_pg_array_string[1] == '}') {
@@ -385,7 +384,7 @@ static VALUE parse_pg_array(VALUE self, VALUE pg_array_string, VALUE converter)
385
384
  /* convert to c-string, create additional ruby string buffer of
386
385
  * the same length, as that will be the worst case. */
387
386
  char *c_pg_array_string = StringValueCStr(pg_array_string);
388
- int array_string_length = RSTRING_LEN(pg_array_string);
387
+ long array_string_length = RSTRING_LEN(pg_array_string);
389
388
  int index = 1;
390
389
  VALUE ary;
391
390
 
@@ -535,7 +534,7 @@ static VALUE spg_timestamp(const char *s, VALUE self, size_t length, int tz) {
535
534
  }
536
535
 
537
536
  if (remaining < 19) {
538
- return spg_timestamp_error(s, self, "unexpected timetamp format, too short");
537
+ return spg_timestamp_error(s, self, "unexpected timestamp format, too short");
539
538
  }
540
539
 
541
540
  year = parse_year(&p, &remaining);
@@ -1013,12 +1012,12 @@ static int spg_timestamp_info_bitmask(VALUE self) {
1013
1012
  return tz;
1014
1013
  }
1015
1014
 
1016
- static VALUE spg__col_value(VALUE self, PGresult *res, long i, long j, VALUE* colconvert, int enc_index) {
1015
+ static VALUE spg__col_value(VALUE self, PGresult *res, int i, int j, VALUE* colconvert, int enc_index) {
1017
1016
  char *v;
1018
1017
  VALUE rv;
1019
1018
  int ftype = PQftype(res, j);
1020
1019
  VALUE array_type;
1021
- VALUE scalar_oid;
1020
+ int scalar_oid;
1022
1021
  struct spg_blob_initialization bi;
1023
1022
 
1024
1023
  if(PQgetisnull(res, i, j)) {
@@ -1252,20 +1251,20 @@ static VALUE spg__col_value(VALUE self, PGresult *res, long i, long j, VALUE* co
1252
1251
  return rv;
1253
1252
  }
1254
1253
 
1255
- static VALUE spg__col_values(VALUE self, VALUE v, VALUE *colsyms, long nfields, PGresult *res, long i, VALUE *colconvert, int enc_index) {
1254
+ static VALUE spg__col_values(VALUE self, VALUE v, VALUE *colsyms, long nfields, PGresult *res, int i, VALUE *colconvert, int enc_index) {
1256
1255
  long j;
1257
1256
  VALUE cur;
1258
1257
  long len = RARRAY_LEN(v);
1259
1258
  VALUE a = rb_ary_new2(len);
1260
1259
  for (j=0; j<len; j++) {
1261
1260
  cur = rb_ary_entry(v, j);
1262
- rb_ary_store(a, j, cur == Qnil ? Qnil : spg__col_value(self, res, i, NUM2LONG(cur), colconvert, enc_index));
1261
+ rb_ary_store(a, j, cur == Qnil ? Qnil : spg__col_value(self, res, i, NUM2INT(cur), colconvert, enc_index));
1263
1262
  }
1264
1263
  return a;
1265
1264
  }
1266
1265
 
1267
- static long spg__field_id(VALUE v, VALUE *colsyms, long nfields) {
1268
- long j;
1266
+ static int spg__field_id(VALUE v, VALUE *colsyms, long nfields) {
1267
+ int j;
1269
1268
  for (j=0; j<nfields; j++) {
1270
1269
  if (colsyms[j] == v) {
1271
1270
  return j;
@@ -1276,7 +1275,7 @@ static long spg__field_id(VALUE v, VALUE *colsyms, long nfields) {
1276
1275
 
1277
1276
  static VALUE spg__field_ids(VALUE v, VALUE *colsyms, long nfields) {
1278
1277
  long i;
1279
- long j;
1278
+ int j;
1280
1279
  VALUE cur;
1281
1280
  long len = RARRAY_LEN(v);
1282
1281
  VALUE pg_columns = rb_ary_new2(len);
@@ -1289,9 +1288,9 @@ static VALUE spg__field_ids(VALUE v, VALUE *colsyms, long nfields) {
1289
1288
  }
1290
1289
 
1291
1290
  static void spg_set_column_info(VALUE self, PGresult *res, VALUE *colsyms, VALUE *colconvert, int enc_index) {
1292
- long i;
1293
- long j;
1294
- long nfields;
1291
+ int i;
1292
+ int j;
1293
+ int nfields;
1295
1294
  int timestamp_info = 0;
1296
1295
  int time_info = 0;
1297
1296
  VALUE conv_procs = 0;
@@ -1380,34 +1379,19 @@ static void spg_set_column_info(VALUE self, PGresult *res, VALUE *colsyms, VALUE
1380
1379
  rb_funcall(self, spg_id_columns_equal, 1, rb_ary_new4(nfields, colsyms));
1381
1380
  }
1382
1381
 
1383
- static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
1384
- PGresult *res;
1385
- VALUE colsyms[SPG_MAX_FIELDS];
1386
- VALUE colconvert[SPG_MAX_FIELDS];
1387
- long ntuples;
1388
- long nfields;
1389
- long i;
1390
- long j;
1382
+ static VALUE spg_yield_hash_rows_internal(VALUE self, PGresult *res, int enc_index, VALUE* colsyms, VALUE* colconvert) {
1383
+ int ntuples;
1384
+ int nfields;
1385
+ int i;
1386
+ int j;
1391
1387
  VALUE h;
1392
1388
  VALUE opts;
1393
1389
  VALUE pg_type;
1394
1390
  VALUE pg_value;
1395
1391
  char type = SPG_YIELD_NORMAL;
1396
- int enc_index;
1397
-
1398
- if (!RTEST(rres)) {
1399
- return self;
1400
- }
1401
- res = pgresult_get(rres);
1402
-
1403
- enc_index = spg_use_pg_get_result_enc_idx ? pg_get_result_enc_idx(rres) : enc_get_index(rres);
1404
1392
 
1405
1393
  ntuples = PQntuples(res);
1406
1394
  nfields = PQnfields(res);
1407
- if (nfields > SPG_MAX_FIELDS) {
1408
- rb_raise(rb_eRangeError, "more than %d columns in query (%ld columns detected)", SPG_MAX_FIELDS, nfields);
1409
- }
1410
-
1411
1395
  spg_set_column_info(self, res, colsyms, colconvert, enc_index);
1412
1396
 
1413
1397
  opts = rb_funcall(self, spg_id_opts, 0);
@@ -1499,7 +1483,7 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
1499
1483
  case SPG_YIELD_KV_HASH_GROUPS:
1500
1484
  /* Hash with single key and single value */
1501
1485
  {
1502
- VALUE k, v;
1486
+ int k, v;
1503
1487
  h = rb_hash_new();
1504
1488
  k = spg__field_id(rb_ary_entry(pg_value, 0), colsyms, nfields);
1505
1489
  v = spg__field_id(rb_ary_entry(pg_value, 1), colsyms, nfields);
@@ -1527,7 +1511,8 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
1527
1511
  case SPG_YIELD_MKV_HASH_GROUPS:
1528
1512
  /* Hash with array of keys and single value */
1529
1513
  {
1530
- VALUE k, v;
1514
+ VALUE k;
1515
+ int v;
1531
1516
  h = rb_hash_new();
1532
1517
  k = spg__field_ids(rb_ary_entry(pg_value, 0), colsyms, nfields);
1533
1518
  v = spg__field_id(rb_ary_entry(pg_value, 1), colsyms, nfields);
@@ -1555,7 +1540,8 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
1555
1540
  case SPG_YIELD_KMV_HASH_GROUPS:
1556
1541
  /* Hash with single keys and array of values */
1557
1542
  {
1558
- VALUE k, v;
1543
+ VALUE v;
1544
+ int k;
1559
1545
  h = rb_hash_new();
1560
1546
  k = spg__field_id(rb_ary_entry(pg_value, 0), colsyms, nfields);
1561
1547
  v = spg__field_ids(rb_ary_entry(pg_value, 1), colsyms, nfields);
@@ -1624,6 +1610,40 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
1624
1610
  return self;
1625
1611
  }
1626
1612
 
1613
+ #define def_spg_yield_hash_rows(max_fields) static VALUE spg_yield_hash_rows_ ## max_fields(VALUE self, PGresult *res, int enc_index) { \
1614
+ VALUE colsyms[max_fields]; \
1615
+ VALUE colconvert[max_fields]; \
1616
+ return spg_yield_hash_rows_internal(self, res, enc_index, colsyms, colconvert); \
1617
+ }
1618
+
1619
+ def_spg_yield_hash_rows(16)
1620
+ def_spg_yield_hash_rows(64)
1621
+ def_spg_yield_hash_rows(256)
1622
+ def_spg_yield_hash_rows(1664)
1623
+
1624
+ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
1625
+ PGresult *res;
1626
+ int nfields;
1627
+ int enc_index;
1628
+
1629
+ if (!RTEST(rres)) {
1630
+ return self;
1631
+ }
1632
+ res = pgresult_get(rres);
1633
+
1634
+ enc_index = spg_use_pg_get_result_enc_idx ? pg_get_result_enc_idx(rres) : enc_get_index(rres);
1635
+
1636
+ nfields = PQnfields(res);
1637
+ if (nfields <= 16) return spg_yield_hash_rows_16(self, res, enc_index);
1638
+ else if (nfields <= 64) return spg_yield_hash_rows_64(self, res, enc_index);
1639
+ else if (nfields <= 256) return spg_yield_hash_rows_256(self, res, enc_index);
1640
+ else if (nfields <= 1664) return spg_yield_hash_rows_1664(self, res, enc_index);
1641
+ else rb_raise(rb_eRangeError, "more than 1664 columns in query (%d columns detected)", nfields);
1642
+
1643
+ /* UNREACHABLE */
1644
+ return self;
1645
+ }
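The new dispatch replaces the single SPG_MAX_FIELDS=256 stack buffer with four generated variants, picking the smallest that fits the column count; the threshold logic reduces to the following (a Ruby sketch, bucket sizes taken from the C above; 1664 is PostgreSQL's per-tuple attribute limit):

```ruby
# Mirror of the nfields -> stack-buffer-size selection in spg_yield_hash_rows.
def spg_buffer_size(nfields)
  [16, 64, 256, 1664].find { |max| nfields <= max } ||
    raise(RangeError, "more than 1664 columns in query (#{nfields} columns detected)")
end

puts spg_buffer_size(40)    # => 64
puts spg_buffer_size(300)   # => 1664
```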
1646
+
1627
1647
  static VALUE spg_supports_streaming_p(VALUE self) {
1628
1648
  return
1629
1649
  #if HAVE_PQSETSINGLEROWMODE
@@ -1643,32 +1663,50 @@ static VALUE spg_set_single_row_mode(VALUE self) {
1643
1663
  return Qnil;
1644
1664
  }
1645
1665
 
1646
- static VALUE spg__yield_each_row(VALUE self) {
1647
- PGresult *res;
1648
- VALUE rres;
1649
- VALUE rconn;
1650
- VALUE colsyms[SPG_MAX_FIELDS];
1651
- VALUE colconvert[SPG_MAX_FIELDS];
1652
- long nfields;
1653
- long j;
1666
+ struct spg__yield_each_row_stream_data {
1667
+ VALUE self;
1668
+ VALUE *colsyms;
1669
+ VALUE *colconvert;
1670
+ VALUE pg_value;
1671
+ int enc_index;
1672
+ char type;
1673
+ };
1674
+
1675
+ static void spg__yield_each_row_stream(VALUE rres, int ntuples, int nfields, void *rdata) {
1676
+ struct spg__yield_each_row_stream_data* data = (struct spg__yield_each_row_stream_data *)rdata;
1677
+ VALUE h = rb_hash_new();
1678
+ VALUE self = data->self;
1679
+ VALUE *colsyms = data->colsyms;
1680
+ VALUE *colconvert= data->colconvert;
1681
+ PGresult *res = pgresult_get(rres);
1682
+ int enc_index = data->enc_index;
1683
+ int j;
1684
+
1685
+ for(j=0; j<nfields; j++) {
1686
+ rb_hash_aset(h, colsyms[j], spg__col_value(self, res, 0, j, colconvert , enc_index));
1687
+ }
1688
+
1689
+ if(data->type == SPG_YIELD_MODEL) {
1690
+ VALUE model = rb_obj_alloc(data->pg_value);
1691
+ rb_ivar_set(model, spg_id_values, h);
1692
+ rb_yield(model);
1693
+ } else {
1694
+ rb_yield(h);
1695
+ }
1696
+ PQclear(res);
1697
+ }
1698
+
1699
+ static VALUE spg__yield_each_row_internal(VALUE self, VALUE rconn, VALUE rres, PGresult *res, int enc_index, VALUE *colsyms, VALUE *colconvert) {
1700
+ int nfields;
1701
+ int j;
1654
1702
  VALUE h;
1655
1703
  VALUE opts;
1656
1704
  VALUE pg_type;
1657
1705
  VALUE pg_value = Qnil;
1658
1706
  char type = SPG_YIELD_NORMAL;
1659
- int enc_index;
1660
-
1661
- rconn = rb_ary_entry(self, 1);
1662
- self = rb_ary_entry(self, 0);
1663
-
1664
- rres = rb_funcall(rconn, spg_id_get_result, 0);
1665
- if (rres == Qnil) {
1666
- goto end_yield_each_row;
1667
- }
1668
- rb_funcall(rres, spg_id_check, 0);
1669
- res = pgresult_get(rres);
1707
+ struct spg__yield_each_row_stream_data data;
1670
1708
 
1671
- enc_index = spg_use_pg_get_result_enc_idx ? pg_get_result_enc_idx(rres) : enc_get_index(rres);
1709
+ nfields = PQnfields(res);
1672
1710
 
1673
1711
  /* Only handle regular and model types. All other types require compiling all
1674
1712
  * of the results at once, which is not a use case for streaming. The streaming
@@ -1682,14 +1720,20 @@ static VALUE spg__yield_each_row(VALUE self) {
1682
1720
  }
1683
1721
  }
1684
1722
 
1685
- nfields = PQnfields(res);
1686
- if (nfields > SPG_MAX_FIELDS) {
1687
- rb_funcall(rres, spg_id_clear, 0);
1688
- rb_raise(rb_eRangeError, "more than %d columns in query", SPG_MAX_FIELDS);
1689
- }
1690
-
1691
1723
  spg_set_column_info(self, res, colsyms, colconvert, enc_index);
1692
1724
 
1725
+ if (spg_use_pg_stream_any) {
1726
+ data.self = self;
1727
+ data.colsyms = colsyms;
1728
+ data.colconvert = colconvert;
1729
+ data.pg_value = pg_value;
1730
+ data.enc_index = enc_index;
1731
+ data.type = type;
1732
+
1733
+ pgresult_stream_any(rres, spg__yield_each_row_stream, &data);
1734
+ return self;
1735
+ }
1736
+
1693
1737
  while (PQntuples(res) != 0) {
1694
1738
  h = rb_hash_new();
1695
1739
  for(j=0; j<nfields; j++) {
@@ -1709,14 +1753,57 @@ static VALUE spg__yield_each_row(VALUE self) {
1709
1753
 
1710
1754
  rres = rb_funcall(rconn, spg_id_get_result, 0);
1711
1755
  if (rres == Qnil) {
1712
- goto end_yield_each_row;
1756
+ return self;
1713
1757
  }
1714
1758
  rb_funcall(rres, spg_id_check, 0);
1715
1759
  res = pgresult_get(rres);
1716
1760
  }
1717
1761
  rb_funcall(rres, spg_id_clear, 0);
1718
1762
 
1719
- end_yield_each_row:
1763
+ return self;
1764
+ }
1765
+
1766
+ #define def_spg__yield_each_row(max_fields) static VALUE spg__yield_each_row_ ## max_fields(VALUE self, VALUE rconn, VALUE rres, PGresult *res, int enc_index) { \
1767
+ VALUE colsyms[max_fields]; \
1768
+ VALUE colconvert[max_fields]; \
1769
+ return spg__yield_each_row_internal(self, rconn, rres, res, enc_index, colsyms, colconvert); \
1770
+ }
1771
+
1772
+ def_spg__yield_each_row(16)
1773
+ def_spg__yield_each_row(64)
1774
+ def_spg__yield_each_row(256)
1775
+ def_spg__yield_each_row(1664)
1776
+
1777
+ static VALUE spg__yield_each_row(VALUE self) {
1778
+ PGresult *res;
1779
+ VALUE rres;
1780
+ VALUE rconn;
1781
+ int enc_index;
1782
+ int nfields;
1783
+
1784
+ rconn = rb_ary_entry(self, 1);
1785
+ self = rb_ary_entry(self, 0);
1786
+
1787
+ rres = rb_funcall(rconn, spg_id_get_result, 0);
1788
+ if (rres == Qnil) {
1789
+ return self;
1790
+ }
1791
+ rb_funcall(rres, spg_id_check, 0);
1792
+ res = pgresult_get(rres);
1793
+
1794
+ enc_index = spg_use_pg_get_result_enc_idx ? pg_get_result_enc_idx(rres) : enc_get_index(rres);
1795
+
1796
+ nfields = PQnfields(res);
1797
+ if (nfields <= 16) return spg__yield_each_row_16(self, rconn, rres, res, enc_index);
1798
+ else if (nfields <= 64) return spg__yield_each_row_64(self, rconn, rres, res, enc_index);
1799
+ else if (nfields <= 256) return spg__yield_each_row_256(self, rconn, rres, res, enc_index);
1800
+ else if (nfields <= 1664) return spg__yield_each_row_1664(self, rconn, rres, res, enc_index);
1801
+ else {
1802
+ rb_funcall(rres, spg_id_clear, 0);
1803
+ rb_raise(rb_eRangeError, "more than 1664 columns in query (%d columns detected)", nfields);
1804
+ }
1805
+
1806
+ /* UNREACHABLE */
1720
1807
  return self;
1721
1808
  }
1722
1809
 
@@ -1772,10 +1859,21 @@ void Init_sequel_pg(void) {
1772
1859
  }
1773
1860
  }
1774
1861
 
1775
- if (RTEST(rb_eval_string("defined?(PG::VERSION) && PG::VERSION.to_f >= 1.2"))) {
1776
- spg_use_pg_get_result_enc_idx = 1;
1862
+ c = rb_eval_string("defined?(PG::VERSION) && PG::VERSION.split('.').map(&:to_i)");
1863
+ if (RB_TYPE_P(c, T_ARRAY) && RARRAY_LEN(c) >= 3) {
1864
+ if (FIX2INT(RARRAY_AREF(c, 0)) > 1) {
1865
+ spg_use_pg_get_result_enc_idx = 1;
1866
+ spg_use_pg_stream_any = 1;
1867
+ } else if (FIX2INT(RARRAY_AREF(c, 0)) == 1) {
1868
+ if (FIX2INT(RARRAY_AREF(c, 1)) >= 2) {
1869
+ spg_use_pg_get_result_enc_idx = 1;
1870
+ }
1871
+ if (FIX2INT(RARRAY_AREF(c, 1)) > 3 || (FIX2INT(RARRAY_AREF(c, 1)) == 3 && FIX2INT(RARRAY_AREF(c, 2)) >= 4)) {
1872
+ spg_use_pg_stream_any = 1;
1873
+ }
1874
+ }
1777
1875
  }
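The nested FIX2INT checks implement a small decision table: pg >= 1.2 enables pg_get_result_enc_idx, pg >= 1.3.4 enables pgresult_stream_any, and any pg 2.x enables both. The same logic in Ruby form:

```ruby
# Feature flags derived from a PG::VERSION string, mirroring Init_sequel_pg.
def spg_feature_flags(pg_version)
  major, minor, tiny = pg_version.split(".").map(&:to_i)
  {
    enc_idx:    major > 1 || (major == 1 && minor >= 2),
    stream_any: major > 1 ||
                (major == 1 && (minor > 3 || (minor == 3 && tiny >= 4)))
  }
end

p spg_feature_flags("1.3.3")
p spg_feature_flags("1.3.4")
```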
1778
-
1876
+
1779
1877
  rb_const_set(spg_Postgres, rb_intern("SEQUEL_PG_VERSION_INTEGER"), INT2FIX(SEQUEL_PG_VERSION_INTEGER));
1780
1878
 
1781
1879
  spg_id_BigDecimal = rb_intern("BigDecimal");
@@ -1,9 +1,11 @@
1
+ # :nocov:
1
2
  unless Sequel::Postgres.respond_to?(:supports_streaming?)
2
3
  raise LoadError, "either sequel_pg not loaded, or an old version of sequel_pg loaded"
3
4
  end
4
5
  unless Sequel::Postgres.supports_streaming?
5
6
  raise LoadError, "streaming is not supported by the version of libpq in use"
6
7
  end
8
+ # :nocov:
7
9
 
8
10
  # Database methods necessary to support streaming. You should load this extension
9
11
  # into your database object:
@@ -73,12 +75,20 @@ module Sequel::Postgres::Streaming
73
75
 
74
76
  private
75
77
 
78
+ # :nocov:
79
+ unless Sequel::Postgres::Adapter.method_defined?(:send_query_params)
80
+ def send_query_params(*args)
81
+ send_query(*args)
82
+ end
83
+ end
84
+ # :nocov:
85
+
76
86
  if Sequel::Database.instance_methods.map(&:to_s).include?('log_connection_yield')
77
87
  # If using single row mode, send the query instead of executing it.
78
88
  def execute_query(sql, args)
79
89
  if @single_row_mode
80
90
  @single_row_mode = false
81
- @db.log_connection_yield(sql, self, args){args ? send_query(sql, args) : send_query(sql)}
91
+ @db.log_connection_yield(sql, self, args){args ? send_query_params(sql, args) : send_query(sql)}
82
92
  set_single_row_mode
83
93
  block
84
94
  self
@@ -87,6 +97,7 @@ module Sequel::Postgres::Streaming
87
97
  end
88
98
  end
89
99
  else
100
+ # :nocov:
90
101
  def execute_query(sql, args)
91
102
  if @single_row_mode
92
103
  @single_row_mode = false
@@ -98,6 +109,7 @@ module Sequel::Postgres::Streaming
98
109
  super
99
110
  end
100
111
  end
112
+ # :nocov:
101
113
  end
102
114
  end
103
115
 
@@ -122,7 +134,12 @@ module Sequel::Postgres::Streaming
122
134
  unless block_given?
123
135
  return enum_for(:paged_each, opts)
124
136
  end
125
- stream.each(&block)
137
+
138
+ if stream_results?
139
+ each(&block)
140
+ else
141
+ super
142
+ end
126
143
  end
127
144
 
128
145
  # Return a clone of the dataset that will use streaming to load
@@ -53,11 +53,13 @@ class Sequel::Postgres::Dataset
53
53
  end
54
54
  end
55
55
 
56
+ # :nocov:
56
57
  unless Sequel::Dataset.method_defined?(:as_hash)
57
58
  # Handle previous versions of Sequel that use to_hash instead of as_hash
58
59
  alias to_hash as_hash
59
60
  remove_method :as_hash
60
61
  end
62
+ # :nocov:
61
63
 
62
64
  # In the case where both arguments given, use an optimized version.
63
65
  def to_hash_groups(key_column, value_column = nil, opts = Sequel::OPTS)
@@ -120,6 +122,11 @@ if defined?(Sequel::Postgres::PGArray)
120
122
  # pg_array extension previously loaded
121
123
 
122
124
  class Sequel::Postgres::PGArray::Creator
125
+ # :nocov:
126
+ # Avoid method redefined verbose warning
127
+ alias call call if method_defined?(:call)
128
+ # :nocov:
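The self-alias is a standard trick for redefining a method without tripping Ruby's verbose-mode "method redefined" warning: aliasing call to itself makes the upcoming def look like a fresh definition rather than an overwrite. A standalone sketch (class name hypothetical):

```ruby
$VERBOSE = true  # enable the "method redefined; discarding old ..." warning

class ArrayCreator  # hypothetical stand-in for Sequel::Postgres::PGArray::Creator
  def call(string)
    "pure-ruby parse of #{string}"
  end

  # Without this line, the def below would emit a redefinition warning.
  alias call call if method_defined?(:call)

  def call(string)
    "optimized parse of #{string}"
  end
end

puts ArrayCreator.new.call("{1,2}")  # => optimized parse of {1,2}
```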
129
+
123
130
  # Override Creator to use sequel_pg's C-based parser instead of the pure ruby parser.
124
131
  def call(string)
125
132
  Sequel::Postgres::PGArray.new(Sequel::Postgres.parse_pg_array(string, @converter), @type)
metadata CHANGED
@@ -1,14 +1,14 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: sequel_pg
3
3
  version: !ruby/object:Gem::Version
4
- version: 1.13.0
4
+ version: 1.16.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Jeremy Evans
8
8
  autorequire:
9
9
  bindir: bin
10
10
  cert_chain: []
11
- date: 2020-04-13 00:00:00.000000000 Z
11
+ date: 2022-08-16 00:00:00.000000000 Z
12
12
  dependencies:
13
13
  - !ruby/object:Gem::Dependency
14
14
  name: pg
@@ -101,7 +101,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
101
101
  - !ruby/object:Gem::Version
102
102
  version: '0'
103
103
  requirements: []
104
- rubygems_version: 3.1.2
104
+ rubygems_version: 3.3.7
105
105
  signing_key:
106
106
  specification_version: 4
107
107
  summary: Faster SELECTs when using Sequel with pg