iostreams 1.0.0 → 1.2.1

Files changed (76)
  1. checksums.yaml +4 -4
  2. data/README.md +4 -426
  3. data/Rakefile +7 -7
  4. data/lib/io_streams/builder.rb +4 -3
  5. data/lib/io_streams/bzip2/reader.rb +1 -1
  6. data/lib/io_streams/bzip2/writer.rb +1 -1
  7. data/lib/io_streams/deprecated.rb +2 -3
  8. data/lib/io_streams/encode/reader.rb +5 -8
  9. data/lib/io_streams/encode/writer.rb +1 -1
  10. data/lib/io_streams/io_streams.rb +23 -2
  11. data/lib/io_streams/line/reader.rb +4 -3
  12. data/lib/io_streams/path.rb +4 -4
  13. data/lib/io_streams/paths/http.rb +9 -10
  14. data/lib/io_streams/paths/matcher.rb +11 -12
  15. data/lib/io_streams/paths/s3.rb +6 -6
  16. data/lib/io_streams/paths/sftp.rb +39 -24
  17. data/lib/io_streams/pgp.rb +46 -112
  18. data/lib/io_streams/pgp/reader.rb +4 -6
  19. data/lib/io_streams/pgp/writer.rb +31 -7
  20. data/lib/io_streams/reader.rb +2 -2
  21. data/lib/io_streams/record/reader.rb +33 -8
  22. data/lib/io_streams/record/writer.rb +35 -12
  23. data/lib/io_streams/row/reader.rb +4 -4
  24. data/lib/io_streams/row/writer.rb +7 -9
  25. data/lib/io_streams/stream.rb +12 -13
  26. data/lib/io_streams/symmetric_encryption/reader.rb +1 -3
  27. data/lib/io_streams/symmetric_encryption/writer.rb +2 -6
  28. data/lib/io_streams/tabular.rb +41 -11
  29. data/lib/io_streams/tabular/header.rb +4 -4
  30. data/lib/io_streams/tabular/parser/array.rb +2 -4
  31. data/lib/io_streams/tabular/parser/csv.rb +3 -5
  32. data/lib/io_streams/tabular/parser/fixed.rb +4 -3
  33. data/lib/io_streams/tabular/parser/hash.rb +2 -4
  34. data/lib/io_streams/tabular/parser/json.rb +2 -4
  35. data/lib/io_streams/tabular/parser/psv.rb +5 -7
  36. data/lib/io_streams/tabular/utility/csv_row.rb +9 -17
  37. data/lib/io_streams/utils.rb +7 -3
  38. data/lib/io_streams/version.rb +1 -1
  39. data/lib/io_streams/writer.rb +1 -1
  40. data/lib/io_streams/xlsx/reader.rb +5 -5
  41. data/lib/io_streams/zip/reader.rb +1 -1
  42. data/lib/io_streams/zip/writer.rb +2 -2
  43. data/lib/iostreams.rb +34 -34
  44. data/test/builder_test.rb +74 -74
  45. data/test/bzip2_reader_test.rb +8 -13
  46. data/test/bzip2_writer_test.rb +8 -9
  47. data/test/deprecated_test.rb +25 -29
  48. data/test/encode_reader_test.rb +14 -18
  49. data/test/encode_writer_test.rb +29 -30
  50. data/test/gzip_reader_test.rb +8 -13
  51. data/test/gzip_writer_test.rb +10 -11
  52. data/test/io_streams_test.rb +84 -35
  53. data/test/line_reader_test.rb +35 -39
  54. data/test/line_writer_test.rb +8 -9
  55. data/test/minimal_file_reader.rb +1 -1
  56. data/test/path_test.rb +24 -24
  57. data/test/paths/file_test.rb +42 -42
  58. data/test/paths/http_test.rb +5 -5
  59. data/test/paths/matcher_test.rb +26 -17
  60. data/test/paths/s3_test.rb +44 -46
  61. data/test/paths/sftp_test.rb +18 -18
  62. data/test/pgp_reader_test.rb +13 -15
  63. data/test/pgp_test.rb +43 -44
  64. data/test/pgp_writer_test.rb +53 -28
  65. data/test/record_reader_test.rb +9 -10
  66. data/test/record_writer_test.rb +10 -11
  67. data/test/row_reader_test.rb +5 -6
  68. data/test/row_writer_test.rb +7 -8
  69. data/test/stream_test.rb +60 -62
  70. data/test/tabular_test.rb +111 -111
  71. data/test/test_helper.rb +26 -22
  72. data/test/utils_test.rb +7 -7
  73. data/test/xlsx_reader_test.rb +12 -12
  74. data/test/zip_reader_test.rb +14 -21
  75. data/test/zip_writer_test.rb +10 -10
  76. metadata +3 -3
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: e81d84bc2bb265b09acc66d39fa3ebc168d045127cf9942ce49cb06c5da369f2
-  data.tar.gz: 534e1bf05113578b1848fe196981b21c516115c783d778bb31ec78b0bc60b271
+  metadata.gz: 1dad581b0665992975c33f75b23f50964ae1311e025b7a1524fca4004f0ede2b
+  data.tar.gz: 4db01e4d6c2d36ce522df3b323a6e0d9f42de0d1644a282a0cea06479e979289
 SHA512:
-  metadata.gz: 7a9bdf64f5142ab31c6e3f1f620d6e619041c7b9802928fd2b9c2508b9c90b95e943188299632c2407660594a0aeecbe3b5fc61a7bee80703701cfdf2827d906
-  data.tar.gz: d9584118b6bb8088c2e1e54f60fc2370213d693b1130e0e104c5a6cfcc4027b776e736f39e71393e14c0510ab1353d76398fad9ff44ec32479d17bbb5f45c86d
+  metadata.gz: 4057a5c484129c60dbc9c84e462026da862900e17b0604b385164210f14814fbae6d065d015ee9171402eb9f793f33ac26c0ee7658f94b8cdeb0724c796cbe63
+  data.tar.gz: 5a84fe37c1eebc775bd84b9903181ff035c325b1233ab64990e586f5b0bd3fd51c21d4f1429f9b0e8ab64733e9b63be5c5e05df7bf026e9d1d8c0cd8a7716417
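The entries above are the SHA-256/SHA-512 hex digests of the two archives packed inside the `.gem` file, `metadata.gz` and `data.tar.gz`. A minimal Ruby sketch (the helper name is hypothetical) of recomputing such digests to verify a downloaded gem against its `checksums.yaml`:

```ruby
require "digest"

# Hypothetical helper: recompute the digests that a gem's checksums.yaml
# records for an archive extracted from the .gem file
# (metadata.gz or data.tar.gz), so a download can be verified.
def gem_archive_checksums(path)
  bytes = File.binread(path)
  {
    "SHA256" => Digest::SHA256.hexdigest(bytes),
    "SHA512" => Digest::SHA512.hexdigest(bytes)
  }
end
```

Comparing the computed hex strings against the `+` entries for 1.2.1 confirms the archives were fetched intact.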
data/README.md CHANGED
@@ -1,437 +1,15 @@
 # iostreams
-[![Gem Version](https://img.shields.io/gem/v/iostreams.svg)](https://rubygems.org/gems/iostreams) [![Build Status](https://travis-ci.org/rocketjob/iostreams.svg?branch=master)](https://travis-ci.org/rocketjob/iostreams) [![Downloads](https://img.shields.io/gem/dt/iostreams.svg)](https://rubygems.org/gems/iostreams) [![License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](http://opensource.org/licenses/Apache-2.0) ![](https://img.shields.io/badge/status-Beta-yellow.svg) [![Gitter chat](https://img.shields.io/badge/IRC%20(gitter)-Support-brightgreen.svg)](https://gitter.im/rocketjob/support)
+[![Gem Version](https://img.shields.io/gem/v/iostreams.svg)](https://rubygems.org/gems/iostreams) [![Build Status](https://travis-ci.org/rocketjob/iostreams.svg?branch=master)](https://travis-ci.org/rocketjob/iostreams) [![Downloads](https://img.shields.io/gem/dt/iostreams.svg)](https://rubygems.org/gems/iostreams) [![License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](http://opensource.org/licenses/Apache-2.0) ![](https://img.shields.io/badge/status-Production%20Ready-blue.svg) [![Gitter chat](https://img.shields.io/badge/IRC%20(gitter)-Support-brightgreen.svg)](https://gitter.im/rocketjob/support)

 Input and Output streaming for Ruby.

 ## Project Status

-Production Ready, but API is subject to breaking changes until V1 is released.
+Production Ready, heavily used in production environments, many as part of Rocket Job.

-## Features
+## Documentation

-Supported streams:
-
-* Zip
-* Gzip
-* BZip2
-* PGP (Requires GnuPG)
-* Xlsx (Reading)
-* Encryption using [Symmetric Encryption](https://github.com/reidmorrison/symmetric-encryption)
-
-Supported sources and/or targets:
-
-* File
-* HTTP (Read only)
-* AWS S3
-* SFTP
-
-Supported file formats:
-
-* CSV
-* Fixed width formats
-* JSON
-* PSV
-
-## Quick examples
-
-Read an entire file into memory:
-
-```ruby
-IOStreams.path('example.txt').read
-```
-
-Decompress an entire gzip file into memory:
-
-```ruby
-IOStreams.path('example.gz').read
-```
-
-Read and decompress the first file in a zip file into memory:
-
-```ruby
-IOStreams.path('example.zip').read
-```
-
-Read a file one line at a time
-
-```ruby
-IOStreams.path('example.txt').each do |line|
-  puts line
-end
-```
-
-Read a CSV file one line at a time, returning each line as an array:
-
-```ruby
-IOStreams.path('example.csv').each(:array) do |array|
-  p array
-end
-```
-
-Read a CSV file a record at a time, returning each line as a hash.
-The first line of the file is assumed to be the header line:
-
-```ruby
-IOStreams.path('example.csv').each(:hash) do |hash|
-  p hash
-end
-```
-
-Read a file using an http get,
-decompressing the named file in the zip file,
-returning each records from the named file as a hash:
-
-```ruby
-IOStreams.
-  path("https://www5.fdic.gov/idasp/Offices2.zip").
-  option(:zip, entry_file_name: 'OFFICES2_ALL.CSV').
-  reader(:hash) do |stream|
-  p stream.read
-end
-```
-
-Read the file without unzipping and streaming the first file in the zip:
-
-```ruby
-IOStreams.path('https://www5.fdic.gov/idasp/Offices2.zip').stream(:none).reader {|file| puts file.read}
-```
-
-
-## Introduction
-
-If all files were small, they could just be loaded into memory in their entirety. With the
-advent of very large files, often into several Gigabytes, or even Terabytes in size, loading
-them into memory is not feasible.
-
-In linux it is common to use pipes to stream data between processes.
-For example:
-
-```
-# Count the number of lines in a file that has been compressed with gzip
-cat abc.gz | gunzip -c | wc -l
-```
-
-For large files it is critical to be able to read and write these files as streams. Ruby has support
-for reading and writing files using streams, but has no built-in way of passing one stream through
-another to support for example compressing the data, encrypting it and then finally writing the result
-to a file. Several streaming implementations exist for languages such as `C++` and `Java` to chain
-together several streams, `iostreams` attempts to offer similar features for Ruby.
-
-```ruby
-# Read a compressed file:
-IOStreams.path("hello.gz").reader do |reader|
-  data = reader.read(1024)
-  puts "Read: #{data}"
-end
-```
-
-The true power of streams is shown when many streams are chained together to achieve the end
-result, without holding the entire file in memory, or ideally without needing to create
-any temporary files to process the stream.
-
-```ruby
-# Create a file that is compressed with GZip and then encrypted with Symmetric Encryption:
-IOStreams.path("hello.gz.enc").writer do |writer|
-  writer.write("Hello World")
-  writer.write("and some more")
-end
-```
-
-The power of the above example applies when the data being written starts to exceed hundreds of megabytes,
-or even gigabytes.
-
-By looking at the file name supplied above, `iostreams` is able to determine which streams to apply
-to the data being read or written. For example:
-* `hello.zip` => Compressed using Zip
-* `hello.zip.enc` => Compressed using Zip and then encrypted using Symmetric Encryption
-* `hello.gz.enc` => Compressed using GZip and then encrypted using Symmetric Encryption
-
-The objective is that all of these streaming processes are performed used streaming
-so that only the current portion of the file is loaded into memory as it moves
-through the entire file.
-Where possible each stream never goes to disk, which for example could expose
-un-encrypted data.
-
-## Examples
-
-While decompressing the file, display 128 characters at a time from the file.
-
-~~~ruby
-require "iostreams"
-IOStreams.path("abc.csv").reader do |io|
-  while (data = io.read(128))
-    p data
-  end
-end
-~~~
-
-While decompressing the file, display one line at a time from the file.
-
-~~~ruby
-IOStreams.path("abc.csv").each do |line|
-  puts line
-end
-~~~
-
-While decompressing the file, display each row from the csv file as an array.
-
-~~~ruby
-IOStreams.path("abc.csv").each(:array) do |array|
-  p array
-end
-~~~
-
-While decompressing the file, display each record from the csv file as a hash.
-The first line is assumed to be the header row.
-
-~~~ruby
-IOStreams.path("abc.csv").each(:hash) do |hash|
-  p hash
-end
-~~~
-
-Write data while compressing the file.
-
-~~~ruby
-IOStreams.path("abc.csv").writer do |io|
-  io.write("This")
-  io.write(" is ")
-  io.write(" one line\n")
-end
-~~~
-
-Write a line at a time while compressing the file.
-
-~~~ruby
-IOStreams.path("abc.csv").writer(:line) do |file|
-  file << "these"
-  file << "are"
-  file << "all"
-  file << "separate"
-  file << "lines"
-end
-~~~
-
-Write an array (row) at a time while compressing the file.
-Each array is converted to csv before being compressed with zip.
-
-~~~ruby
-IOStreams.path("abc.csv").writer(:array) do |io|
-  io << %w[name address zip_code]
-  io << %w[Jack There 1234]
-  io << ["Joe", "Over There somewhere", 1234]
-end
-~~~
-
-Write a hash (record) at a time while compressing the file.
-Each hash is converted to csv before being compressed with zip.
-The header row is extracted from the first hash supplied.
-
-~~~ruby
-IOStreams.path("abc.csv").writer(:hash) do |stream|
-  stream << {name: "Jack", address: "There", zip_code: 1234}
-  stream << {name: "Joe", address: "Over There somewhere", zip_code: 1234}
-end
-~~~
-
-Write to a string IO for testing, supplying the filename so that the streams can be determined.
-
-~~~ruby
-io = StringIO.new
-IOStreams.stream(io, file_name: "abc.csv").writer(:hash) do |stream|
-  stream << {name: "Jack", address: "There", zip_code: 1234}
-  stream << {name: "Joe", address: "Over There somewhere", zip_code: 1234}
-end
-puts io.string
-~~~
-
-Read a CSV file and write the output to an encrypted file in JSON format.
-
-~~~ruby
-IOStreams.path("sample.json.enc").writer(:hash) do |output|
-  IOStreams.path("sample.csv").each(:hash) do |record|
-    output << record
-  end
-end
-~~~
-
-## Copying between files
-
-Stream based file copying. Changes the file type without changing the file format. For example, compress or encrypt.
-
-Encrypt the contents of the file `sample.json` and write to `sample.json.enc`
-
-~~~ruby
-input = IOStreams.path("sample.json")
-IOStreams.path("sample.json.enc").copy_from(input)
-~~~
-
-Encrypt and compress the contents of the file `sample.json` with Symmetric Encryption and write to `sample.json.enc`
-
-~~~ruby
-input = IOStreams.path("sample.json")
-IOStreams.path("sample.json.enc").option(:enc, compress: true).copy_from(input)
-~~~
-
-Encrypt and compress the contents of the file `sample.json` with pgp and write to `sample.json.enc`
-
-~~~ruby
-input = IOStreams.path("sample.json")
-IOStreams.path("sample.json.pgp").option(:pgp, recipient: "sender@example.org").copy_from(input)
-~~~
-
-Decrypt the file `abc.csv.enc` and write it to `xyz.csv`.
-
-~~~ruby
-input = IOStreams.path("abc.csv.enc")
-IOStreams.path("xyz.csv").copy_from(input)
-~~~
-
-Decrypt file `ABC` that was encrypted with Symmetric Encryption,
-PGP encrypt the output file and write it to `xyz.csv.pgp` using the pgp key that was imported for `a@a.com`.
-
-~~~ruby
-input = IOStreams.path("ABC").stream(:enc)
-IOStreams.path("xyz.csv.pgp").option(:pgp, recipient: "a@a.com").copy_from(input)
-~~~
-
-To copy a file _without_ performing any conversions (ignore file extensions), set `convert` to `false`:
-
-~~~ruby
-input = IOStreams.path("sample.json.zip")
-IOStreams.path("sample.copy").copy_from(input, convert: false)
-~~~
-
-## Philosopy
-
-IOStreams can be used to work against a single stream. it's real capability becomes apparent when chaining together
-multiple streams to process data, without loading entire files into memory.
-
-#### Linux Pipes
-
-Linux has built-in support for streaming using the `|` (pipe operator) to send the output from one process to another.
-
-Example: count the number of lines in a compressed file:
-
-    gunzip -c hello.csv.gz | wc -l
-
-The file `hello.csv.gz` is uncompressed and returned to standard output, which in turn is piped into the standard
-input for `wc -l`, which counts the number of lines in the uncompressed data.
-
-As each block of data is returned from `gunzip` it is immediately passed into `wc` so that it
-can start counting lines of uncompressed data, without waiting until the entire file is decompressed.
-The uncompressed contents of the file are not written to disk before passing to `wc -l` and the file is not loaded
-into memory before passing to `wc -l`.
-
-In this way extremely large files can be processed with very little memory being used.
-
-#### Push Model
-
-In the Linux pipes example above this would be considered a "push model" where each task in the list pushes
-its output to the input of the next task.
-
-A major challenge or disadvantage with the push model is that buffering would need to occur between tasks since
-each task could complete at very different speeds. To prevent large memory usage the standard output from a previous
-task would have to be blocked to try and make it slow down.
-
-#### Pull Model
-
-Another approach with multiple tasks that need to process a single stream, is to move to a "pull model" where the
-task at the end of the list pulls a block from a previous task when it is ready to process it.
-
-#### IOStreams
-
-IOStreams uses the pull model when reading data, where each stream performs a read against the previous stream
-when it is ready for more data.
-
-When writing to an output stream, IOStreams uses the push model, where each block of data that is ready to be written
-is pushed to the task/stream in the list. The write push only returns once it has traversed all the way down to
-the final task / stream in the list, this avoids complex buffering issues between each task / stream in the list.
-
-Example: Implementing in Ruby: `gunzip -c hello.csv.gz | wc -l`
-
-~~~ruby
-line_count = 0
-IOStreams::Gzip::Reader.open("hello.csv.gz") do |input|
-  IOStreams::Line::Reader.open(input) do |lines|
-    lines.each { line_count += 1}
-  end
-end
-puts "hello.csv.gz contains #{line_count} lines"
-~~~
-
-Since IOStreams can autodetect file types based on the file extension, `IOStreams.reader` can figure which stream
-to start with:
-~~~ruby
-line_count = 0
-IOStreams.path("hello.csv.gz").reader do |input|
-  IOStreams::Line::Reader.open(input) do |lines|
-    lines.each { line_count += 1}
-  end
-end
-puts "hello.csv.gz contains #{line_count} lines"
-~~~
-
-Since we know we want a line reader, it can be simplified using `#reader(:line)`:
-~~~ruby
-line_count = 0
-IOStreams.path("hello.csv.gz").reader(:line) do |lines|
-  lines.each { line_count += 1}
-end
-puts "hello.csv.gz contains #{line_count} lines"
-~~~
-
-It can be simplified even further using `#each`:
-~~~ruby
-line_count = 0
-IOStreams.path("hello.csv.gz").each { line_count += 1}
-puts "hello.csv.gz contains #{line_count} lines"
-~~~
-
-The benefit in all of the above cases is that the file can be any arbitrary size and only one block of the file
-is held in memory at any time.
-
-#### Chaining
-
-In the above example only 2 streams were used. Streams can be nested as deep as necessary to process data.
-
-Example, search for all occurrences of the word apple, cleansing the input data stream of non printable characters
-and converting to valid US ASCII.
-
-~~~ruby
-apple_count = 0
-IOStreams::Gzip::Reader.open("hello.csv.gz") do |input|
-  IOStreams::Encode::Reader.open(input,
-                                 encoding: "US-ASCII",
-                                 encode_replace: "",
-                                 encode_cleaner: :printable) do |cleansed|
-    IOStreams::Line::Reader.open(cleansed) do |lines|
-      lines.each { |line| apple_count += line.scan("apple").count}
-    end
-  end
-end
-puts "Found the word 'apple' #{apple_count} times in hello.csv.gz"
-~~~
-
-Let IOStreams perform the above stream chaining automatically under the covers:
-
-~~~ruby
-apple_count = 0
-IOStreams.path("hello.csv.gz").
-  option(:encode, encoding: "US-ASCII", replace: "", cleaner: :printable).
-  each do |line|
-  apple_count += line.scan("apple").count
-end
-
-puts "Found the word 'apple' #{apple_count} times in hello.csv.gz"
-~~~
-
-## Notes
-
-* Due to the nature of Zip, both its Reader and Writer methods will create
-  a temp file when reading from or writing to a stream.
-  Recommended to use Gzip over Zip since it can be streamed without requiring temp files.
-* Zip becomes exponentially slower with very large files, especially files
-  that exceed 4GB when uncompressed. Highly recommend using GZip for large files.
+[Semantic Logger Guide](http://rocketjob.github.io/iostreams)

 ## Versioning