purplelight 0.1.9 → 0.1.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: b87960253dbd1ab6aae3b60dc790068d851f3798b124c23451bdae96734d6d67
-   data.tar.gz: b1eab05f8580a282b836da8eddb5dfe964ef6cb90a94300304ecd0426f786998
+   metadata.gz: f0f51fd601a59915a2a022831663fd4f2468e781b68b96f59d396359be49adbc
+   data.tar.gz: c899a18e7ce390bfc05f832dd32248aa8cbdc7b43bccf86197350e2c7929e7a6
  SHA512:
-   metadata.gz: 7bff1db0acebc6416b7dd484fe882947bc74927a6833e99a0fec64d03203babfbf625f44c6a8d6c29cab31a6bc7ccae31de3a7d0b55283d073053a21515faeb3
-   data.tar.gz: b56bd93e12571aafe2ab47a1dc087d3429c4a15a731d50159552fbe70a0f63b40ee2d44fb23bf27752045df9f6e146376af906a00afdfada7e068420a4012925
+   metadata.gz: f6546911873ed22865b9d4cdd2cc62d855ab3b991030808d8f49f3e054727a406b80c7dc43c518a450915152f2934dcba180d53bf75807c540eef893b3ca50b8
+   data.tar.gz: 5e7176eec64956388e72fd3d894db12e006a18edd4aade7eaf13b144381802932d7207ffefac6ad06157c03363f41acec4ca997871fb8abe8efc9e06e2238804
data/README.md CHANGED
@@ -9,7 +9,7 @@ Purplelight is published on RubyGems: [purplelight on RubyGems](https://rubygems
  Add to your Gemfile:
 
  ```ruby
- gem 'purplelight', '~> 0.1.9'
+ gem 'purplelight', '~> 0.1.10'
  ```
 
  Or install directly:
@@ -138,10 +138,21 @@ Purplelight.snapshot(
    output: '/data/exports',
    format: :parquet,
    sharding: { mode: :single_file, prefix: 'users' },
+   # Optional: tune row group size
+   # parquet_row_group: 50_000,
    resume: { enabled: true }
  )
  ```
 
+ ### Environment variables (optional)
+
+ CLI flags take precedence, but these environment variables can set sensible defaults:
+
+ - `PL_ZSTD_LEVEL`: default zstd compression level used by writers.
+ - `PL_WRITE_CHUNK_BYTES`: JSONL join/write chunk size in bytes.
+ - `PL_PARQUET_ROW_GROUP`: default Parquet row group size (rows).
+ - `PL_TELEMETRY`: set to `1` to enable telemetry by default.
+
  ### CLI
 
  ```bash
@@ -149,9 +160,46 @@ bundle exec bin/purplelight \
    --uri "$MONGO_URL" \
    --db mydb --collection users \
    --output /data/exports \
-   --format jsonl --partitions 8 --by-size $((256*1024*1024)) --prefix users
+   --format jsonl --partitions 8 --by-size $((256*1024*1024)) --prefix users \
+   --queue-mb 512 --rotate-mb 512 --compression zstd --compression-level 6 \
+   --read-preference secondary --read-tags nodeType=ANALYTICS,region=EAST \
+   --read-concern majority --no-cursor-timeout true
  ```
 
+ #### CLI options (reference)
+
+ - `--uri URI` (required): Mongo connection string.
+ - `--db NAME` (required): Database name.
+ - `--collection NAME` (required): Collection name.
+ - `--output PATH` (required): Output directory or file path.
+ - `--format FORMAT`: `jsonl|csv|parquet` (default `jsonl`).
+ - `--compression NAME`: `zstd|gzip|none` (default `zstd`).
+ - `--compression-level N`: Compression level (zstd or gzip; writer-specific defaults if omitted).
+ - `--partitions N`: Number of reader partitions (defaults to ≥4 and ≤32 based on cores).
+ - `--batch-size N`: Mongo batch size (default 2000).
+ - `--queue-mb MB`: In-memory queue size in MB (default 256).
+ - `--rotate-mb MB`: Target rotate size for JSONL/CSV parts in MB (default 256). For multi-part outputs, pairs well with `--by-size`.
+ - `--by-size BYTES`: Plan size-based sharding for multi-part outputs.
+ - `--single-file`: Single output file (CSV/Parquet; JSONL remains multi-part).
+ - `--prefix NAME`: Output filename prefix (defaults to collection name when output is a directory).
+ - `--query JSON`: Filter as JSON/Extended JSON (supports `$date`, `$oid`, etc.).
+ - `--projection JSON`: Projection as JSON, e.g. `{"_id":1,"email":1}`.
+ - `--read-preference MODE`: `primary|primary_preferred|secondary|secondary_preferred|nearest`.
+ - `--read-tags key=value[,key=value...]`: Tag sets for node pinning.
+ - `--read-concern LEVEL`: `majority|local|linearizable|available|snapshot`.
+ - `--no-cursor-timeout BOOL`: Toggle `noCursorTimeout` (default true).
+ - `--parquet-row-group N`: Parquet row group size (rows).
+ - `--write-chunk-mb MB`: JSONL encode/write chunk size before enqueueing.
+ - `--writer-threads N` (experimental): Number of writer threads (JSONL only).
+ - `--telemetry on|off`: Force enable/disable telemetry output.
+ - `--resume-overwrite-incompatible`: Overwrite an existing incompatible manifest to safely resume anew.
+ - `--dry-run`: Print effective read preference JSON and exit (no snapshot).
+ - `--version`, `--help`: Utility commands.
+
+ Notes:
+ - Compression backend selection order is: requested format → `zstd-ruby` → `zstds` → `gzip`.
+ - `--single-file` and `--by-size` update only the sharding mode/params and preserve any provided `--prefix`.
+
  ### Architecture
 
  ```mermaid
@@ -181,19 +229,28 @@ Key points:
 
  ### Tuning for performance
 
- - Partitions: start with `2 × cores` (default). Increase gradually if reads are underutilized; too high can add overhead.
- - Batch size: 2k–10k usually works well. Larger batches reduce cursor roundtrips, but can raise latency/memory.
- - Queue size: increase to 256–512MB to reduce backpressure on readers for fast disks.
- - Compression: use `:zstd` for good ratio; for max speed, try `:gzip` with low level.
- - Rotation size: larger (512MB–1GB) reduces finalize overhead for many parts.
- - Read preference: offload to secondaries or tagged analytics nodes when available.
+ - **Partitions**: start with `2 × cores` (default). Increase gradually if reads are underutilized; too high can add overhead. CLI: `--partitions`.
+ - **Batch size**: 2k–10k usually works well. Larger batches reduce cursor roundtrips, but can raise latency/memory. CLI: `--batch-size`.
+ - **Queue size**: increase to 256–512MB to reduce backpressure on readers for fast disks. CLI: `--queue-mb`.
+ - **Compression**: prefer `zstd`; adjust level to balance speed/ratio. CLI: `--compression zstd --compression-level N`. For max speed, try `--compression gzip --compression-level 1`.
+ - **Rotation size**: larger (512MB–1GB) reduces finalize overhead for many parts. CLI: `--rotate-mb` (and/or `--by-size`).
+ - **JSONL chunking**: tune builder write chunk size for throughput. CLI: `--write-chunk-mb`.
+ - **Parquet row groups**: choose a row group size that fits downstream readers. CLI: `--parquet-row-group`.
+ - **Read preference**: offload to secondaries or tagged analytics nodes when available. CLI: `--read-preference`, `--read-tags`.
+ - **Read concern**: pick an appropriate level for consistency/latency trade-offs. CLI: `--read-concern`.
+ - **Cursor timeout**: for very long scans, leave `noCursorTimeout` enabled. CLI: `--no-cursor-timeout true|false`.
+ - **Telemetry**: enable to inspect timing breakdowns; disable for minimal output. CLI: `--telemetry on|off`.
 
  Benchmarking (optional):
 
  ```bash
- # 1M docs benchmark with tunables
+ # 1M docs benchmark with tunables (JSONL)
  BENCH=1 BENCH_PARTITIONS=16 BENCH_BATCH_SIZE=8000 BENCH_QUEUE_MB=512 BENCH_ROTATE_MB=512 BENCH_COMPRESSION=gzip \
    bundle exec rspec spec/benchmark_perf_spec.rb --format doc
+
+ # Parquet benchmark (requires Arrow/Parquet)
+ BENCH=1 BENCH_FORMAT=parquet BENCH_PARQUET_ROW_GROUP=50000 BENCH_PARTITIONS=16 BENCH_BATCH_SIZE=8000 \
+   bundle exec rspec spec/benchmark_perf_spec.rb --format doc
  ```
 
  ### Read preference and node pinning
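The option reference and tuning notes above describe tag-based node pinning via `--read-preference` and `--read-tags`. As a hedged illustration only (the parsing helper below is hypothetical, not purplelight's code), this is how a tag string such as `nodeType=ANALYTICS,region=EAST` maps onto the read-preference document the Mongo Ruby driver expects, which is the kind of structure `--dry-run` is documented to print:

```ruby
# Hypothetical helper for illustration; purplelight's own CLI parsing may differ.
def read_preference_from(mode:, tags:)
  # "nodeType=ANALYTICS,region=EAST" -> { "nodeType" => "ANALYTICS", "region" => "EAST" }
  tag_set = tags.split(',').to_h { |pair| pair.split('=', 2) }
  { mode: mode.to_sym, tag_sets: [tag_set] }
end

read_preference_from(mode: 'secondary', tags: 'nodeType=ANALYTICS,region=EAST')
# => { mode: :secondary, tag_sets: [{ "nodeType" => "ANALYTICS", "region" => "EAST" }] }
```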
@@ -262,3 +319,8 @@ Benchmark results:
  Finished in 14.02 seconds (files took 0.31974 seconds to load)
  1 example, 0 failures
  ```
+
+ Additional BENCH variables:
+
+ - `BENCH_FORMAT`: `jsonl|parquet` (default `jsonl`).
+ - `BENCH_PARQUET_ROW_GROUP`: Parquet row group size (rows), e.g. `50000`.
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Purplelight
-   VERSION = '0.1.9'
+   VERSION = '0.1.10'
  end
@@ -28,6 +28,7 @@ module Purplelight
        @closed = false
        @file_seq = 0
        @part_index = nil
+       @pq_writer = nil
 
        ensure_dependencies!
        reset_buffers
@@ -36,6 +37,7 @@ module Purplelight
      def write_many(array_of_docs)
        ensure_open!
        array_of_docs.each { |doc| @buffer_docs << doc }
+       flush_row_groups_if_needed
        @manifest&.add_progress_to_part!(index: @part_index, rows_delta: array_of_docs.length, bytes_delta: 0)
      end
 
@@ -43,15 +45,7 @@ module Purplelight
        return if @closed
 
        ensure_open!
-       unless @buffer_docs.empty?
-         t_tbl = Thread.current[:pl_telemetry]&.start(:parquet_table_build_time)
-         table = build_table(@buffer_docs)
-         Thread.current[:pl_telemetry]&.finish(:parquet_table_build_time, t_tbl)
-
-         t_w = Thread.current[:pl_telemetry]&.start(:parquet_write_time)
-         write_table(table, @writer_path, append: false)
-         Thread.current[:pl_telemetry]&.finish(:parquet_write_time, t_w)
-       end
+       flush_all_row_groups
        finalize_current_part!
        @closed = true
      end
@@ -92,22 +86,32 @@ module Purplelight
      end
 
      def write_table(table, path, append: false) # rubocop:disable Lint/UnusedMethodArgument
-       # Prefer Arrow's save with explicit parquet format; compression defaults per build.
-       if table.respond_to?(:save)
-         table.save(path, format: :parquet)
+       # Stream via ArrowFileWriter when available to avoid building huge tables
+       if defined?(Parquet::ArrowFileWriter)
+         unless @pq_writer
+           @pq_writer = Parquet::ArrowFileWriter.open(table.schema, path)
+         end
+         # Prefer passing row_group_size; fallback to single-arg for older APIs
+         begin
+           @pq_writer.write_table(table, @row_group_size)
+         rescue ArgumentError
+           @pq_writer.write_table(table)
+         end
          return
        end
-       # Fallback to red-parquet writer
-       if defined?(Parquet::ArrowFileWriter)
-         writer = Parquet::ArrowFileWriter.open(table.schema, path)
-         writer.write_table(table)
-         writer.close
+       # Fallback to one-shot save when streaming API is not available
+       if table.respond_to?(:save)
+         table.save(path, format: :parquet)
          return
        end
        raise 'Parquet writer not available in this environment'
      end
 
      def finalize_current_part!
+       if @pq_writer
+         @pq_writer.close
+         @pq_writer = nil
+       end
        @manifest&.complete_part!(index: @part_index, checksum: nil)
        @file_seq += 1 unless @single_file
        @writer_path = nil
@@ -138,5 +142,38 @@ module Purplelight
 
        value
      end
+
+     def flush_row_groups_if_needed
+       return if @buffer_docs.empty?
+
+       while @buffer_docs.length >= @row_group_size
+         group = @buffer_docs.shift(@row_group_size)
+         t_tbl = Thread.current[:pl_telemetry]&.start(:parquet_table_build_time)
+         table = build_table(group)
+         Thread.current[:pl_telemetry]&.finish(:parquet_table_build_time, t_tbl)
+
+         t_w = Thread.current[:pl_telemetry]&.start(:parquet_write_time)
+         write_table(table, @writer_path, append: true)
+         Thread.current[:pl_telemetry]&.finish(:parquet_write_time, t_w)
+       end
+     end
+
+     def flush_all_row_groups
+       return if @buffer_docs.empty?
+
+       # Flush any full groups first
+       flush_row_groups_if_needed
+       return if @buffer_docs.empty?
+
+       # Flush remaining as a final smaller group
+       t_tbl = Thread.current[:pl_telemetry]&.start(:parquet_table_build_time)
+       table = build_table(@buffer_docs)
+       Thread.current[:pl_telemetry]&.finish(:parquet_table_build_time, t_tbl)
+
+       t_w = Thread.current[:pl_telemetry]&.start(:parquet_write_time)
+       write_table(table, @writer_path, append: true)
+       Thread.current[:pl_telemetry]&.finish(:parquet_write_time, t_w)
+       @buffer_docs.clear
+     end
    end
  end
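The writer diff above replaces the old build-one-table-at-close approach with incremental row-group flushes through `Parquet::ArrowFileWriter`. Below is a minimal, self-contained sketch of that streaming pattern, assuming the red-arrow and red-parquet gems are installed; the column names, sample data, output path, and row-group size are illustrative, not purplelight defaults:

```ruby
require 'arrow'   # red-arrow
require 'parquet' # red-parquet

row_group_size = 1_000
docs = Array.new(2_500) { |i| { 'id' => i, 'email' => "user#{i}@example.com" } }

writer = nil
docs.each_slice(row_group_size) do |group|
  # Build one small Arrow table per row group instead of one huge table at the end.
  table = Arrow::Table.new(
    'id'    => Arrow::Int64Array.new(group.map { |d| d['id'] }),
    'email' => Arrow::StringArray.new(group.map { |d| d['email'] })
  )
  # Open the writer lazily from the first group's schema, as the diff does.
  writer ||= Parquet::ArrowFileWriter.open(table.schema, 'users.parquet')
  begin
    writer.write_table(table, row_group_size) # newer red-parquet accepts a row-group size
  rescue ArgumentError
    writer.write_table(table)                 # older API takes only the table
  end
end
writer&.close
```

Flushing fixed-size groups keeps memory bounded for large exports, which is what the new `flush_row_groups_if_needed` / `flush_all_row_groups` pair does inside the gem.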
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: purplelight
  version: !ruby/object:Gem::Version
-   version: 0.1.9
+   version: 0.1.10
  platform: ruby
  authors:
  - Alexander Nicholson