data_drain 0.1.9 → 0.1.13
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +18 -0
- data/README.md +101 -22
- data/data_drain.gemspec +1 -0
- data/lib/data_drain/engine.rb +9 -3
- data/lib/data_drain/glue_runner.rb +43 -0
- data/lib/data_drain/version.rb +1 -1
- data/lib/data_drain.rb +1 -0
- metadata +17 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 9c484ac47a5f767491fa8d8e48dbdb53ccdd55d756a6a0eb90d7bbeb0d28f68a
+  data.tar.gz: 18526e071ac821f7c19127cb53dad875108ede9ab9b7bfe40a1d17bde877a6cc
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: abf18e9f987f24cb2e58fb71be8a15f92f867f6e671b174e6414b7a44a5626a316235c091bd59708c1ddc93c755db87ec92af117573c68213d0f2238165728be
+  data.tar.gz: '00124804ef7f7c9dc2c67d47a1a2304d4dc996b0caff24548acbe913e85f5ae43d410eac6725f264f9b9648d49c7dc8bdc0baed77e5ea958bfa3fc8cea08ee9d'
data/CHANGELOG.md
CHANGED
@@ -1,5 +1,23 @@
 ## [Unreleased]
 
+## [0.1.13] - 2026-03-17
+
+- Feature: Fully parameterized Glue orchestration. Added `s3_bucket`, `s3_folder`, and `partition_by` as dynamic arguments, so the same Glue Job can serve multiple tables and destinations.
+
+## [0.1.12] - 2026-03-17
+
+- Feature: Dynamic database parameterization in `GlueRunner` and the PySpark script. `db_url`, `db_user`, `db_password`, and `db_table` are now passed as arguments to the Glue Job.
+
+## [0.1.11] - 2026-03-17
+
+- Feature: Added `DataDrain::GlueRunner` to orchestrate AWS Glue Jobs.
+- Feature: Official support for Big Data processing (e.g. 1 TB tables) by delegating to AWS Glue.
+- Documentation: Added a master PySpark script to the README, compatible with the gem's format.
+
+## [0.1.10] - 2026-03-17
+
+- Feature: Added the `skip_export` option to `DataDrain::Engine`. It lets external tools (such as AWS Glue) handle the data export, leaving DataDrain responsible only for integrity validation and the PostgreSQL purge.
+
 ## [0.1.9] - 2026-03-17
 
 - Fix: Improved date-range precision in SQL queries by using semi-open bounds (<) to avoid losing records due to microseconds.
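The 0.1.9 fix above is about semi-open timestamp windows. As a standalone illustration (the values and variable names here are hypothetical, not gem code), this sketch shows why `created_at >= start AND created_at < end` keeps sub-second records that a closed `BETWEEN`-style upper bound drops:

```ruby
require "time"

# Hypothetical example values, not taken from data_drain itself.
record_at   = Time.parse("2026-03-31 23:59:59.999999")
window_from = Time.parse("2026-03-01 00:00:00")
window_to   = Time.parse("2026-04-01 00:00:00")  # exclusive upper bound

# Semi-open window [window_from, window_to): the microsecond record is kept.
semi_open = record_at >= window_from && record_at < window_to

# Closed BETWEEN-style bound without sub-second precision: the record is lost.
closed_end = Time.parse("2026-03-31 23:59:59")
between    = record_at >= window_from && record_at <= closed_end
```

Here `semi_open` is true while `between` is false, which is exactly the class of record the fix protects.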
data/README.md
CHANGED
@@ -98,39 +98,118 @@ ingestor.call
 
 ### 2. DB Extraction and Purge (Engine)
 
-Ideal for creating rolling retention windows (e.g. keep only 6 months of live data in Postgres and archive the rest).
+Ideal for creating rolling retention windows (e.g. keep only 6 months of live data in Postgres and archive the rest).
 
-
-
-task versions: :environment do
-  target_date = 6.months.ago.beginning_of_month
-
-  select_sql = <<~SQL
-    id, item_type, item_id, event, whodunnit,
-    object::VARCHAR AS object,
-    object_changes::VARCHAR AS object_changes,
-    created_at,
-    EXTRACT(YEAR FROM created_at)::INT AS year,
-    EXTRACT(MONTH FROM created_at)::INT AS month,
-    isp_id
-  SQL
+**Purge Mode with External Export (AWS Glue):**
+If your architecture already uses **AWS Glue** or **AWS EMR** to move heavy data, you can configure DataDrain to act purely as an **Integrity Guarantor**. In this mode the engine skips the export step but mathematically verifies that the data exists in the Data Lake before deleting it from PostgreSQL.
 
+```ruby
+# lib/tasks/archive_with_glue.rake
+task purge_only: :environment do
   engine = DataDrain::Engine.new(
     bucket: 'my-bucket-store',
-    start_date:
-    end_date:
+    start_date: 6.months.ago.beginning_of_month,
+    end_date: 6.months.ago.end_of_month,
     table_name: 'versions',
-
-
-    where_clause: "event = 'update'"
+    partition_keys: %w[year month],
+    skip_export: true # ⚡️ Exports nothing; only validates S3 and purges Postgres
   )
 
-  # Counts, exports to Parquet, verifies integrity, and purges Postgres.
   engine.call
 end
 ```
 
-### 3.
+### 3. Orchestration with AWS Glue (Big Data)
+
+For very large tables (**e.g. > 500 GB or 1 TB**), it is best to delegate the data movement to **AWS Glue** (built on Apache Spark) so the Ruby server is not saturated. `DataDrain` acts as the orchestrator: it triggers the Job, waits for it to finish, and then runs validation and the purge.
+
+```ruby
+# 1. Trigger the Glue Job and wait for it to finish successfully
+config = DataDrain.configuration
+bucket = "my-bucket"
+table = "versions"
+
+DataDrain::GlueRunner.run_and_wait(
+  "my-glue-export-job",
+  {
+    "--start_date" => start_date.to_fs(:db),
+    "--end_date" => end_date.to_fs(:db),
+    "--s3_bucket" => bucket,
+    "--s3_folder" => table,
+    "--db_url" => "jdbc:postgresql://#{config.db_host}:#{config.db_port}/#{config.db_name}",
+    "--db_user" => config.db_user,
+    "--db_password" => config.db_pass,
+    "--db_table" => table,
+    "--partition_by" => "year,month,isp_id" # <--- Dynamic columns
+  }
+)
+
+# 2. Once Glue has exported the terabyte, DataDrain validates integrity and purges Postgres
+DataDrain::Engine.new(
+  bucket: bucket,
+  folder_name: table,
+  start_date: start_date,
+  end_date: end_date,
+  table_name: table,
+  partition_keys: %w[year month isp_id],
+  skip_export: true # <--- Validation + Purge mode
+).call
+```
+
+#### AWS Glue (PySpark) script compatible with DataDrain
+
+Create a Job in the AWS Glue console (Glue 4.0+) and use this script as a base. It is designed to extract data from PostgreSQL dynamically:
+
+```python
+import sys
+from awsglue.utils import getResolvedOptions
+from pyspark.context import SparkContext
+from awsglue.context import GlueContext
+from awsglue.job import Job
+from pyspark.sql.functions import col, year, month
+
+# Parameters received from DataDrain::GlueRunner
+args = getResolvedOptions(sys.argv, [
+    'JOB_NAME', 'start_date', 'end_date', 's3_bucket', 's3_folder',
+    'db_url', 'db_user', 'db_password', 'db_table', 'partition_by'
+])
+
+sc = SparkContext()
+glueContext = GlueContext(sc)
+spark = glueContext.spark_session
+job = Job(glueContext)
+job.init(args['JOB_NAME'], args)
+
+# 1. Read from PostgreSQL (dynamic JDBC). Spark's JDBC source takes a filtered
+# read through the "query" option (it cannot be combined with "dbtable").
+options = {
+    "url": args['db_url'],
+    "user": args['db_user'],
+    "password": args['db_password'],
+    "query": f"SELECT * FROM {args['db_table']} WHERE created_at >= '{args['start_date']}' AND created_at < '{args['end_date']}'"
+}
+
+df = spark.read.format("jdbc").options(**options).load()
+
+# 2. Add derived partition columns (Hive partitioning)
+df_final = df.withColumn("year", year(col("created_at"))) \
+             .withColumn("month", month(col("created_at")))
+
+# 3. Write to S3 as Parquet with ZSTD compression
+# Build the path dynamically: s3://bucket/folder/
+output_path = f"s3://{args['s3_bucket']}/{args['s3_folder']}/"
+partitions = args['partition_by'].split(",")
+
+df_final.write.mode("overwrite") \
+    .partitionBy(*partitions) \
+    .format("parquet") \
+    .option("compression", "zstd") \
+    .save(output_path)
+
+job.commit()
+```
+
+### 4. Querying the Data Lake (Record)
 
 To query the archived data without leaving Ruby, create a model that inherits from `DataDrain::Record`.
 
data/data_drain.gemspec
CHANGED
@@ -26,6 +26,7 @@ Gem::Specification.new do |spec|
 
   # 💡 Core gem dependencies
   spec.add_dependency "activemodel", ">= 6.0"
+  spec.add_dependency "aws-sdk-glue", "~> 1.0"
  spec.add_dependency "aws-sdk-s3", "~> 1.114"
  spec.add_dependency "duckdb", "~> 1.4"
  spec.add_dependency "pg", ">= 1.2"
data/lib/data_drain/engine.rb
CHANGED
@@ -20,6 +20,7 @@ module DataDrain
     # @option options [Array<String, Symbol>] :partition_keys Columns to partition by.
     # @option options [String] :primary_key (Optional) Primary key for deletion. Defaults to 'id'.
     # @option options [String] :where_clause (Optional) Extra SQL condition.
+    # @option options [Boolean] :skip_export (Optional) If true, skips the Parquet export and performs only validation and purge.
     def initialize(options)
       @start_date = options.fetch(:start_date).beginning_of_day
 
@@ -34,6 +35,7 @@ module DataDrain
       @primary_key = options.fetch(:primary_key, "id")
       @where_clause = options[:where_clause]
       @bucket = options[:bucket]
+      @skip_export = options.fetch(:skip_export, false)
 
       @config = DataDrain.configuration
       @logger = @config.logger
@@ -43,7 +45,7 @@ module DataDrain
       @duckdb = database.connect
     end
 
-    # Runs the engine's full flow: Setup, Count, Export, Verification, and Purge.
+    # Runs the engine's full flow: Setup, Count, Export (optional), Verification, and Purge.
     #
     # @return [Boolean] `true` if the process finished successfully, `false` if the integrity check failed.
     def call
@@ -58,8 +60,12 @@ module DataDrain
         return true
       end
 
-
-
+      if @skip_export
+        @logger.info "[DataDrain Engine] ⏭️ 'Skip Export' mode active. Skipping export step..."
+      else
+        @logger.info "[DataDrain Engine] 📦 Exporting #{@pg_count} records to Parquet..."
+        export_to_parquet
+      end
 
       if verify_integrity
         purge_from_postgres
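The branching the diff above adds to `Engine#call` can be exercised in isolation. This is a simplified sketch, not the gem's code: the exporter/verifier/purger lambdas are hypothetical stand-ins for the engine's private steps.

```ruby
# Minimal sketch of the skip_export control flow: export only when not
# skipped, and never purge unless verification passes.
class MiniEngine
  def initialize(skip_export:, exporter:, verifier:, purger:)
    @skip_export = skip_export
    @exporter = exporter
    @verifier = verifier
    @purger = purger
  end

  def call
    @exporter.call unless @skip_export  # e.g. Glue already exported the data
    return false unless @verifier.call  # integrity gate before any deletion
    @purger.call
    true
  end
end

exported = false
purged = false
ok = MiniEngine.new(
  skip_export: true,
  exporter: -> { exported = true },
  verifier: -> { true },  # pretend the S3 counts matched
  purger:   -> { purged = true }
).call
# With skip_export: true, the exporter never runs but the purge still does.
```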
data/lib/data_drain/glue_runner.rb
ADDED
@@ -0,0 +1,43 @@
+# frozen_string_literal: true
+
+require "aws-sdk-glue"
+
+module DataDrain
+  # Orchestrator for AWS Glue. Triggers and monitors Jobs in AWS
+  # to delegate bulk data movement (e.g. 1 TB tables).
+  class GlueRunner
+    # Triggers a Glue Job and waits for it to finish successfully.
+    #
+    # @param job_name [String] Name of the Job in the AWS console.
+    # @param arguments [Hash] Run arguments (must start with --).
+    # @param polling_interval [Integer] Seconds to wait between status checks.
+    # @return [Boolean] true if the Job finished successfully (SUCCEEDED).
+    # @raise [RuntimeError] If the Job fails or is stopped.
+    def self.run_and_wait(job_name, arguments = {}, polling_interval: 30)
+      config = DataDrain.configuration
+      client = Aws::Glue::Client.new(region: config.aws_region)
+
+      config.logger.info "[DataDrain GlueRunner] 🚀 Triggering Job: #{job_name}..."
+      resp = client.start_job_run(job_name: job_name, arguments: arguments)
+      run_id = resp.job_run_id
+
+      loop do
+        run_info = client.get_job_run(job_name: job_name, run_id: run_id).job_run
+        status = run_info.job_run_state
+
+        case status
+        when "SUCCEEDED"
+          config.logger.info "[DataDrain GlueRunner] ✅ Job completed successfully."
+          return true
+        when "FAILED", "STOPPED", "TIMEOUT"
+          error_msg = run_info.error_message || "No error message available."
+          config.logger.error "[DataDrain GlueRunner] ❌ ERROR: Job finished with status #{status}: #{error_msg}"
+          raise "Glue Job #{job_name} (Run ID: #{run_id}) failed with status #{status}."
+        else
+          config.logger.info "[DataDrain GlueRunner] ⏳ Status: #{status}. Waiting #{polling_interval}s..."
+          sleep polling_interval
+        end
+      end
+    end
+  end
+end
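The polling loop in `run_and_wait` is a small state machine over Glue's run states. A sketch of the same pattern, driven by a stubbed status sequence instead of a real `Aws::Glue::Client` (function and variable names here are illustrative only):

```ruby
# Illustrative polling loop in the style of GlueRunner.run_and_wait:
# succeed on SUCCEEDED, raise on terminal failure states, otherwise wait.
def wait_for(statuses, polling_interval: 0)
  loop do
    status = statuses.shift  # stand-in for client.get_job_run(...).job_run_state
    case status
    when "SUCCEEDED"
      return true
    when "FAILED", "STOPPED", "TIMEOUT"
      raise "Job finished with status #{status}"
    else
      sleep polling_interval  # e.g. "RUNNING": keep polling
    end
  end
end

result = wait_for(%w[RUNNING RUNNING SUCCEEDED])

failed = begin
  wait_for(%w[RUNNING FAILED])
rescue RuntimeError
  :raised
end
```

Treating `STOPPED` and `TIMEOUT` as hard failures (not just `FAILED`) matters here: the Engine's purge must never proceed on a run that did not reach `SUCCEEDED`.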
data/lib/data_drain/version.rb
CHANGED
data/lib/data_drain.rb
CHANGED
@@ -8,6 +8,7 @@ require_relative "data_drain/storage"
 require_relative "data_drain/engine"
 require_relative "data_drain/record"
 require_relative "data_drain/file_ingestor"
+require_relative "data_drain/glue_runner"
 
 # Register the custom ActiveModel JSON type
 require_relative "data_drain/types/json_type"
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: data_drain
 version: !ruby/object:Gem::Version
-  version: 0.1.
+  version: 0.1.13
 platform: ruby
 authors:
 - Gabriel
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2026-03-
+date: 2026-03-20 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activemodel
@@ -24,6 +24,20 @@ dependencies:
     - - ">="
     - !ruby/object:Gem::Version
       version: '6.0'
+- !ruby/object:Gem::Dependency
+  name: aws-sdk-glue
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+    - !ruby/object:Gem::Version
+      version: '1.0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+    - !ruby/object:Gem::Version
+      version: '1.0'
 - !ruby/object:Gem::Dependency
   name: aws-sdk-s3
   requirement: !ruby/object:Gem::Requirement
@@ -87,6 +101,7 @@ files:
 - lib/data_drain/engine.rb
 - lib/data_drain/errors.rb
 - lib/data_drain/file_ingestor.rb
+- lib/data_drain/glue_runner.rb
 - lib/data_drain/record.rb
 - lib/data_drain/storage.rb
 - lib/data_drain/storage/base.rb