data_drain 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: a9745b689c4134b2dd8ac402e1d8371b5e92d112a516f310d323b2eb3957a665
+   data.tar.gz: 3cb740933d8d7031446ef2f3c9c4564c0a27b1a93b7e210ca2e449f9213fd453
+ SHA512:
+   metadata.gz: c31dbd33cf14556384ca9abaa1fb056bfa52efa324cd34e829027babff82183d694418ea12f5d4d99fa43d459733c5466d75c510d4d61d900b49ca3561da84d4
+   data.tar.gz: 3a1968bdd650604d7699a3225ca79e9c5c174749927c77684284cf736f29d7b4b97a80e5570ba77ebb97f769a0c1e5282d64bb67b4eb586af600390d378fd0b6
data/.rspec ADDED
@@ -0,0 +1,3 @@
+ --format documentation
+ --color
+ --require spec_helper
data/.rubocop.yml ADDED
@@ -0,0 +1,8 @@
+ AllCops:
+   TargetRubyVersion: 3.2
+
+ Style/StringLiterals:
+   EnforcedStyle: double_quotes
+
+ Style/StringLiteralsInInterpolation:
+   EnforcedStyle: double_quotes
data/CHANGELOG.md ADDED
@@ -0,0 +1,5 @@
+ ## [Unreleased]
+
+ ## [0.1.0] - 2026-03-11
+
+ - Initial release
data/CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,132 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ We as members, contributors, and leaders pledge to make participation in our
+ community a harassment-free experience for everyone, regardless of age, body
+ size, visible or invisible disability, ethnicity, sex characteristics, gender
+ identity and expression, level of experience, education, socio-economic status,
+ nationality, personal appearance, race, caste, color, religion, or sexual
+ identity and orientation.
+
+ We pledge to act and interact in ways that contribute to an open, welcoming,
+ diverse, inclusive, and healthy community.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to a positive environment for our
+ community include:
+
+ * Demonstrating empathy and kindness toward other people
+ * Being respectful of differing opinions, viewpoints, and experiences
+ * Giving and gracefully accepting constructive feedback
+ * Accepting responsibility and apologizing to those affected by our mistakes,
+   and learning from the experience
+ * Focusing on what is best not just for us as individuals, but for the overall
+   community
+
+ Examples of unacceptable behavior include:
+
+ * The use of sexualized language or imagery, and sexual attention or advances of
+   any kind
+ * Trolling, insulting or derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or email address,
+   without their explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Enforcement Responsibilities
+
+ Community leaders are responsible for clarifying and enforcing our standards of
+ acceptable behavior and will take appropriate and fair corrective action in
+ response to any behavior that they deem inappropriate, threatening, offensive,
+ or harmful.
+
+ Community leaders have the right and responsibility to remove, edit, or reject
+ comments, commits, code, wiki edits, issues, and other contributions that are
+ not aligned to this Code of Conduct, and will communicate reasons for moderation
+ decisions when appropriate.
+
+ ## Scope
+
+ This Code of Conduct applies within all community spaces, and also applies when
+ an individual is officially representing the community in public spaces.
+ Examples of representing our community include using an official email address,
+ posting via an official social media account, or acting as an appointed
+ representative at an online or offline event.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported to the community leaders responsible for enforcement at
+ [INSERT CONTACT METHOD].
+ All complaints will be reviewed and investigated promptly and fairly.
+
+ All community leaders are obligated to respect the privacy and security of the
+ reporter of any incident.
+
+ ## Enforcement Guidelines
+
+ Community leaders will follow these Community Impact Guidelines in determining
+ the consequences for any action they deem in violation of this Code of Conduct:
+
+ ### 1. Correction
+
+ **Community Impact**: Use of inappropriate language or other behavior deemed
+ unprofessional or unwelcome in the community.
+
+ **Consequence**: A private, written warning from community leaders, providing
+ clarity around the nature of the violation and an explanation of why the
+ behavior was inappropriate. A public apology may be requested.
+
+ ### 2. Warning
+
+ **Community Impact**: A violation through a single incident or series of
+ actions.
+
+ **Consequence**: A warning with consequences for continued behavior. No
+ interaction with the people involved, including unsolicited interaction with
+ those enforcing the Code of Conduct, for a specified period of time. This
+ includes avoiding interactions in community spaces as well as external channels
+ like social media. Violating these terms may lead to a temporary or permanent
+ ban.
+
+ ### 3. Temporary Ban
+
+ **Community Impact**: A serious violation of community standards, including
+ sustained inappropriate behavior.
+
+ **Consequence**: A temporary ban from any sort of interaction or public
+ communication with the community for a specified period of time. No public or
+ private interaction with the people involved, including unsolicited interaction
+ with those enforcing the Code of Conduct, is allowed during this period.
+ Violating these terms may lead to a permanent ban.
+
+ ### 4. Permanent Ban
+
+ **Community Impact**: Demonstrating a pattern of violation of community
+ standards, including sustained inappropriate behavior, harassment of an
+ individual, or aggression toward or disparagement of classes of individuals.
+
+ **Consequence**: A permanent ban from any sort of public interaction within the
+ community.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+ version 2.1, available at
+ [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
+
+ Community Impact Guidelines were inspired by
+ [Mozilla's code of conduct enforcement ladder][Mozilla CoC].
+
+ For answers to common questions about this code of conduct, see the FAQ at
+ [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
+ [https://www.contributor-covenant.org/translations][translations].
+
+ [homepage]: https://www.contributor-covenant.org
+ [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
+ [Mozilla CoC]: https://github.com/mozilla/diversity
+ [FAQ]: https://www.contributor-covenant.org/faq
+ [translations]: https://www.contributor-covenant.org/translations
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2026 Gabriel
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,223 @@
+ # DataDrain 🚰
+
+ DataDrain is an enterprise-grade micro-framework designed to extract, archive, and purge historical data from transactional PostgreSQL databases, and to **ingest raw files (CSV, JSON, Parquet)**, into an analytical Data Lake.
+
+ It uses an in-memory **DuckDB** instance to achieve very fast processing and compression. It guarantees safe data retention through strict integrity checks before purging the source databases, and automates converting and uploading heavy files to the cloud.
+
+ ## Key Features
+
+ * **High-Performance ETL:** Transfers millions of records from Postgres to Parquet through DuckDB, without loading the objects into Ruby's RAM.
+ * **File Ingestion:** Converts massive raw files (e.g. Netflow logs in CSV) to Parquet (ZSTD) and uploads them straight to S3.
+ * **Hive Partitioning:** Automatically organizes files into query-optimized folders (`year=X/month=Y/tenant_id=Z`).
+ * **Storage Adapters:** Native, transparent support for local-disk and AWS S3 storage.
+ * **Guaranteed Integrity:** Verifies that the exported data matches the source exactly before running `DELETE` statements.
+ * **Built-in Analytical ORM:** Ships an `ActiveModel`-compatible base class (`DataDrain::Record`) to query and destroy historical partitions idiomatically.
+
+ ## Installation
+
+ Add this line to your application's or microservice's `Gemfile`:
+
+ ```ruby
+ gem 'data_drain', git: 'https://github.com/tu-organizacion/data_drain.git', branch: 'main'
+ ```
+
+ And run:
+ ```bash
+ $ bundle install
+ ```
+
+ ## Configuration
+
+ Create an initializer in your application (e.g. `config/initializers/data_drain.rb`) to configure the engine's credentials and behavior:
+
+ ```ruby
+ DataDrain.configure do |config|
+   # Storage (:local or :s3)
+   config.storage_mode = ENV.fetch('STORAGE_MODE', 'local').to_sym
+
+   # AWS S3 (required only when storage_mode is :s3)
+   # config.aws_region = ENV['AWS_REGION']
+   # config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
+   # config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
+
+   # Source PostgreSQL database (required only for DataDrain::Engine)
+   config.db_host = ENV.fetch('DB_HOST', '127.0.0.1')
+   config.db_port = ENV.fetch('DB_PORT', '5432')
+   config.db_user = ENV.fetch('DB_USER', 'postgres')
+   config.db_pass = ENV.fetch('DB_PASS', '')
+   config.db_name = ENV.fetch('DB_NAME', 'core_production')
+
+   # Performance and tuning
+   config.batch_size = 5000 # Records deleted per transaction
+   config.throttle_delay = 0.5 # Seconds to pause between delete batches
+   config.logger = Rails.logger
+ end
+ ```
+
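+ With the initializer above and no ENV overrides, the engine turns the `db_*` values into the connection string it hands to DuckDB's `postgres_scan` (a quick, illustrative console check):
+
+ ```ruby
+ DataDrain.configuration.duckdb_connection_string
+ # => "host=127.0.0.1 port=5432 dbname=core_production user=postgres password="
+ ```
+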
+ ## Usage
+
+ The framework provides three main tools: the **File Ingestor**, the **Database Drain Engine**, and the **Analytical ORM**.
+
+ ### 1. Raw File Ingestion (FileIngestor)
+
+ Ideal for services that generate large volumes of data (e.g. Netflow metrics). It takes a local file, transforms it, compresses it to Parquet, and uploads it, partitioned, to S3.
+
+ ```ruby
+ # A file generated temporarily by your service
+ archivo_temporal = "/tmp/netflow_metrics_1600.csv"
+
+ ingestor = DataDrain::FileIngestor.new(
+   bucket: 'my-bucket-store',
+   source_path: archivo_temporal,
+   folder_name: 'netflow',
+   # Partition dynamically on columns extracted on the fly
+   partition_keys: %w[year month isp_id],
+   # SQL transformation executed by DuckDB while reading
+   select_sql: "*, EXTRACT(YEAR FROM timestamp) AS year, EXTRACT(MONTH FROM timestamp) AS month",
+   delete_after_upload: true # Removes the temporary file when done
+ )
+
+ ingestor.call
+ ```
+
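+ With those `partition_keys`, DuckDB writes Hive-style folders as described above. Purely as an illustrative sketch (a March 2026 row with `isp_id` 42; actual file names may differ), the lake ends up looking roughly like:
+
+ ```
+ my-bucket-store/netflow/year=2026/month=3/isp_id=42/data_0.parquet
+ ```
+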
+ ### 2. Database Extraction and Purge (Engine)
+
+ Ideal for building rolling retention windows (e.g. keep only 6 months of live data in Postgres and archive the rest).
+
+ ```ruby
+ # lib/tasks/archive.rake
+ task versions: :environment do
+   target_date = 6.months.ago.beginning_of_month
+
+   select_sql = <<~SQL
+     id, item_type, item_id, event, whodunnit,
+     object::VARCHAR AS object,
+     object_changes::VARCHAR AS object_changes,
+     created_at,
+     EXTRACT(YEAR FROM created_at)::INT AS year,
+     EXTRACT(MONTH FROM created_at)::INT AS month,
+     isp_id
+   SQL
+
+   engine = DataDrain::Engine.new(
+     bucket: 'my-bucket-store',
+     start_date: target_date.beginning_of_month,
+     end_date: target_date.end_of_month,
+     table_name: 'versions',
+     select_sql: select_sql,
+     partition_keys: %w[year month isp_id],
+     where_clause: "event = 'update'"
+   )
+
+   # Counts, exports to Parquet, verifies integrity, and purges Postgres.
+   engine.call
+ end
+ ```
+
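+ `engine.call` logs each stage as it runs. Assuming the integrity check passes, the output reads roughly as follows (dates and counts here are illustrative):
+
+ ```
+ [DataDrain Engine] 🚀 Preparing 'versions' (2025-09-01 to 2025-09-30)...
+ [DataDrain Engine] 📦 Exporting 120000 records to Parquet...
+ [DataDrain Engine] 📊 Verification -> Postgres: 120000 | Parquet: 120000
+ [DataDrain Engine] 🗑️ Purging the database (batches of 5000)...
+ [DataDrain Engine] ✅ Process completed successfully for 'versions'.
+ ```
+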
+ ### 3. Querying the Data Lake (Record)
+
+ To query the archived data without leaving Ruby, create a model that inherits from `DataDrain::Record`.
+
+ ```ruby
+ # app/models/archived_version.rb
+ class ArchivedVersion < DataDrain::Record
+   self.bucket = 'my-bucket-store'
+   self.folder_name = 'versions'
+   self.partition_keys = [:year, :month, :isp_id]
+
+   attribute :id, :string
+   attribute :item_type, :string
+   attribute :item_id, :string
+   attribute :event, :string
+   attribute :whodunnit, :string
+   attribute :created_at, :datetime
+
+   # Uses the :json type provided by the gem to hydrate Hashes
+   attribute :object, :json
+   attribute :object_changes, :json
+ end
+ ```
+
+ Queries stay fast thanks to Hive Partitioning:
+
+ ```ruby
+ # Very fast point lookup that isolates the relevant partitions
+ version = ArchivedVersion.find("some-uuid", year: 2026, month: 3, isp_id: 42)
+ puts version.object_changes # => {"status" => ["active", "suspended"]}
+
+ # Collections
+ history = ArchivedVersion.where(limit: 10, year: 2026, month: 3, isp_id: 42)
+ ```
+
+ ### 4. Data Destruction (Retention and Compliance)
+
+ The framework can physically delete entire folders on S3 or local disk using wildcards.
+
+ ```ruby
+ # Deletes one specific customer's entire history across all years
+ ArchivedVersion.destroy_all(isp_id: 42)
+
+ # Deletes all data for March 2024 globally
+ ArchivedVersion.destroy_all(year: 2024, month: 3)
+ ```
+
+ ## Architecture
+
+ DataDrain implements the **Storage Adapter** pattern, which fully isolates filesystem logic from the processing engines (see the sketch below).
+ * DuckDB keeps a persistent per-thread connection, so extensions and credentials are not reloaded on every query.
+ * The Analytical ORM sanitizes parameters to prevent SQL injection when querying Parquet files.
+
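+ The adapter contract lives in `DataDrain::Storage::Base` (see `lib/data_drain/storage/base.rb`). Purely as an illustrative sketch — the resolver in `storage.rb` only knows `:local` and `:s3`, so a custom mode would also require extending `DataDrain::Storage.adapter` — an adapter boils down to four methods:
+
+ ```ruby
+ require "fileutils"
+
+ # Hypothetical adapter, shown only to illustrate the Base interface.
+ class ScratchDiskAdapter < DataDrain::Storage::Base
+   # Local reads need no DuckDB extensions or credentials.
+   def setup_duckdb(connection); end
+
+   # Ensure the destination folder exists before COPY ... TO runs.
+   def prepare_export_path(bucket, folder_name)
+     FileUtils.mkdir_p(File.join(bucket, folder_name))
+   end
+
+   # Build the glob that DuckDB's read_parquet understands.
+   def build_path(bucket, folder_name, partition_path)
+     parts = [bucket, folder_name, partition_path].compact.reject(&:empty?)
+     "#{File.join(*parts)}/**/*.parquet"
+   end
+
+   # Report how many partition folders were physically removed.
+   def destroy_partitions(bucket, folder_name, partition_keys, partitions)
+     0
+   end
+ end
+ ```
+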
+ ## License
+
+ The gem is available as open source under the terms of the MIT License.
data/Rakefile ADDED
@@ -0,0 +1,12 @@
+ # frozen_string_literal: true
+
+ require "bundler/gem_tasks"
+ require "rspec/core/rake_task"
+
+ RSpec::Core::RakeTask.new(:spec)
+
+ require "rubocop/rake_task"
+
+ RuboCop::RakeTask.new
+
+ task default: %i[spec rubocop]
data/data_drain.gemspec ADDED
@@ -0,0 +1,32 @@
+ # frozen_string_literal: true
+
+ require_relative "lib/data_drain/version"
+
+ Gem::Specification.new do |spec|
+   spec.name = "data_drain"
+   spec.version = DataDrain::VERSION
+   spec.authors = ["Gabriel"]
+   spec.email = ["gab.edera@gmail.com"]
+
+   spec.summary = "Micro-framework for draining PostgreSQL data to Parquet via DuckDB."
+   spec.description = "Extracts transactional data, archives it to a Data Lake (S3/Local) " \
+                      "in Parquet format using Hive Partitioning, and safely purges the source."
+   spec.homepage = "https://github.com/gedera/data_drain"
+   spec.required_ruby_version = ">= 3.0.0"
+
+   spec.files = Dir.chdir(__dir__) do
+     `git ls-files -z`.split("\x0").reject do |f|
+       (File.expand_path(f) == __FILE__) ||
+         f.start_with?(*%w[bin/ test/ spec/ features/ .git .github appveyor Gemfile])
+     end
+   end
+   spec.bindir = "exe"
+   spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
+   spec.require_paths = ["lib"]
+
+   # 💡 Core runtime dependencies
+   spec.add_dependency "activemodel", ">= 6.0"
+   spec.add_dependency "aws-sdk-s3", "~> 1.114"
+   spec.add_dependency "duckdb", "~> 1.4"
+   spec.add_dependency "pg", ">= 1.2"
+ end
data/lib/data_drain/configuration.rb ADDED
@@ -0,0 +1,27 @@
+ # frozen_string_literal: true
+
+ require "logger"
+
+ module DataDrain
+   # Container for all of the DataDrain engine's configuration options.
+   class Configuration
+     attr_accessor :storage_mode, :aws_region,
+                   :aws_access_key_id, :aws_secret_access_key,
+                   :db_host, :db_port, :db_user, :db_pass, :db_name,
+                   :batch_size, :throttle_delay, :logger
+
+     def initialize
+       @storage_mode = :local
+       @db_host = "127.0.0.1"
+       @db_port = 5432
+       @batch_size = 5000
+       @throttle_delay = 0.5
+       @logger = Logger.new($stdout)
+     end
+
+     # @return [String] Connection string formatted for DuckDB's postgres scanner.
+     def duckdb_connection_string
+       "host=#{@db_host} port=#{@db_port} dbname=#{@db_name} user=#{@db_user} password=#{@db_pass}"
+     end
+   end
+ end
data/lib/data_drain/engine.rb ADDED
@@ -0,0 +1,177 @@
+ # frozen_string_literal: true
+
+ require "duckdb"
+ require "pg"
+
+ module DataDrain
+   # Main data extraction and purge engine (DataDrain).
+   #
+   # Orchestrates the ETL flow from PostgreSQL into an analytical Data Lake,
+   # delegating all storage interaction to the configured adapter.
+   class Engine
+     # Initializes a new instance of the extraction engine.
+     #
+     # @param options [Hash] Configuration dictionary for the extraction.
+     # @option options [Time, DateTime, Date] :start_date Start date/time.
+     # @option options [Time, DateTime, Date] :end_date End date/time.
+     # @option options [String] :table_name Name of the PostgreSQL table.
+     # @option options [String] :folder_name (Optional) Destination folder name.
+     # @option options [String] :select_sql (Optional) Custom SELECT statement.
+     # @option options [Array<String, Symbol>] :partition_keys Columns to partition by.
+     # @option options [String] :primary_key (Optional) Primary key used for deletion. Defaults to 'id'.
+     # @option options [String] :where_clause (Optional) Extra SQL condition.
+     def initialize(options)
+       @start_date = options.fetch(:start_date).beginning_of_day
+       @end_date = options.fetch(:end_date).end_of_day
+       @table_name = options.fetch(:table_name)
+       @folder_name = options.fetch(:folder_name, @table_name)
+       @select_sql = options.fetch(:select_sql, "*")
+       @partition_keys = options.fetch(:partition_keys)
+       @primary_key = options.fetch(:primary_key, "id")
+       @where_clause = options[:where_clause]
+       @bucket = options[:bucket]
+
+       @config = DataDrain.configuration
+       @logger = @config.logger
+       @adapter = DataDrain::Storage.adapter
+
+       database = DuckDB::Database.open(":memory:")
+       @duckdb = database.connect
+     end
+
+     # Runs the full engine flow: setup, count, export, verification, and purge.
+     #
+     # @return [Boolean] `true` if the process finished successfully, `false` if the integrity check failed.
+     def call
+       @logger.info "[DataDrain Engine] 🚀 Preparing '#{@table_name}' (#{@start_date.to_date} to #{@end_date.to_date})..."
+
+       setup_duckdb
+
+       @pg_count = get_postgres_count
+
+       if @pg_count.zero?
+         @logger.info "[DataDrain Engine] ⏭️ No records match the given conditions."
+         return true
+       end
+
+       @logger.info "[DataDrain Engine] 📦 Exporting #{@pg_count} records to Parquet..."
+       export_to_parquet
+
+       if verify_integrity
+         purge_from_postgres
+         @logger.info "[DataDrain Engine] ✅ Process completed successfully for '#{@table_name}'."
+         true
+       else
+         @logger.error "[DataDrain Engine] ❌ Integrity ERROR on '#{@table_name}'. Aborting purge."
+         false
+       end
+     end
+
+     private
+
+     # @api private
+     # @return [String]
+     def base_where_sql
+       sql = "created_at >= '#{@start_date.to_fs(:db)}' AND created_at <= '#{@end_date.to_fs(:db)}'"
+       sql += " AND #{@where_clause}" if @where_clause && !@where_clause.empty?
+       sql
+     end
+
+     # @api private
+     def setup_duckdb
+       @duckdb.query("INSTALL postgres; LOAD postgres;")
+       @duckdb.query("SET max_memory='2GB';")
+
+       # 💡 Adapter magic: it knows whether to load httpfs and set credentials, or do nothing at all.
+       @adapter.setup_duckdb(@duckdb)
+     end
+
+     # @api private
+     # @return [Integer]
+     def get_postgres_count
+       query = <<~SQL
+         SELECT COUNT(*)
+         FROM postgres_scan('#{@config.duckdb_connection_string}', 'public', '#{@table_name}')
+         WHERE #{base_where_sql}
+       SQL
+       @duckdb.query(query).first.first
+     end
+
+     # @api private
+     def export_to_parquet
+       # 💡 Adapter magic: local storage creates the folders, S3 does nothing.
+       @adapter.prepare_export_path(@bucket, @folder_name)
+
+       # Resolve the destination base path according to the adapter.
+       dest_path = @config.storage_mode.to_sym == :s3 ? "s3://#{@bucket}/#{@folder_name}/" : File.join(@bucket, @folder_name, "")
+
+       query = <<~SQL
+         COPY (
+           SELECT #{@select_sql}
+           FROM postgres_scan('#{@config.duckdb_connection_string}', 'public', '#{@table_name}')
+           WHERE #{base_where_sql}
+         ) TO '#{dest_path}'
+         (
+           FORMAT PARQUET,
+           PARTITION_BY (#{@partition_keys.join(', ')}),
+           COMPRESSION 'ZSTD',
+           OVERWRITE_OR_IGNORE 1
+         );
+       SQL
+       @duckdb.query(query)
+     end
+
+     # @api private
+     # @return [Boolean]
+     def verify_integrity
+       # 💡 Adapter magic: builds the global search path ('**/*.parquet').
+       archive_path = @adapter.build_path(@bucket, @folder_name, nil)
+
+       begin
+         query = <<~SQL
+           SELECT COUNT(*)
+           FROM read_parquet('#{archive_path}')
+           WHERE #{base_where_sql}
+         SQL
+         parquet_result = @duckdb.query(query).first.first
+       rescue DuckDB::Error => e
+         @logger.error "[DataDrain Engine] ❌ Error reading Parquet: #{e.message}"
+         return false
+       end
+
+       @logger.info "[DataDrain Engine] 📊 Verification -> Postgres: #{@pg_count} | Parquet: #{parquet_result}"
+       @pg_count == parquet_result
+     end
+
+     # @api private
+     def purge_from_postgres
+       @logger.info "[DataDrain Engine] 🗑️ Purging the database (batches of #{@config.batch_size})..."
+
+       conn = PG.connect(
+         host: @config.db_host,
+         port: @config.db_port,
+         user: @config.db_user,
+         password: @config.db_pass,
+         dbname: @config.db_name
+       )
+
+       loop do
+         sql = <<~SQL
+           DELETE FROM #{@table_name}
+           WHERE #{@primary_key} IN (
+             SELECT #{@primary_key} FROM #{@table_name}
+             WHERE #{base_where_sql}
+             LIMIT #{@config.batch_size}
+           )
+         SQL
+
+         result = conn.exec(sql)
+         break if result.cmd_tuples.zero?
+
+         sleep(@config.throttle_delay) if @config.throttle_delay.positive?
+       end
+     ensure
+       conn&.close
+     end
+   end
+ end
data/lib/data_drain/errors.rb ADDED
@@ -0,0 +1,15 @@
+ # frozen_string_literal: true
+
+ module DataDrain
+   # Base class for every error raised by the DataDrain framework.
+   class Error < StandardError; end
+
+   # Raised when mandatory configuration is missing.
+   class ConfigurationError < Error; end
+
+   # Raised when the count verification between Postgres and Parquet does not match.
+   class IntegrityError < Error; end
+
+   # Raised on problems interacting with DuckDB, the local disk, or AWS S3.
+   class StorageError < Error; end
+ end
data/lib/data_drain/file_ingestor.rb ADDED
@@ -0,0 +1,111 @@
+ # frozen_string_literal: true
+
+ module DataDrain
+   # Ingests local files (CSV, JSON, Parquet) generated by other services
+   # (e.g. Netflow) and uploads them to the Data Lake, applying ZSTD
+   # compression and Hive partitioning.
+   class FileIngestor
+     # @param options [Hash] Ingestion options.
+     # @option options [String] :source_path Absolute path to the local file.
+     # @option options [String] :folder_name Destination folder name in the Data Lake.
+     # @option options [Array<String, Symbol>] :partition_keys (Optional) Columns to partition by.
+     # @option options [String] :select_sql (Optional) SELECT statement to transform data on the fly.
+     # @option options [Boolean] :delete_after_upload (Optional) Deletes the local file when done. Defaults to true.
+     def initialize(options)
+       @source_path = options.fetch(:source_path)
+       @folder_name = options.fetch(:folder_name)
+       @partition_keys = options.fetch(:partition_keys, [])
+       @select_sql = options.fetch(:select_sql, "*")
+       @delete_after_upload = options.fetch(:delete_after_upload, true)
+       @bucket = options[:bucket]
+
+       @config = DataDrain.configuration
+       @logger = @config.logger
+       @adapter = DataDrain::Storage.adapter
+
+       database = DuckDB::Database.open(":memory:")
+       @duckdb = database.connect
+     end
+
+     # Runs the ingestion flow.
+     # @return [Boolean] true if the process succeeded.
+     def call
+       @logger.info "[DataDrain FileIngestor] 🚀 Starting ingestion of '#{@source_path}'..."
+
+       unless File.exist?(@source_path)
+         @logger.error "[DataDrain FileIngestor] ❌ Source file does not exist: #{@source_path}"
+         return false
+       end
+
+       @adapter.setup_duckdb(@duckdb)
+
+       # Pick the DuckDB reader function based on the file extension.
+       reader_function = determine_reader
+
+       # 1. Safety count
+       source_count = @duckdb.query("SELECT COUNT(*) FROM #{reader_function}").first.first
+       @logger.info "[DataDrain FileIngestor] 📊 Found #{source_count} records to process."
+
+       if source_count.zero?
+         cleanup_local_file
+         return true
+       end
+
+       # 2. Export / upload
+       @adapter.prepare_export_path(@bucket, @folder_name)
+       dest_path = @config.storage_mode.to_sym == :s3 ? "s3://#{@bucket}/#{@folder_name}/" : File.join(@bucket, @folder_name, "")
+
+       partition_clause = @partition_keys.any? ? "PARTITION_BY (#{@partition_keys.join(', ')})," : ""
+
+       query = <<~SQL
+         COPY (
+           SELECT #{@select_sql}
+           FROM #{reader_function}
+         ) TO '#{dest_path}'
+         (
+           FORMAT PARQUET,
+           #{partition_clause}
+           COMPRESSION 'ZSTD',
+           OVERWRITE_OR_IGNORE 1
+         );
+       SQL
+
+       @logger.info "[DataDrain FileIngestor] ☁️ Writing to the Data Lake..."
+       @duckdb.query(query)
+
+       @logger.info "[DataDrain FileIngestor] ✅ File ingested and compressed successfully."
+
+       cleanup_local_file
+       true
+     rescue DuckDB::Error => e
+       @logger.error "[DataDrain FileIngestor] ❌ DuckDB error during ingestion: #{e.message}"
+       false
+     ensure
+       @duckdb&.close
+     end
+
+     private
+
+     # @api private
+     def determine_reader
+       case File.extname(@source_path).downcase
+       when ".csv"
+         "read_csv_auto('#{@source_path}')"
+       when ".json"
+         "read_json_auto('#{@source_path}')"
+       when ".parquet"
+         "read_parquet('#{@source_path}')"
+       else
+         raise DataDrain::Error, "Unsupported file format for ingestion: #{@source_path}"
+       end
+     end
+
+     # @api private
+     def cleanup_local_file
+       if @delete_after_upload && File.exist?(@source_path)
+         File.delete(@source_path)
+         @logger.info "[DataDrain FileIngestor] 🗑️ Local temporary file deleted."
+       end
+     end
+   end
+ end
data/lib/data_drain/record.rb ADDED
@@ -0,0 +1,127 @@
+ # frozen_string_literal: true
+
+ require "active_model"
+ require "duckdb"
+
+ module DataDrain
+   # Base class acting as a read-and-purge ORM (Object-Relational Mapper)
+   # for interacting with the Parquet-based Data Lake through DuckDB.
+   #
+   # @abstract Subclass this model for each archived table.
+   # @example
+   #   class ArchivedVersion < DataDrain::Record
+   #     self.folder_name = 'versions'
+   #     self.partition_keys = [:year, :month, :isp_id]
+   #     attribute :event, :string
+   #   end
+   class Record
+     include ActiveModel::Model
+     include ActiveModel::Attributes
+
+     class_attribute :bucket
+     class_attribute :folder_name
+     class_attribute :partition_keys
+
+     # Returns the persistent in-memory DuckDB connection for the current thread.
+     # This avoids reloading extensions (such as httpfs) on every query.
+     #
+     # @return [DuckDB::Connection] Active DuckDB connection.
+     def self.connection
+       Thread.current[:data_drain_duckdb_conn] ||= begin
+         db = DuckDB::Database.open(":memory:")
+         conn = db.connect
+         DataDrain::Storage.adapter.setup_duckdb(conn)
+         conn
+       end
+     end
+
+     # Queries records in the Data Lake, filtering by partition keys.
+     #
+     # @param limit [Integer] Maximum number of records to return.
+     # @param partitions [Hash] Key-value pairs matching the partitions.
+     # @return [Array<DataDrain::Record>] Collection of instantiated records.
+     def self.where(limit: 50, **partitions)
+       path = build_query_path(partitions)
+
+       sql = <<~SQL
+         SELECT #{attribute_names.join(', ')}
+         FROM read_parquet('#{path}')
+         ORDER BY created_at DESC
+         LIMIT #{limit}
+       SQL
+
+       execute_and_instantiate(sql, attribute_names)
+     end
+
+     # Finds a specific record by its ID.
+     # Applies basic sanitization to prevent SQL injection.
+     #
+     # @param id [String, Integer] Unique identifier of the record.
+     # @param partitions [Hash] Key-value pairs of the partitions to search in.
+     # @return [DataDrain::Record, nil] The record found, or nil.
+     def self.find(id, **partitions)
+       path = build_query_path(partitions)
+       # Basic sanitization: double single quotes to neutralize SQL escapes.
+       safe_id = id.to_s.gsub("'", "''")
+
+       sql = <<~SQL
+         SELECT #{attribute_names.join(', ')}
+         FROM read_parquet('#{path}')
+         WHERE id = '#{safe_id}'
+         LIMIT 1
+       SQL
+
+       execute_and_instantiate(sql, attribute_names).first
+     end
+
+     # Physically deletes local directories or S3 prefixes.
+     #
+     # @param partitions [Hash] Partitions to delete.
+     # @return [Integer] Number of physical partitions deleted.
+     def self.destroy_all(**partitions)
+       adapter = DataDrain::Storage.adapter
+       DataDrain.configuration.logger.info "[DataDrain] 🗑️ Running destroy_all on #{folder_name} with: #{partitions.inspect}"
+
+       adapter.destroy_partitions(bucket, folder_name, partition_keys, partitions)
+     end
+
+     # @return [String] Human-readable console representation.
+     def inspect
+       inspection = attributes.map do |name, value|
+         "#{name}: #{value.nil? ? 'nil' : value.inspect}"
+       end.compact.join(", ")
+
+       "#<#{self.class} #{inspection}>"
+     end
+
+     class << self
+       private
+
+       # @api private
+       # @param partitions [Hash]
+       # @return [String]
+       def build_query_path(partitions)
+         partition_path = partitions.map { |k, v| "#{k}=#{v}" }.join("/")
+         DataDrain::Storage.adapter.build_path(bucket, folder_name, partition_path)
+       end
+
+       # @api private
+       # @param sql [String]
+       # @param columns [Array<String>]
+       # @return [Array<DataDrain::Record>]
+       def execute_and_instantiate(sql, columns)
+         begin
+           result = connection.query(sql)
+         rescue DuckDB::Error => e
+           DataDrain.configuration.logger.warn "[DataDrain] ⚠️ Path or file not found: #{e.message}"
+           return []
+         end
+
+         result.map do |row|
+           attributes_hash = columns.zip(row).to_h
+           new(attributes_hash)
+         end
+       end
+     end
+   end
+ end
data/lib/data_drain/storage/base.rb ADDED
@@ -0,0 +1,59 @@
+ # frozen_string_literal: true
+
+ module DataDrain
+   module Storage
+     # Abstract interface for DataDrain storage adapters.
+     # Defines the mandatory methods each provider (Local, S3, etc.)
+     # must implement to interact with DuckDB and the filesystem.
+     #
+     # @abstract
+     class Base
+       # @return [DataDrain::Configuration] Current framework configuration.
+       attr_reader :config
+
+       # Initializes the adapter with the supplied configuration.
+       #
+       # @param config [DataDrain::Configuration]
+       def initialize(config)
+         @config = config
+       end
+
+       # Sets up the required extensions and credentials on the DuckDB connection.
+       #
+       # @param connection [DuckDB::Connection] Active DuckDB connection.
+       # @raise [NotImplementedError] If the subclass does not implement it.
+       def setup_duckdb(connection)
+         raise NotImplementedError, "#{self.class} must implement #setup_duckdb"
+       end
+
+       # Prepares the destination directory before an export (e.g. creates folders).
+       #
+       # @param bucket [String] Bucket name, either local or on S3.
+       # @param folder_name [String] Main folder name for the table.
+       def prepare_export_path(bucket, folder_name)
+         # No-op by default. Subclasses may override it.
+       end
+
+       # Builds a read path compatible with DuckDB's `read_parquet` function.
+       #
+       # @param bucket [String] Bucket name, either local or on S3.
+       # @param folder_name [String] Table folder (e.g. 'versions').
+       # @param partition_path [String, nil] Partial partition path (e.g. 'year=2026/month=3').
+       # @return [String] Full path with wildcards (e.g. '.../**/*.parquet').
+       def build_path(bucket, folder_name, partition_path)
+         raise NotImplementedError, "#{self.class} must implement #build_path"
+       end
+
+       # Physically deletes the partitions matching the given criteria.
+       #
+       # @param bucket [String] Bucket name, either local or on S3.
+       # @param folder_name [String] Table folder.
+       # @param partition_keys [Array<Symbol>] Expected partition keys.
+       # @param partitions [Hash] Values of the partitions to delete (may contain nils).
+       # @return [Integer] Number of partitions or files deleted.
+       def destroy_partitions(bucket, folder_name, partition_keys, partitions)
+         raise NotImplementedError, "#{self.class} must implement #destroy_partitions"
+       end
+     end
+   end
+ end
data/lib/data_drain/storage/local.rb ADDED
@@ -0,0 +1,53 @@
+ # frozen_string_literal: true
+
+ require "fileutils"
+
+ module DataDrain
+   module Storage
+     # Storage adapter implementation for the local disk.
+     class Local < Base
+       # (DuckDB supports local files natively; no extra extensions are required.)
+       # @param connection [DuckDB::Connection]
+       def setup_duckdb(connection)
+         # No-op
+       end
+
+       # Creates the folder hierarchy on disk if it does not exist.
+       # @param bucket [String]
+       # @param folder_name [String]
+       def prepare_export_path(bucket, folder_name)
+         FileUtils.mkdir_p(File.join(bucket, folder_name))
+       end
+
+       # @param bucket [String]
+       # @param folder_name [String]
+       # @param partition_path [String, nil]
+       # @return [String]
+       def build_path(bucket, folder_name, partition_path)
+         base = File.join(bucket, folder_name)
+         base = File.join(base, partition_path) if partition_path && !partition_path.empty?
+         "#{base}/**/*.parquet"
+       end
+
+       # @param bucket [String]
+       # @param folder_name [String]
+       # @param partition_keys [Array<Symbol>]
+       # @param partitions [Hash]
+       # @return [Integer]
+       def destroy_partitions(bucket, folder_name, partition_keys, partitions)
+         path_parts = partition_keys.map do |key|
+           val = partitions[key]
+           val.nil? || val.to_s.empty? ? "#{key}=*" : "#{key}=#{val}"
+         end
+
+         pattern = File.join(bucket, folder_name, path_parts.join("/"))
+         folders_to_delete = Dir.glob(pattern)
+
+         return 0 if folders_to_delete.empty?
+
+         folders_to_delete.each { |folder| FileUtils.rm_rf(folder) }
+         folders_to_delete.size
+       end
+     end
+   end
+ end
data/lib/data_drain/storage/s3.rb ADDED
@@ -0,0 +1,76 @@
+ # frozen_string_literal: true
+
+ require "aws-sdk-s3"
+
+ module DataDrain
+   module Storage
+     # Storage adapter implementation for Amazon S3.
+     class S3 < Base
+       # Loads the httpfs extension into DuckDB and injects the AWS credentials.
+       # @param connection [DuckDB::Connection]
+       def setup_duckdb(connection)
+         connection.query("INSTALL httpfs; LOAD httpfs;")
+         connection.query("SET s3_region='#{@config.aws_region}';")
+         connection.query("SET s3_access_key_id='#{@config.aws_access_key_id}';")
+         connection.query("SET s3_secret_access_key='#{@config.aws_secret_access_key}';")
+       end
+
+       # @param bucket [String]
+       # @param folder_name [String]
+       # @param partition_path [String, nil]
+       # @return [String]
+       def build_path(bucket, folder_name, partition_path)
+         # On S3, the base path acts as the bucket name.
+         base = File.join(bucket, folder_name)
+         base = File.join(base, partition_path) if partition_path && !partition_path.empty?
+         "s3://#{base}/**/*.parquet"
+       end
+
+       # @param bucket [String]
+       # @param folder_name [String]
+       # @param partition_keys [Array<Symbol>]
+       # @param partitions [Hash]
+       # @return [Integer]
+       def destroy_partitions(bucket, folder_name, partition_keys, partitions)
+         client = Aws::S3::Client.new(
+           region: @config.aws_region,
+           access_key_id: @config.aws_access_key_id,
+           secret_access_key: @config.aws_secret_access_key
+         )
+
+         regex_parts = partition_keys.map do |key|
+           val = partitions[key]
+           val.nil? || val.to_s.empty? ? "#{key}=[^/]+" : "#{key}=#{val}"
+         end
+         pattern_regex = Regexp.new("^#{folder_name}/#{regex_parts.join('/')}")
+
+         objects_to_delete = []
+         prefix = "#{folder_name}/"
+         first_key = partition_keys.first
+         prefix += "#{first_key}=#{partitions[first_key]}/" if partitions[first_key]
+
+         client.list_objects_v2(bucket: bucket, prefix: prefix).each do |response|
+           response.contents.each do |obj|
+             objects_to_delete << { key: obj.key } if obj.key.match?(pattern_regex)
+           end
+         end
+
+         delete_in_batches(client, bucket, objects_to_delete)
+       end
+
+       private
+
+       # @api private
+       def delete_in_batches(client, bucket, objects_to_delete)
+         return 0 if objects_to_delete.empty?
+
+         deleted_count = 0
+         objects_to_delete.each_slice(1000) do |batch|
+           client.delete_objects(bucket: bucket, delete: { objects: batch, quiet: true })
+           deleted_count += batch.size
+         end
+         deleted_count
+       end
+     end
+   end
+ end
data/lib/data_drain/storage.rb ADDED
@@ -0,0 +1,30 @@
+ # frozen_string_literal: true
+
+ require_relative "storage/base"
+ require_relative "storage/local"
+ require_relative "storage/s3"
+
+ module DataDrain
+   # Namespace for the physical storage strategies.
+   module Storage
+     # Raised when attempting to use an unregistered storage mode.
+     class InvalidAdapterError < DataDrain::Error; end
+
+     # Resolves and instantiates the appropriate storage adapter
+     # based on the framework's current configuration.
+     #
+     # @return [DataDrain::Storage::Base] An instance of Local or S3.
+     # @raise [InvalidAdapterError] If the storage_mode is not valid.
+     def self.adapter
+       mode = DataDrain.configuration.storage_mode
+       case mode.to_sym
+       when :local
+         Local.new(DataDrain.configuration)
+       when :s3
+         S3.new(DataDrain.configuration)
+       else
+         raise InvalidAdapterError, "Storage mode '#{mode}' is not supported."
+       end
+     end
+   end
+ end
data/lib/data_drain/types/json_type.rb ADDED
@@ -0,0 +1,23 @@
+ # frozen_string_literal: true
+
+ require "json"
+
+ module DataDrain
+   module Types
+     # Custom ActiveModel type that handles converting JSON strings
+     # coming from DuckDB into Ruby Hashes.
+     class JsonType < ActiveModel::Type::Value
+       # @param value [String, Hash, Array, nil]
+       # @return [Hash, Array, String, nil]
+       def cast(value)
+         return value if value.is_a?(Hash) || value.is_a?(Array) || value.nil?
+
+         begin
+           JSON.parse(value.to_s)
+         rescue JSON::ParserError
+           value
+         end
+       end
+     end
+   end
+ end
data/lib/data_drain/version.rb ADDED
@@ -0,0 +1,5 @@
+ # frozen_string_literal: true
+
+ module DataDrain
+   VERSION = "0.1.0"
+ end
data/lib/data_drain.rb ADDED
@@ -0,0 +1,33 @@
+ # frozen_string_literal: true
+
+ require "active_model"
+ require_relative "data_drain/version"
+ require_relative "data_drain/errors"
+ require_relative "data_drain/configuration"
+ require_relative "data_drain/storage"
+ require_relative "data_drain/engine"
+ require_relative "data_drain/record"
+ require_relative "data_drain/file_ingestor"
+
+ # Register the gem's custom ActiveModel JSON type.
+ require_relative "data_drain/types/json_type"
+ ActiveModel::Type.register(:json, DataDrain::Types::JsonType)
+
+ module DataDrain
+   class << self
+     # @return [DataDrain::Configuration]
+     def configuration
+       @configuration ||= Configuration.new
+     end
+
+     # @yieldparam config [DataDrain::Configuration]
+     def configure
+       yield(configuration)
+     end
+
+     # @api private
+     def reset_configuration!
+       @configuration = Configuration.new
+     end
+   end
+ end
data/sig/data_drain.rbs ADDED
@@ -0,0 +1,4 @@
+ module DataDrain
+   VERSION: String
+   # See the RBS writing guide: https://github.com/ruby/rbs#guides
+ end
metadata ADDED
@@ -0,0 +1,120 @@
+ --- !ruby/object:Gem::Specification
+ name: data_drain
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Gabriel
+ autorequire:
+ bindir: exe
+ cert_chain: []
+ date: 2026-03-12 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: activemodel
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '6.0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '6.0'
+ - !ruby/object:Gem::Dependency
+   name: aws-sdk-s3
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.114'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.114'
+ - !ruby/object:Gem::Dependency
+   name: duckdb
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.4'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.4'
+ - !ruby/object:Gem::Dependency
+   name: pg
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '1.2'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '1.2'
+ description: Extracts transactional data, archives it to a Data Lake (S3/Local) in
+   Parquet format using Hive Partitioning, and safely purges the source.
+ email:
+ - gab.edera@gmail.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - ".rspec"
+ - ".rubocop.yml"
+ - CHANGELOG.md
+ - CODE_OF_CONDUCT.md
+ - LICENSE.txt
+ - README.md
+ - Rakefile
+ - data_drain.gemspec
+ - lib/data_drain.rb
+ - lib/data_drain/configuration.rb
+ - lib/data_drain/engine.rb
+ - lib/data_drain/errors.rb
+ - lib/data_drain/file_ingestor.rb
+ - lib/data_drain/record.rb
+ - lib/data_drain/storage.rb
+ - lib/data_drain/storage/base.rb
+ - lib/data_drain/storage/local.rb
+ - lib/data_drain/storage/s3.rb
+ - lib/data_drain/types/json_type.rb
+ - lib/data_drain/version.rb
+ - sig/data_drain.rbs
+ homepage: https://github.com/gedera/data_drain
+ licenses: []
+ metadata: {}
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: 3.0.0
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 3.4.19
+ signing_key:
+ specification_version: 4
+ summary: Micro-framework for draining PostgreSQL data to Parquet via DuckDB.
+ test_files: []