snowpark-checkpoints-collectors 0.1.0rc1__py3-none-any.whl → 0.1.0rc2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: snowpark-checkpoints-collectors
- Version: 0.1.0rc1
+ Version: 0.1.0rc2
  Summary: Snowpark column and table statistics collection
  Project-URL: Bug Tracker, https://github.com/snowflakedb/snowpark-checkpoints/issues
  Project-URL: Source code, https://github.com/snowflakedb/snowpark-checkpoints/
@@ -243,7 +243,7 @@ Requires-Dist: setuptools>=70.0.0; extra == 'development'
  Requires-Dist: twine==5.1.1; extra == 'development'
  Description-Content-Type: text/markdown
 
- # Data Collection from Spark Pipelines
+ # snowpark-checkpoints-collectors
 
  ---
  **NOTE**
@@ -252,25 +252,96 @@ This package is on Private Preview.
 
  ---
 
- The `snowpark-checkpoints-collector` package can collect
- schema and check information from a spark pipeline and
- record those results into a set of JSON files corresponding to different intermediate dataframes. These files can be inspected manually
- and handed over to teams implementing the snowpark pipeline. The `snowpark-checkpoints-collector` package is designed to have minimal
- dependencies and the generated files are meant to be inspected by security
- teams.
+ The **snowpark-checkpoints-collector** package offers a function for extracting information from PySpark dataframes. That data can then be used to validate the converted Snowpark dataframes and ensure that behavioral equivalence has been achieved.
+ ## Features
 
- On the snowpark side the `snowpark-checkpoints` package can use these files to perform schema and data validation checks against snowpark dataframes at the same, intermediate logical "checkpoints".
+ - Schema inference collected data mode (Schema): This is the default mode. It leverages Pandera schema inference to obtain the metadata and checks that will be evaluated for the specified dataframe. This mode also collects custom data from the columns of the DataFrame based on their PySpark types.
+ - DataFrame collected data mode (DataFrame): This mode collects the data of the PySpark dataframe. In this case, the mechanism saves all data of the given dataframe in Parquet format. Using the default user Snowflake connection, it tries to upload the Parquet files to a Snowflake temporary stage and create a table based on the information in the stage. The name of the file and the table is the same as the checkpoint.
 
- ## collect_dataframe_schema
 
+
+ ## Functionalities
+
+ ### Collect DataFrame Checkpoint
+
+
+
+ ```python
+ from pyspark.sql import DataFrame as SparkDataFrame
+ from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode
+ from typing import Optional
+
+ # Signature of the function
+ def collect_dataframe_checkpoint(
+     df: SparkDataFrame,
+     checkpoint_name: str,
+     sample: Optional[float] = None,
+     mode: Optional[CheckpointMode] = None,
+     output_path: Optional[str] = None,
+ ) -> None:
+     ...
+ ```
+
+ - `df`: The input Spark dataframe to collect.
+ - `checkpoint_name`: Name of the checkpoint schema file or dataframe.
+ - `sample`: Fraction of the DataFrame to sample for schema inference; defaults to 1.0.
+ - `mode`: The mode used to execute the collection (Schema or DataFrame); defaults to `CheckpointMode.SCHEMA`.
+ - `output_path`: The output path to save the checkpoint; defaults to the current working directory (see the sketch below).
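The usage examples below keep `sample` at 1.0 and do not pass `output_path`, so here is a minimal sketch combining a smaller sample fraction with an explicit output directory; the dataframe, checkpoint name, and directory are illustrative assumptions, not values from the package documentation:

```python
from pyspark.sql import SparkSession
from snowflake.snowpark_checkpoints_collector import collect_dataframe_checkpoint
from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode

spark_session = SparkSession.builder.getOrCreate()

# Any PySpark dataframe works; a generated one stands in for a large dataframe here.
pyspark_df = spark_session.range(1_000_000).withColumnRenamed("id", "ID")

collect_dataframe_checkpoint(
    pyspark_df,
    checkpoint_name="collect_checkpoint_sampled",  # illustrative name
    sample=0.1,                                    # infer the schema from a 10% sample
    mode=CheckpointMode.SCHEMA,
    output_path="./checkpoints",                   # write here instead of the current working directory
)
```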
+
+
+ ## Usage Example
+
+ ### Schema mode
+
+ ```python
+ from pyspark.sql import SparkSession
+ from snowflake.snowpark_checkpoints_collector import collect_dataframe_checkpoint
+ from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode
+
+ spark_session = SparkSession.builder.getOrCreate()
+ sample_size = 1.0
+
+ pyspark_df = spark_session.createDataFrame(
+     [("apple", 21), ("lemon", 34), ("banana", 50)], schema="fruit string, age integer"
+ )
+
+ collect_dataframe_checkpoint(
+     pyspark_df,
+     checkpoint_name="collect_checkpoint_mode_1",
+     sample=sample_size,
+     mode=CheckpointMode.SCHEMA,
+ )
  ```
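In Schema mode the collector writes the inferred schema and checks to a JSON file. The 0.1.0rc1 README named these files `snowpark-[checkpoint_name]-schema.json`; assuming that convention and the default output path still apply in this release, the result of the example above could be inspected like this:

```python
import json
from pathlib import Path

# Assumed file name, based on the "snowpark-[checkpoint_name]-schema.json"
# convention from the 0.1.0rc1 README; adjust if this release writes a
# different file name or location.
checkpoint_file = Path("snowpark-collect_checkpoint_mode_1-schema.json")

if checkpoint_file.exists():
    checkpoint = json.loads(checkpoint_file.read_text())
    print(json.dumps(checkpoint, indent=2))
else:
    print(f"{checkpoint_file} not found; check the collector's output location.")
```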
- from snowflake.snowpark_checkpoints_collector import collect_dataframe_schema;
- collect_dataframe_schema(df:SparkDataFrame,
- checkpoint_name,
- sample=0.1)
+
+
+ ### Dataframe mode
+
+ ```python
+ from pyspark.sql import SparkSession
+ from snowflake.snowpark_checkpoints_collector import collect_dataframe_checkpoint
+ from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode
+ from pyspark.sql.types import StructType, StructField, ByteType, StringType, IntegerType
+
+ spark_schema = StructType(
+     [
+         StructField("BYTE", ByteType(), True),
+         StructField("STRING", StringType(), True),
+         StructField("INTEGER", IntegerType(), True)
+     ]
+ )
+
+ data = [(1, "apple", 21), (2, "lemon", 34), (3, "banana", 50)]
+
+ spark_session = SparkSession.builder.getOrCreate()
+ pyspark_df = spark_session.createDataFrame(data, schema=spark_schema).orderBy(
+     "INTEGER"
+ )
+
+ collect_dataframe_checkpoint(
+     pyspark_df,
+     checkpoint_name="collect_checkpoint_mode_2",
+     mode=CheckpointMode.DATAFRAME,
+ )
  ```
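Because DataFrame mode stages the data and creates a table named after the checkpoint, the collected rows can be checked from the Snowpark side. A minimal sketch, assuming a default Snowflake connection is already configured and that the table carries the checkpoint name:

```python
from snowflake.snowpark import Session

# Uses the default connection configuration; the table name mirrors the
# checkpoint name, as described for DataFrame mode above (the exact database,
# schema, and casing of the created table are assumptions).
session = Session.builder.getOrCreate()
session.table("collect_checkpoint_mode_2").show()
```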
 
- - df - the spark data frame to collect the schema from
- - checkpoint_name - the name of the "checkpoint". Generated JSON files
- will have the name "snowpark-[checkpoint_name]-schema.json"
- - sample - sample size of the spark data frame to use to generate the schema
+ ------
@@ -0,0 +1,4 @@
+ snowpark_checkpoints_collectors-0.1.0rc2.dist-info/METADATA,sha256=8Ep1fq-1C-mR2_uhK11XBnnIOJth9ZMvACaVDKof0nw,18424
+ snowpark_checkpoints_collectors-0.1.0rc2.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+ snowpark_checkpoints_collectors-0.1.0rc2.dist-info/licenses/LICENSE,sha256=pmjhbh6uVhV5MBXOlou_UZgFP7CYVQITkCCdvfcS5lY,11340
+ snowpark_checkpoints_collectors-0.1.0rc2.dist-info/RECORD,,
@@ -1,4 +0,0 @@
- snowpark_checkpoints_collectors-0.1.0rc1.dist-info/METADATA,sha256=0-TZAWO6EULBbXeqMNXqEVpb-q0KnZy87pJvTkjVM6g,16247
- snowpark_checkpoints_collectors-0.1.0rc1.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
- snowpark_checkpoints_collectors-0.1.0rc1.dist-info/licenses/LICENSE,sha256=pmjhbh6uVhV5MBXOlou_UZgFP7CYVQITkCCdvfcS5lY,11340
- snowpark_checkpoints_collectors-0.1.0rc1.dist-info/RECORD,,