pyspark-connectby 1.0.9__tar.gz → 1.0.10__tar.gz
This diff compares publicly available package versions as they were released to their respective public registries; it is provided for informational purposes only.
Potentially problematic release: this version of pyspark-connectby might be problematic.
- {pyspark_connectby-1.0.9 → pyspark_connectby-1.0.10}/PKG-INFO +15 -13
- {pyspark_connectby-1.0.9 → pyspark_connectby-1.0.10}/README.md +14 -12
- {pyspark_connectby-1.0.9 → pyspark_connectby-1.0.10}/pyproject.toml +1 -1
- {pyspark_connectby-1.0.9 → pyspark_connectby-1.0.10}/pyspark_connectby/__init__.py +0 -0
- {pyspark_connectby-1.0.9 → pyspark_connectby-1.0.10}/pyspark_connectby/connectby_query.py +0 -0
- {pyspark_connectby-1.0.9 → pyspark_connectby-1.0.10}/pyspark_connectby/dataframe_connectby.py +0 -0
PKG-INFO:

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: pyspark-connectby
-Version: 1.0.9
+Version: 1.0.10
 Summary: connectby hierarchy query in spark
 Author: Chen, Yu
 Author-email: cheny@fcc.ca
@@ -20,7 +20,7 @@ Spark currently does not support hierarchy query `connectBy` as of version 3.5.0
 This is an attempt to add `connectBy` method to [DataFrame](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.html)
 
 # Concept
-Hierarchy query is one of the important feature that many relational databases, such as Oracle, DB2, My SQL,
+Hierarchy query is one of the important feature that many relational databases, such as [Oracle](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/Hierarchical-Queries.html#GUID-0118DF1D-B9A9-41EB-8556-C6E7D6A5A84E), DB2, My SQL,
 Snowflake, [Redshift](https://docs.aws.amazon.com/redshift/latest/dg/r_CONNECT_BY_clause.html), etc.,
 would support directly or alternatively by using recursive CTE.
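The Concept hunk above notes that databases support hierarchy queries either directly or via a recursive CTE. As a minimal sketch of the recursive-CTE route (using SQLite through Python's sqlite3 with the employee table from the README's example; this is an illustration, not part of the package):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (emp_id TEXT, manager_id TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO emp VALUES (?, ?, ?)",
    [("1", None, "Carlos"), ("11", "1", "John"),
     ("111", "11", "Jorge"), ("112", "11", "Kwaku"), ("113", "11", "Liu")],
)

# Recursive CTE emulating CONNECT BY PRIOR emp_id = manager_id
# START WITH emp_id = '1', carrying a LEVEL counter down the tree.
rows = conn.execute("""
    WITH RECURSIVE hierarchy(emp_id, name, level) AS (
        SELECT emp_id, name, 1 FROM emp WHERE emp_id = '1'
        UNION ALL
        SELECT e.emp_id, e.name, h.level + 1
        FROM emp e JOIN hierarchy h ON e.manager_id = h.emp_id
    )
    SELECT emp_id, name, level FROM hierarchy ORDER BY level, emp_id
""").fetchall()
print(rows)
```

Note that the ORDER BY only makes this output deterministic; Oracle's CONNECT BY additionally guarantees a depth-first row order, which a plain recursive CTE does not.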
@@ -52,16 +52,20 @@ df2.show()
 ```
 With result:
 ```
-
-|emp_id|
-
-| 1| 1| null|Carlos|
-| 11| 2| 1| John|
-| 111| 3| 11| Jorge|
-| 112| 3| 11| Kwaku|
-| 113| 3| 11| Liu|
-
++------+----------+-----+-----------------+----------+------+
+|emp_id|START_WITH|LEVEL|CONNECT_BY_ISLEAF|manager_id|  name|
++------+----------+-----+-----------------+----------+------+
+|     1|         1|    1|            false|      null|Carlos|
+|    11|         1|    2|            false|         1|  John|
+|   111|         1|    3|             true|        11| Jorge|
+|   112|         1|    3|             true|        11| Kwaku|
+|   113|         1|    3|             true|        11|   Liu|
++------+----------+-----+-----------------+----------+------+
 ```
+Note the peseudocolumns:
+- START_WITH
+- LEVEL
+- CONNECT_BY_ISLEAF
 
 # Installation
 Python Version >= 3.7
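The pseudocolumns added in this hunk (START_WITH, LEVEL, CONNECT_BY_ISLEAF) follow Oracle-style CONNECT BY semantics. A minimal plain-Python sketch of those semantics, as a breadth-first walk over the README's example hierarchy (an illustration only, not the package's Spark implementation):

```python
from collections import deque

# Employee rows from the README example: (emp_id, manager_id, name).
rows = [
    ("1", None, "Carlos"),
    ("11", "1", "John"),
    ("111", "11", "Jorge"),
    ("112", "11", "Kwaku"),
    ("113", "11", "Liu"),
]

def connect_by(rows, start_with):
    """Emit (emp_id, START_WITH, LEVEL, CONNECT_BY_ISLEAF) tuples, breadth-first."""
    children = {}
    for emp_id, manager_id, _name in rows:
        children.setdefault(manager_id, []).append(emp_id)
    out = []
    for root in start_with:
        queue = deque([(root, 1)])  # LEVEL starts at 1 for each root
        while queue:
            node, level = queue.popleft()
            kids = children.get(node, [])
            # CONNECT_BY_ISLEAF is true when the node has no children.
            out.append((node, root, level, not kids))
            queue.extend((kid, level + 1) for kid in kids)
    return out

for row in connect_by(rows, start_with=["1"]):
    print(row)
```

Running this reproduces the START_WITH, LEVEL, and CONNECT_BY_ISLEAF values shown in the table for start_with='1'.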
@@ -82,8 +86,6 @@ df.transform(connectBy, prior='emp_id', to='manager_id', start_with='1') # or b
 
 df.connectBy(prior='emp_id', to='manager_id') # without start_with, it will go through each node
 
-df.connectBy(prior='emp_id', to='manager_id', level_col='the_level') # level column name other than `level`
-
 df.connectBy(prior='emp_id', to='manager_id', start_with=['1', '2']) # start_with a list of top nodes ids.
 
 ```
README.md:

@@ -4,7 +4,7 @@ Spark currently does not support hierarchy query `connectBy` as of version 3.5.0
 This is an attempt to add `connectBy` method to [DataFrame](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.html)
 
 # Concept
-Hierarchy query is one of the important feature that many relational databases, such as Oracle, DB2, My SQL,
+Hierarchy query is one of the important feature that many relational databases, such as [Oracle](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/Hierarchical-Queries.html#GUID-0118DF1D-B9A9-41EB-8556-C6E7D6A5A84E), DB2, My SQL,
 Snowflake, [Redshift](https://docs.aws.amazon.com/redshift/latest/dg/r_CONNECT_BY_clause.html), etc.,
 would support directly or alternatively by using recursive CTE.
@@ -36,16 +36,20 @@ df2.show()
 ```
 With result:
 ```
-
-|emp_id|
-
-| 1| 1| null|Carlos|
-| 11| 2| 1| John|
-| 111| 3| 11| Jorge|
-| 112| 3| 11| Kwaku|
-| 113| 3| 11| Liu|
-
++------+----------+-----+-----------------+----------+------+
+|emp_id|START_WITH|LEVEL|CONNECT_BY_ISLEAF|manager_id|  name|
++------+----------+-----+-----------------+----------+------+
+|     1|         1|    1|            false|      null|Carlos|
+|    11|         1|    2|            false|         1|  John|
+|   111|         1|    3|             true|        11| Jorge|
+|   112|         1|    3|             true|        11| Kwaku|
+|   113|         1|    3|             true|        11|   Liu|
++------+----------+-----+-----------------+----------+------+
 ```
+Note the peseudocolumns:
+- START_WITH
+- LEVEL
+- CONNECT_BY_ISLEAF
 
 # Installation
 Python Version >= 3.7
@@ -66,8 +70,6 @@ df.transform(connectBy, prior='emp_id', to='manager_id', start_with='1') # or b
 
 df.connectBy(prior='emp_id', to='manager_id') # without start_with, it will go through each node
 
-df.connectBy(prior='emp_id', to='manager_id', level_col='the_level') # level column name other than `level`
-
 df.connectBy(prior='emp_id', to='manager_id', start_with=['1', '2']) # start_with a list of top nodes ids.
 
 ```
{pyspark_connectby-1.0.9 → pyspark_connectby-1.0.10}/pyspark_connectby/__init__.py
RENAMED: file without changes

{pyspark_connectby-1.0.9 → pyspark_connectby-1.0.10}/pyspark_connectby/connectby_query.py
RENAMED: file without changes

{pyspark_connectby-1.0.9 → pyspark_connectby-1.0.10}/pyspark_connectby/dataframe_connectby.py
RENAMED: file without changes