databricks-sqlalchemy 0.0.1b1__py3-none-any.whl → 1.0.0__py3-none-any.whl

This diff shows the content of publicly available package versions as released to their public registries, and is provided for informational purposes only.
Files changed (34)
  1. CHANGELOG.md +274 -0
  2. databricks/sqlalchemy/__init__.py +4 -2
  3. databricks/sqlalchemy/_ddl.py +100 -0
  4. databricks/sqlalchemy/_parse.py +385 -0
  5. databricks/sqlalchemy/_types.py +323 -0
  6. databricks/sqlalchemy/base.py +436 -0
  7. databricks/sqlalchemy/dependency_test/test_dependency.py +22 -0
  8. databricks/sqlalchemy/py.typed +0 -0
  9. databricks/sqlalchemy/pytest.ini +4 -0
  10. databricks/sqlalchemy/requirements.py +249 -0
  11. databricks/sqlalchemy/setup.cfg +4 -0
  12. databricks/sqlalchemy/test/_extra.py +70 -0
  13. databricks/sqlalchemy/test/_future.py +331 -0
  14. databricks/sqlalchemy/test/_regression.py +311 -0
  15. databricks/sqlalchemy/test/_unsupported.py +450 -0
  16. databricks/sqlalchemy/test/conftest.py +13 -0
  17. databricks/sqlalchemy/test/overrides/_componentreflectiontest.py +189 -0
  18. databricks/sqlalchemy/test/overrides/_ctetest.py +33 -0
  19. databricks/sqlalchemy/test/test_suite.py +13 -0
  20. databricks/sqlalchemy/test_local/__init__.py +5 -0
  21. databricks/sqlalchemy/test_local/conftest.py +44 -0
  22. databricks/sqlalchemy/test_local/e2e/MOCK_DATA.xlsx +0 -0
  23. databricks/sqlalchemy/test_local/e2e/test_basic.py +543 -0
  24. databricks/sqlalchemy/test_local/test_ddl.py +96 -0
  25. databricks/sqlalchemy/test_local/test_parsing.py +160 -0
  26. databricks/sqlalchemy/test_local/test_types.py +161 -0
  27. databricks_sqlalchemy-1.0.0.dist-info/LICENSE +201 -0
  28. databricks_sqlalchemy-1.0.0.dist-info/METADATA +225 -0
  29. databricks_sqlalchemy-1.0.0.dist-info/RECORD +31 -0
  30. {databricks_sqlalchemy-0.0.1b1.dist-info → databricks_sqlalchemy-1.0.0.dist-info}/WHEEL +1 -1
  31. databricks_sqlalchemy-1.0.0.dist-info/entry_points.txt +3 -0
  32. databricks/__init__.py +0 -7
  33. databricks_sqlalchemy-0.0.1b1.dist-info/METADATA +0 -19
  34. databricks_sqlalchemy-0.0.1b1.dist-info/RECORD +0 -5
databricks_sqlalchemy-1.0.0.dist-info/METADATA ADDED
@@ -0,0 +1,225 @@
+ Metadata-Version: 2.1
+ Name: databricks-sqlalchemy
+ Version: 1.0.0
+ Summary: Databricks SQLAlchemy plugin for Python
+ License: Apache-2.0
+ Author: Databricks
+ Author-email: databricks-sql-connector-maintainers@databricks.com
+ Requires-Python: >=3.8.0,<4.0.0
+ Classifier: License :: OSI Approved :: Apache Software License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.8
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Requires-Dist: databricks_sql_connector_core (>=4.0.0)
+ Requires-Dist: sqlalchemy (>=2.0.21)
+ Project-URL: Bug Tracker, https://github.com/databricks/databricks-sqlalchemy/issues
+ Project-URL: Homepage, https://github.com/databricks/databricks-sqlalchemy
+ Description-Content-Type: text/markdown
+
+ ## Databricks dialect for SQLAlchemy 2.0
+
+ The Databricks dialect for SQLAlchemy serves as a bridge between [SQLAlchemy](https://www.sqlalchemy.org/) and the Databricks SQL Python driver. A working example demonstrating usage can be found in `examples/sqlalchemy.py`.
+
+ ## Usage with SQLAlchemy < 2.0
+ A SQLAlchemy 1.4 compatible dialect was first released in connector [version 2.4](https://github.com/databricks/databricks-sql-python/releases/tag/v2.4.0). Support for SQLAlchemy 1.4 was dropped from the dialect as part of `databricks-sql-connector==3.0.0`. To continue using the dialect with SQLAlchemy 1.x, pin the connector to `databricks-sql-connector^2.4.0`.
+
+
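+ The `^2.4.0` constraint above uses Poetry syntax; with pip, the equivalent pin would be:
+
+ ```shell
+ pip install "databricks-sql-connector>=2.4.0,<3.0.0"
+ ```
+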
+ ## Installation
+
+ To install the dialect and its dependencies:
+
+ ```shell
+ pip install databricks-sqlalchemy
+ ```
+
+ If you also plan to use `alembic`, install it as well:
+
+ ```shell
+ pip install alembic
+ ```
+
+ ## Connection String
+
+ Every SQLAlchemy application that connects to a database needs to use an [Engine](https://docs.sqlalchemy.org/en/20/tutorial/engine.html#tutorial-engine), which you can create by passing a connection string to `create_engine`. The connection string must include these components:
+
+ 1. Host
+ 2. HTTP Path for a compute resource
+ 3. API access token
+ 4. Initial catalog for the connection
+ 5. Initial schema for the connection
+
+ **Note: Our dialect is built and tested on workspaces with Unity Catalog enabled. Support for the `hive_metastore` catalog is untested.**
+
+ For example:
+
+ ```python
+ import os
+ from sqlalchemy import create_engine
+
+ host = os.getenv("DATABRICKS_SERVER_HOSTNAME")
+ http_path = os.getenv("DATABRICKS_HTTP_PATH")
+ access_token = os.getenv("DATABRICKS_TOKEN")
+ catalog = os.getenv("DATABRICKS_CATALOG")
+ schema = os.getenv("DATABRICKS_SCHEMA")
+
+ engine = create_engine(
+     f"databricks://token:{access_token}@{host}?http_path={http_path}&catalog={catalog}&schema={schema}"
+ )
+ ```
+
+ ## Types
+
+ The [SQLAlchemy type hierarchy](https://docs.sqlalchemy.org/en/20/core/type_basics.html) contains backend-agnostic type implementations (represented in CamelCase) and backend-specific types (represented in UPPERCASE). The majority of SQLAlchemy's [CamelCase](https://docs.sqlalchemy.org/en/20/core/type_basics.html#the-camelcase-datatypes) types are supported. This means that a SQLAlchemy application using these types should "just work" with Databricks.
+
+ |SQLAlchemy Type|Databricks SQL Type|
+ |-|-|
+ |[`BigInteger`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.BigInteger)|[`BIGINT`](https://docs.databricks.com/en/sql/language-manual/data-types/bigint-type.html)|
+ |[`LargeBinary`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.LargeBinary)|(not supported)|
+ |[`Boolean`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Boolean)|[`BOOLEAN`](https://docs.databricks.com/en/sql/language-manual/data-types/boolean-type.html)|
+ |[`Date`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Date)|[`DATE`](https://docs.databricks.com/en/sql/language-manual/data-types/date-type.html)|
+ |[`DateTime`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.DateTime)|[`TIMESTAMP_NTZ`](https://docs.databricks.com/en/sql/language-manual/data-types/timestamp-ntz-type.html)|
+ |[`Double`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Double)|[`DOUBLE`](https://docs.databricks.com/en/sql/language-manual/data-types/double-type.html)|
+ |[`Enum`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Enum)|(not supported)|
+ |[`Float`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Float)|[`FLOAT`](https://docs.databricks.com/en/sql/language-manual/data-types/float-type.html)|
+ |[`Integer`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Integer)|[`INT`](https://docs.databricks.com/en/sql/language-manual/data-types/int-type.html)|
+ |[`Numeric`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Numeric)|[`DECIMAL`](https://docs.databricks.com/en/sql/language-manual/data-types/decimal-type.html)|
+ |[`PickleType`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.PickleType)|(not supported)|
+ |[`SmallInteger`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.SmallInteger)|[`SMALLINT`](https://docs.databricks.com/en/sql/language-manual/data-types/smallint-type.html)|
+ |[`String`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.String)|[`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
+ |[`Text`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Text)|[`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
+ |[`Time`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Time)|[`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
+ |[`Unicode`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Unicode)|[`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
+ |[`UnicodeText`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.UnicodeText)|[`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
+ |[`Uuid`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Uuid)|[`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
+
+ In addition, the dialect exposes three UPPERCASE SQLAlchemy types which are specific to Databricks, shown in the sketch after this list:
+
+ - [`databricks.sqlalchemy.TINYINT`](https://docs.databricks.com/en/sql/language-manual/data-types/tinyint-type.html)
+ - [`databricks.sqlalchemy.TIMESTAMP`](https://docs.databricks.com/en/sql/language-manual/data-types/timestamp-type.html)
+ - [`databricks.sqlalchemy.TIMESTAMP_NTZ`](https://docs.databricks.com/en/sql/language-manual/data-types/timestamp-ntz-type.html)
+
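+ For illustration, a minimal sketch of using these types in a model (the `Measurement` model and its column names are hypothetical; a declarative `Base` is assumed, as in the examples below):
+
+ ```python
+ from sqlalchemy import BigInteger, Column
+ from databricks.sqlalchemy import TINYINT, TIMESTAMP, TIMESTAMP_NTZ
+
+ class Measurement(Base):
+     __tablename__ = "measurements"
+
+     id = Column(BigInteger, primary_key=True)
+     sensor_level = Column(TINYINT)          # 1-byte signed integer
+     recorded_at = Column(TIMESTAMP)         # timezone-aware timestamp
+     recorded_local = Column(TIMESTAMP_NTZ)  # timezone-naive timestamp
+ ```
+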
+ ### `LargeBinary()` and `PickleType()`
+
+ Databricks Runtime doesn't currently support binding of binary values in SQL queries, which is a prerequisite for this functionality in SQLAlchemy.
+
+ ### `Enum()` and `CHECK` constraints
+
+ Support for `CHECK` constraints is not implemented in this dialect. Support is planned for a future release.
+
+ SQLAlchemy's `Enum()` type depends on `CHECK` constraints and is therefore not yet supported.
+
+ ### `DateTime()`, `TIMESTAMP_NTZ()`, and `TIMESTAMP()`
+
+ Databricks Runtime provides two datetime-like types: `TIMESTAMP`, which is always timezone-aware, and `TIMESTAMP_NTZ`, which is timezone-agnostic. Both types can be imported from `databricks.sqlalchemy` and used in your models.
+
+ The SQLAlchemy documentation indicates that `DateTime()` is not timezone-aware by default, so our dialect maps this type to `TIMESTAMP_NTZ()`. In practice, you should never need to use `TIMESTAMP_NTZ()` directly; just use `DateTime()`.
+
+ If you need your field to be timezone-aware, you can import `TIMESTAMP()` and use it instead.
+
+ _Note that the SQLAlchemy documentation suggests that you can declare a `DateTime()` with `timezone=True` on supported backends. However, if you do this with the Databricks dialect, the `timezone` argument will be ignored._
+
+ ```python
+ from sqlalchemy import Column, DateTime
+ from databricks.sqlalchemy import TIMESTAMP
+
+ class SomeModel(Base):
+     some_date_without_timezone = Column(DateTime)  # rendered as TIMESTAMP_NTZ
+     some_date_with_timezone = Column(TIMESTAMP)    # timezone-aware
+ ```
+
+ ### `String()`, `Text()`, `Unicode()`, and `UnicodeText()`
+
+ Databricks Runtime doesn't support length limitations for `STRING` fields. Therefore `String()`, `String(1)`, and `String(255)` will all produce identical DDL. Since `Text()`, `Unicode()`, and `UnicodeText()` all use the same underlying type in Databricks SQL, they will generate equivalent DDL.
+
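+ A minimal sketch of this behaviour (the model and column names are hypothetical; every column below is rendered as `STRING`):
+
+ ```python
+ from sqlalchemy import Column, String, Text, Unicode
+
+ class SomeModel(Base):
+     code = Column(String(1))    # length is ignored
+     name = Column(String(255))  # length is ignored
+     notes = Column(Text)
+     label = Column(Unicode)
+ ```
+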
+ ### `Time()`
+
+ Databricks Runtime doesn't have a native time-like data type. To implement this type in SQLAlchemy, our dialect stores SQLAlchemy `Time()` values in a `STRING` field. Unlike `DateTime` above, this type can optionally support timezone awareness (since the dialect is in complete control of the strings that we write to the Delta table).
+
+ ```python
+ from sqlalchemy import Column, Time
+
+ class SomeModel(Base):
+     time_tz = Column(Time(timezone=True))
+     time_ntz = Column(Time)
+ ```
+
+
+ # Usage Notes
+
+ ## `Identity()` and `autoincrement`
+
+ Identity and generated value support is currently limited in this dialect.
+
+ When defining models, SQLAlchemy columns can accept an [`autoincrement`](https://docs.sqlalchemy.org/en/20/core/metadata.html#sqlalchemy.schema.Column.params.autoincrement) argument. In our dialect, this argument is currently ignored. To create an auto-incrementing field in your model, pass an explicit [`Identity()`](https://docs.sqlalchemy.org/en/20/core/defaults.html#identity-ddl) instead.
+
+ Furthermore, in Databricks Runtime, only `BIGINT` fields can be configured to auto-increment, so in SQLAlchemy you must use the `BigInteger()` type.
+
+ ```python
+ from sqlalchemy import BigInteger, Column, Identity, String
+
+ class SomeModel(Base):
+     id = Column(BigInteger, Identity())
+     value = Column(String)
+ ```
+
+ When calling `Base.metadata.create_all()`, the executed DDL will include `GENERATED ALWAYS AS IDENTITY` for the `id` column. This is useful when using SQLAlchemy to generate tables. However, as of this writing, `Identity()` constructs are not captured when SQLAlchemy reflects a table's metadata (support for this is planned).
+
+ ## Parameters
+
+ `databricks-sql-connector` supports two approaches to parameterizing SQL queries: native and inline. Our SQLAlchemy 2.0 dialect always uses the native approach and is therefore limited to DBR 14.2 and above. If you are writing parameterized queries to be executed by SQLAlchemy, you must use the "named" paramstyle (`:param`). Read more about parameterization in `docs/parameters.md`.
+
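+ For illustration, a minimal sketch of the named paramstyle with SQLAlchemy's `text()` construct (the table and parameter names are hypothetical):
+
+ ```python
+ from sqlalchemy import text
+
+ # engine created as shown in the Connection String section above
+ with engine.connect() as conn:
+     result = conn.execute(
+         text("SELECT * FROM some_table WHERE id = :record_id"),
+         {"record_id": 42},
+     )
+ ```
+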
+ ## Usage with pandas
+
+ Use [`pandas.DataFrame.to_sql`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html) and [`pandas.read_sql`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_sql.html#pandas.read_sql) to write and read from Databricks SQL. These methods both accept a SQLAlchemy connection to interact with Databricks.
+
+ ### Read from Databricks SQL into pandas
+ ```python
+ from sqlalchemy import create_engine
+ import pandas as pd
+
+ engine = create_engine("databricks://token:dapi***@***.cloud.databricks.com?http_path=***&catalog=main&schema=test")
+ with engine.connect() as conn:
+     # This will read the contents of `main.test.some_table`
+     df = pd.read_sql("some_table", conn)
+ ```
+
+ ### Write to Databricks SQL from pandas
+
+ ```python
+ from sqlalchemy import create_engine
+ import pandas as pd
+
+ engine = create_engine("databricks://token:dapi***@***.cloud.databricks.com?http_path=***&catalog=main&schema=test")
+ squares = [(i, i * i) for i in range(100)]
+ df = pd.DataFrame(data=squares, columns=["x", "x_squared"])
+
+ with engine.connect() as conn:
+     # This will write the contents of `df` to `main.test.squares`
+     df.to_sql("squares", conn)
+ ```
+
+ ## [`PrimaryKey()`](https://docs.sqlalchemy.org/en/20/core/constraints.html#sqlalchemy.schema.PrimaryKeyConstraint) and [`ForeignKey()`](https://docs.sqlalchemy.org/en/20/core/constraints.html#defining-foreign-keys)
+
+ Unity Catalog workspaces in Databricks support PRIMARY KEY and FOREIGN KEY constraints. _Note that Databricks Runtime does not enforce the integrity of FOREIGN KEY constraints_. You can establish a primary key by setting `primary_key=True` when defining a column.
+
+ When building `ForeignKey` or `ForeignKeyConstraint` objects, you must specify a `name` for the constraint.
+
+ If your model definition requires a self-referential FOREIGN KEY constraint, you must include `use_alter=True` when defining the relationship.
+
+ ```python
+ from sqlalchemy import MetaData, Table, Column, ForeignKey, BigInteger, String
+
+ metadata_obj = MetaData()
+
+ users = Table(
+     "users",
+     metadata_obj,
+     Column("id", BigInteger, primary_key=True),
+     Column("name", String(), nullable=False),
+     Column("email", String()),
+     Column("manager_id", ForeignKey("users.id", name="fk_users_manager_id_x_users_id", use_alter=True)),
+ )
+ ```
+
databricks_sqlalchemy-1.0.0.dist-info/RECORD ADDED
@@ -0,0 +1,31 @@
+ CHANGELOG.md,sha256=gJvDHxp-mfm8d-ROK3Y_mT9COkgbyk-Fpzy-DZEtWbM,10660
+ databricks/sqlalchemy/__init__.py,sha256=Gk3XC5OCzq7LuxMVpxK3t4q0rkflXJ8uJRJh9uusMqc,185
+ databricks/sqlalchemy/_ddl.py,sha256=c0_GwfmnrFVr4-Ls14fmdGUUFyUok_GW4Uo45hLABFc,3983
+ databricks/sqlalchemy/_parse.py,sha256=C0Q0_87PknCibRjs3ewPL5dimwQqaW_vr4nMxMsS220,13048
+ databricks/sqlalchemy/_types.py,sha256=EqC_TWWY7mDw9EM2AVZnPrw5DD6G-vBV7wiwX4tcBcM,11753
+ databricks/sqlalchemy/base.py,sha256=KcjfHMH0NsceYE2NRxrePtf5T1uw9u8JHofRdbnAKS4,15619
+ databricks/sqlalchemy/dependency_test/test_dependency.py,sha256=oFv2oW0e0ScpiKbmXHwpIuYf7mWpj4BiVShiLvw2b2k,938
+ databricks/sqlalchemy/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ databricks/sqlalchemy/pytest.ini,sha256=ImutflUjkhByVNWCQ18Todj6XTvgJAQX_v7fD-gWHhU,106
+ databricks/sqlalchemy/requirements.py,sha256=OobunAEwZ9y2dvSQLOmdgJciVn9xGlY9NAFfszPCTU0,9018
+ databricks/sqlalchemy/setup.cfg,sha256=ImutflUjkhByVNWCQ18Todj6XTvgJAQX_v7fD-gWHhU,106
+ databricks/sqlalchemy/test/_extra.py,sha256=ZMbqkdw9_sTRrcmuOssZoaZjNsaM-L1Z8tlumOoipMg,1955
+ databricks/sqlalchemy/test/_future.py,sha256=7ZKdl2-hPVgkNUtq-mVS1DWsI5Y8N4fEnwxXfFnTqCU,12658
+ databricks/sqlalchemy/test/_regression.py,sha256=MI6Jlmnw-DYmyY-mHfrscNQ8l3UEDaPXC7J3R2uKI9o,5412
+ databricks/sqlalchemy/test/_unsupported.py,sha256=ORi3FvzjGDx3KK62KysJFaEI4zfAw3JdbpVbT5oCCYM,16061
+ databricks/sqlalchemy/test/conftest.py,sha256=wauk1PYW_epp5-CKA2HbcTk_Ke3i8XpCnHB7UJLIRoE,597
+ databricks/sqlalchemy/test/overrides/_componentreflectiontest.py,sha256=OAaFx_l3sHuUn322NuyzpBq1SquvHCyXIvk5NxDXNv8,7042
+ databricks/sqlalchemy/test/overrides/_ctetest.py,sha256=u4jSIMrZY2dCSvBRhk9RsiObx1GB3RoFuLRByC212VU,1026
+ databricks/sqlalchemy/test/test_suite.py,sha256=kQfqmoXROaMNi6RebaPKS6MFabzSU5Rz-YPo84CImIQ,492
+ databricks/sqlalchemy/test_local/__init__.py,sha256=gphvzZ0Cb4Kz7rPRHHULanKyyjKgFt7zmGGYvcuGxys,131
+ databricks/sqlalchemy/test_local/conftest.py,sha256=b6LThokKLJrCfe7207A6NvF2MYnGOmajwtVILCWj1qY,951
+ databricks/sqlalchemy/test_local/e2e/MOCK_DATA.xlsx,sha256=9zqXUDGzgS2yjPz8x0uFsJU6kQTqdVRKKfJrEBHTZuY,59837
+ databricks/sqlalchemy/test_local/e2e/test_basic.py,sha256=wLP28vz2H9wz0dS52_iXbRwu0Zoh0wTEN9MOj2xJiOQ,16749
+ databricks/sqlalchemy/test_local/test_ddl.py,sha256=L5V1NoW9dT-7BHcaB97FQOw9ZFvo0g2_FIPKqOzlECM,3198
+ databricks/sqlalchemy/test_local/test_parsing.py,sha256=pSTAnWyA44vDTEZ-_HnfwEr3QbA2Kmzn1yU5q1GqMts,5017
+ databricks/sqlalchemy/test_local/test_types.py,sha256=Uey-z4ypzD5ykClBQs7XNW9KArHPbZU2cAk3EYD9jS0,6749
+ databricks_sqlalchemy-1.0.0.dist-info/LICENSE,sha256=WgVm2VpfZ3CsUfPndD2NeCrEIcFA4UB-YnnW4ejxcbE,11346
+ databricks_sqlalchemy-1.0.0.dist-info/METADATA,sha256=ppHojT19L_kv6du0XR9zFdrXCSNvwIyxaBV3Ntt3tV4,13073
+ databricks_sqlalchemy-1.0.0.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
+ databricks_sqlalchemy-1.0.0.dist-info/entry_points.txt,sha256=AAjpsvZbVcoMAcWLIesoAT5FNZhBEcIhxdKknVua3jw,74
+ databricks_sqlalchemy-1.0.0.dist-info/RECORD,,
{databricks_sqlalchemy-0.0.1b1.dist-info → databricks_sqlalchemy-1.0.0.dist-info}/WHEEL RENAMED
@@ -1,4 +1,4 @@
  Wheel-Version: 1.0
- Generator: poetry-core 1.6.1
+ Generator: poetry-core 1.9.0
  Root-Is-Purelib: true
  Tag: py3-none-any
databricks_sqlalchemy-1.0.0.dist-info/entry_points.txt ADDED
@@ -0,0 +1,3 @@
+ [sqlalchemy.dialects]
+ databricks=databricks.sqlalchemy:DatabricksDialect
+
databricks/__init__.py DELETED
@@ -1,7 +0,0 @@
- # See: https://packaging.python.org/guides/packaging-namespace-packages/#pkgutil-style-namespace-packages
- #
- # This file must only contain the following line, or other packages in the databricks.* namespace
- # may not be importable. The contents of this file must be byte-for-byte equivalent across all packages.
- # If they are not, parallel package installation may lead to clobbered and invalid files.
- # Also see https://github.com/databricks/databricks-sdk-py/issues/343.
- __path__ = __import__("pkgutil").extend_path(__path__, __name__)
@@ -1,19 +0,0 @@
- Metadata-Version: 2.1
- Name: databricks-sqlalchemy
- Version: 0.0.1b1
- Summary: SQLAlchemy dialect for Databricks
- License: Apache-2.0
- Author: Databricks
- Author-email: databricks-sql-connector-maintainers@databricks.com
- Requires-Python: >=3.8.0,<4.0.0
- Classifier: License :: OSI Approved :: Apache Software License
- Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.8
- Classifier: Programming Language :: Python :: 3.9
- Classifier: Programming Language :: Python :: 3.10
- Classifier: Programming Language :: Python :: 3.11
- Description-Content-Type: text/markdown
-
- # SQLAlchemy Dialect for Databricks
-
- To install the SQLAlchemy dialect for Databricks, see [here](https://github.com/databricks/databricks-sql-python/blob/main/src/databricks/sqlalchemy/README.sqlalchemy.md).
@@ -1,5 +0,0 @@
- databricks/__init__.py,sha256=CF2MJcZFwbpn9TwQER8qnCDhkPooBGQNVkX4v7g6p3g,537
- databricks/sqlalchemy/__init__.py,sha256=xu_eBGmUyF5lQM8CjPetRCrVaBv_J9ZT2SxeeaL4GW4,38
- databricks_sqlalchemy-0.0.1b1.dist-info/METADATA,sha256=sVbclBRnTLzJ2YrokJLJMspqAfu3GwN_OYB4SKhh3NY,810
- databricks_sqlalchemy-0.0.1b1.dist-info/WHEEL,sha256=Zb28QaM1gQi8f4VCBhsUklF61CTlNYfs9YAZn-TOGFk,88
- databricks_sqlalchemy-0.0.1b1.dist-info/RECORD,,