iomete-sqlalchemy 1.0.20rc3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,202 @@
+
+                                  Apache License
+                            Version 2.0, January 2004
+                         http://www.apache.org/licenses/
+
+    TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+    1. Definitions.
+
+       "License" shall mean the terms and conditions for use, reproduction,
+       and distribution as defined by Sections 1 through 9 of this document.
+
+       "Licensor" shall mean the copyright owner or entity authorized by
+       the copyright owner that is granting the License.
+
+       "Legal Entity" shall mean the union of the acting entity and all
+       other entities that control, are controlled by, or are under common
+       control with that entity. For the purposes of this definition,
+       "control" means (i) the power, direct or indirect, to cause the
+       direction or management of such entity, whether by contract or
+       otherwise, or (ii) ownership of fifty percent (50%) or more of the
+       outstanding shares, or (iii) beneficial ownership of such entity.
+
+       "You" (or "Your") shall mean an individual or Legal Entity
+       exercising permissions granted by this License.
+
+       "Source" form shall mean the preferred form for making modifications,
+       including but not limited to software source code, documentation
+       source, and configuration files.
+
+       "Object" form shall mean any form resulting from mechanical
+       transformation or translation of a Source form, including but
+       not limited to compiled object code, generated documentation,
+       and conversions to other media types.
+
+       "Work" shall mean the work of authorship, whether in Source or
+       Object form, made available under the License, as indicated by a
+       copyright notice that is included in or attached to the work
+       (an example is provided in the Appendix below).
+
+       "Derivative Works" shall mean any work, whether in Source or Object
+       form, that is based on (or derived from) the Work and for which the
+       editorial revisions, annotations, elaborations, or other modifications
+       represent, as a whole, an original work of authorship. For the purposes
+       of this License, Derivative Works shall not include works that remain
+       separable from, or merely link (or bind by name) to the interfaces of,
+       the Work and Derivative Works thereof.
+
+       "Contribution" shall mean any work of authorship, including
+       the original version of the Work and any modifications or additions
+       to that Work or Derivative Works thereof, that is intentionally
+       submitted to Licensor for inclusion in the Work by the copyright owner
+       or by an individual or Legal Entity authorized to submit on behalf of
+       the copyright owner. For the purposes of this definition, "submitted"
+       means any form of electronic, verbal, or written communication sent
+       to the Licensor or its representatives, including but not limited to
+       communication on electronic mailing lists, source code control systems,
+       and issue tracking systems that are managed by, or on behalf of, the
+       Licensor for the purpose of discussing and improving the Work, but
+       excluding communication that is conspicuously marked or otherwise
+       designated in writing by the copyright owner as "Not a Contribution."
+
+       "Contributor" shall mean Licensor and any individual or Legal Entity
+       on behalf of whom a Contribution has been received by Licensor and
+       subsequently incorporated within the Work.
+
+    2. Grant of Copyright License. Subject to the terms and conditions of
+       this License, each Contributor hereby grants to You a perpetual,
+       worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+       copyright license to reproduce, prepare Derivative Works of,
+       publicly display, publicly perform, sublicense, and distribute the
+       Work and such Derivative Works in Source or Object form.
+
+    3. Grant of Patent License. Subject to the terms and conditions of
+       this License, each Contributor hereby grants to You a perpetual,
+       worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+       (except as stated in this section) patent license to make, have made,
+       use, offer to sell, sell, import, and otherwise transfer the Work,
+       where such license applies only to those patent claims licensable
+       by such Contributor that are necessarily infringed by their
+       Contribution(s) alone or by combination of their Contribution(s)
+       with the Work to which such Contribution(s) was submitted. If You
+       institute patent litigation against any entity (including a
+       cross-claim or counterclaim in a lawsuit) alleging that the Work
+       or a Contribution incorporated within the Work constitutes direct
+       or contributory patent infringement, then any patent licenses
+       granted to You under this License for that Work shall terminate
+       as of the date such litigation is filed.
+
+    4. Redistribution. You may reproduce and distribute copies of the
+       Work or Derivative Works thereof in any medium, with or without
+       modifications, and in Source or Object form, provided that You
+       meet the following conditions:
+
+       (a) You must give any other recipients of the Work or
+           Derivative Works a copy of this License; and
+
+       (b) You must cause any modified files to carry prominent notices
+           stating that You changed the files; and
+
+       (c) You must retain, in the Source form of any Derivative Works
+           that You distribute, all copyright, patent, trademark, and
+           attribution notices from the Source form of the Work,
+           excluding those notices that do not pertain to any part of
+           the Derivative Works; and
+
+       (d) If the Work includes a "NOTICE" text file as part of its
+           distribution, then any Derivative Works that You distribute must
+           include a readable copy of the attribution notices contained
+           within such NOTICE file, excluding those notices that do not
+           pertain to any part of the Derivative Works, in at least one
+           of the following places: within a NOTICE text file distributed
+           as part of the Derivative Works; within the Source form or
+           documentation, if provided along with the Derivative Works; or,
+           within a display generated by the Derivative Works, if and
+           wherever such third-party notices normally appear. The contents
+           of the NOTICE file are for informational purposes only and
+           do not modify the License. You may add Your own attribution
+           notices within Derivative Works that You distribute, alongside
+           or as an addendum to the NOTICE text from the Work, provided
+           that such additional attribution notices cannot be construed
+           as modifying the License.
+
+       You may add Your own copyright statement to Your modifications and
+       may provide additional or different license terms and conditions
+       for use, reproduction, or distribution of Your modifications, or
+       for any such Derivative Works as a whole, provided Your use,
+       reproduction, and distribution of the Work otherwise complies with
+       the conditions stated in this License.
+
+    5. Submission of Contributions. Unless You explicitly state otherwise,
+       any Contribution intentionally submitted for inclusion in the Work
+       by You to the Licensor shall be under the terms and conditions of
+       this License, without any additional terms or conditions.
+       Notwithstanding the above, nothing herein shall supersede or modify
+       the terms of any separate license agreement you may have executed
+       with Licensor regarding such Contributions.
+
+    6. Trademarks. This License does not grant permission to use the trade
+       names, trademarks, service marks, or product names of the Licensor,
+       except as required for reasonable and customary use in describing the
+       origin of the Work and reproducing the content of the NOTICE file.
+
+    7. Disclaimer of Warranty. Unless required by applicable law or
+       agreed to in writing, Licensor provides the Work (and each
+       Contributor provides its Contributions) on an "AS IS" BASIS,
+       WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+       implied, including, without limitation, any warranties or conditions
+       of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+       PARTICULAR PURPOSE. You are solely responsible for determining the
+       appropriateness of using or redistributing the Work and assume any
+       risks associated with Your exercise of permissions under this License.
+
+    8. Limitation of Liability. In no event and under no legal theory,
+       whether in tort (including negligence), contract, or otherwise,
+       unless required by applicable law (such as deliberate and grossly
+       negligent acts) or agreed to in writing, shall any Contributor be
+       liable to You for damages, including any direct, indirect, special,
+       incidental, or consequential damages of any character arising as a
+       result of this License or out of the use or inability to use the
+       Work (including but not limited to damages for loss of goodwill,
+       work stoppage, computer failure or malfunction, or any and all
+       other commercial damages or losses), even if such Contributor
+       has been advised of the possibility of such damages.
+
+    9. Accepting Warranty or Additional Liability. While redistributing
+       the Work or Derivative Works thereof, You may choose to offer,
+       and charge a fee for, acceptance of support, warranty, indemnity,
+       or other liability obligations and/or rights consistent with this
+       License. However, in accepting such obligations, You may act only
+       on Your own behalf and on Your sole responsibility, not on behalf
+       of any other Contributor, and only if You agree to indemnify,
+       defend, and hold each Contributor harmless for any liability
+       incurred by, or claims asserted against, such Contributor by reason
+       of your accepting any such warranty or additional liability.
+
+    END OF TERMS AND CONDITIONS
+
+    APPENDIX: How to apply the Apache License to your work.
+
+       To apply the Apache License to your work, attach the following
+       boilerplate notice, with the fields enclosed by brackets "[]"
+       replaced with your own identifying information. (Don't include
+       the brackets!) The text should be enclosed in the appropriate
+       comment syntax for the file format. We also recommend that a
+       file or class name and description of purpose be included on the
+       same "printed page" as the copyright notice for easier
+       identification within third-party archives.
+
+    Copyright 2026 IOMETE Inc.
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
@@ -0,0 +1,120 @@
+ Metadata-Version: 2.4
+ Name: iomete-sqlalchemy
+ Version: 1.0.20rc3
+ Summary: SQLAlchemy dialect for IOMETE via Arrow Flight SQL
+ License: Apache-2.0
+ Requires-Python: >=3.9
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: sqlalchemy>=2.0.30
+ Requires-Dist: adbc-driver-flightsql>=1.1.0
+ Requires-Dist: pyarrow>=16.0
+ Provides-Extra: dev
+ Requires-Dist: pytest; extra == "dev"
+ Requires-Dist: pytest-cov; extra == "dev"
+ Requires-Dist: ruff; extra == "dev"
+ Requires-Dist: python-dotenv; extra == "dev"
+ Dynamic: license-file
+
+ # iomete-sqlalchemy
+
+ A SQLAlchemy dialect for [IOMETE](https://iomete.com) using Arrow Flight SQL (`adbc-driver-flightsql`).
+
+ All reflection methods are implemented, and the relational stubs (primary keys, foreign keys, indexes) return empty results instead of raising `NotImplementedError`, so the dialect works with schema inspection tools and data catalog integrations.
+
+ ---
+
+ ## Requirements
+
+ - `Python 3.9+`
+ - `sqlalchemy >= 2.0.30`
+ - `adbc-driver-flightsql >= 1.1.0`
+ - `pyarrow >= 16.0`
+
+ ---
+
+ ## Installation
+
+ ```bash
+ pip install iomete-sqlalchemy
+ ```
+
+ ---
+
+ ## Connection URL
+
+ ```
+ iomete+flightsql://<user>:<password>@<host>:<port>/<catalog>/<schema>
+     ?cluster=<cluster>
+     &data_plane=<data_plane>
+     [&tls=true]
+     [&max_msg_size=134217728]
+ ```
+
+ | Parameter | Description | Default |
+ |---|---|---|
+ | `host` | IOMETE host | — |
+ | `port` | gRPC port | `443` |
+ | `catalog` | Top-level catalog (e.g. `spark_catalog`) | — |
+ | `schema` | Schema / database inside the catalog | — |
+ | `cluster` | IOMETE compute cluster name | — |
+ | `data_plane` | IOMETE data plane name | — |
+ | `tls` | Use `grpc+tls` transport | `true` |
+ | `max_msg_size` | Max gRPC message size in bytes | `134217728` (128 MB) |
+
+ ### Example
+
+ ```python
+ from sqlalchemy import create_engine, inspect, text
+
+ HOST = "dev.iomete.cloud"
+ PORT = 443
+ USERNAME = "your-username"
+ PASSWORD = "your-password"
+ CATALOG = "spark_catalog"
+ SCHEMA = "default"
+ CLUSTER = "your-cluster"
+ DATA_PLANE = "your-data-plane"
+
+ engine = create_engine(
+     f"iomete+flightsql://{USERNAME}:{PASSWORD}@{HOST}:{PORT}"
+     f"/{CATALOG}/{SCHEMA}"
+     f"?cluster={CLUSTER}&data_plane={DATA_PLANE}"
+ )
+
+ # Check connection
+ with engine.connect() as conn:
+     print(conn.execute(text("SELECT 1")).scalar())  # → 1
+
+ # Reflect schema
+ inspector = inspect(engine)
+ print(inspector.get_schema_names())
+ print(inspector.get_table_names(schema=f"{CATALOG}.{SCHEMA}"))
+ ```
+
+ ---
+
+ ## Supported Type Mappings
+
+ | Spark / Iceberg type | SQLAlchemy type |
+ |---|---|
+ | `boolean` | `Boolean` |
+ | `tinyint`, `smallint` | `SmallInteger` |
+ | `int`, `integer` | `Integer` |
+ | `bigint` | `BigInteger` |
+ | `float`, `real` | `Float(24)` |
+ | `double` | `Float(53)` |
+ | `decimal(p, s)` | `Numeric(p, s)` |
+ | `char(n)` | `CHAR(n)` |
+ | `varchar(n)` | `VARCHAR(n)` |
+ | `string`, `text` | `Text` |
+ | `binary` | `LargeBinary` |
+ | `date` | `Date` |
+ | `timestamp`, `timestamp_ntz` | `DateTime` |
+ | `array<…>`, `map<…>`, `struct<…>` | `JSON` |
+
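+ During reflection, the dialect parses `DESCRIBE TABLE` type strings into the SQLAlchemy types above; anything unrecognized falls back to `Text`. A minimal sketch of inspecting reflected column types, reusing the engine from the example above and a hypothetical table `events`:
+
+ ```python
+ inspector = inspect(engine)
+
+ # Each reflected column carries the mapped SQLAlchemy type,
+ # e.g. Numeric(10, 2) for a decimal(10, 2) column.
+ for col in inspector.get_columns("events", schema=f"{CATALOG}.{SCHEMA}"):
+     print(col["name"], col["type"])
+ ```
+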
+ ---
+
+ ## Contributing
+
+ See [CONTRIBUTING.md](CONTRIBUTING.md) for an architecture overview, the codebase layout, and how to run tests.
@@ -0,0 +1,39 @@
+ [build-system]
+ requires = ["setuptools>=68", "wheel"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "iomete-sqlalchemy"
+ version = "1.0.20rc3"
+ description = "SQLAlchemy dialect for IOMETE via Arrow Flight SQL"
+ readme = "README.md"
+ license = { text = "Apache-2.0" }
+ requires-python = ">=3.9"
+ dependencies = [
+     "sqlalchemy>=2.0.30",
+     "adbc-driver-flightsql>=1.1.0",
+     "pyarrow>=16.0",
+ ]
+
+ [project.optional-dependencies]
+ dev = [
+     "pytest",
+     "pytest-cov",
+     "ruff",
+     "python-dotenv",
+ ]
+
+ [project.entry-points."sqlalchemy.dialects"]
+ "iomete.flightsql" = "iomete_sqlalchemy.dialect:IOMETEDialect"
+
+ [tool.setuptools.packages.find]
+ where = ["src"]
+
+ [tool.pytest.ini_options]
+ filterwarnings = [
+     "ignore:Cannot disable autocommit:Warning",
+ ]
+ addopts = ""
+
+ [tool.coverage.report]
+ fail_under = 80
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
@@ -0,0 +1,4 @@
+ from iomete_sqlalchemy.dialect import IOMETEDialect
+
+ __version__ = "1.0.20rc3"
+ __all__ = ["IOMETEDialect"]
@@ -0,0 +1,38 @@
+ """
+ Thin DBAPI-2 wrapper around adbc_driver_flightsql.dbapi.
+
+ SQLAlchemy's DefaultDialect calls ``dialect.import_dbapi()`` (a classmethod;
+ named ``dialect.dbapi()`` before SQLAlchemy 2.0) to get the DBAPI module,
+ then uses ``module.connect(**kwargs)`` to open connections. We simply
+ re-export ``adbc_driver_flightsql.dbapi`` so that all PEP-249 symbols
+ (connect, Error, Warning, …) are available at the module level.
+ """
+ from adbc_driver_flightsql.dbapi import (  # noqa: F401
+     BINARY,
+     DATETIME,
+     NUMBER,
+     ROWID,
+     STRING,
+     Connection,
+     Cursor,
+     DataError,
+     DatabaseError,
+     Date,
+     DateFromTicks,
+     Error,
+     IntegrityError,
+     InterfaceError,
+     InternalError,
+     NotSupportedError,
+     OperationalError,
+     ProgrammingError,
+     Time,
+     TimeFromTicks,
+     Timestamp,
+     TimestampFromTicks,
+     Warning,
+     connect,
+ )
+
+ apilevel = "2.0"
+ threadsafety = 1
+ paramstyle = "qmark"
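+
+ # Direct DBAPI-level usage sketch (illustrative values; SQLAlchemy normally
+ # drives this through IOMETEDialect.create_connect_args):
+ #   conn = connect("grpc+tls://dev.iomete.cloud:443",
+ #                  db_kwargs={"username": "alice", "password": "secret"})
+ #   cur = conn.cursor()
+ #   cur.execute("SELECT 1")
+ #   print(cur.fetchall())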
@@ -0,0 +1,439 @@
+ """
+ SQLAlchemy dialect for IOMETE via Arrow Flight SQL.
+
+ Connection URL format
+ ---------------------
+     iomete+flightsql://<user>:<password>@<host>:<port>/<catalog>/<database>
+         ?cluster=<cluster_name>
+         &data_plane=<data_plane_name>
+         [&tls=true]
+         [&max_msg_size=134217728]
+
+ IOMETE uses a three-level namespace: catalog → database → table.
+ SQLAlchemy maps the second level to its ``schema`` parameter throughout
+ the reflection API — so wherever SQLAlchemy says "schema", IOMETE means
+ "database" (e.g. ``spark_catalog.default``).
+
+ Example
+ -------
+     iomete+flightsql://alice:secret@dev.iomete.cloud:443/spark_catalog/default
+         ?cluster=dwh&data_plane=spark-resources
+ """
+
+ from __future__ import annotations
+
+ import logging
+ import re
+ from typing import List, Optional
+ from urllib.parse import unquote_plus
+
+ from sqlalchemy import pool, text
+ from sqlalchemy.engine import URL
+ from sqlalchemy.engine.default import DefaultDialect
+ from sqlalchemy.engine.interfaces import ReflectedColumn
+ from sqlalchemy.exc import DatabaseError
+ from sqlalchemy.sql.compiler import GenericTypeCompiler, IdentifierPreparer, SQLCompiler
+
+ from iomete_sqlalchemy import dbapi as iomete_dbapi
+ from iomete_sqlalchemy.types import spark_type_to_sqla
+
+ logger = logging.getLogger(__name__)
+
+ _NOT_FOUND_PATTERNS = (
+     "TABLE_OR_VIEW_NOT_FOUND",
+     "SCHEMA_NOT_FOUND",
+     "NoSuchNamespaceException",
+     "NoSuchDatabaseException",
+ )
+
+ def _is_not_found(exc: Exception) -> bool:
+     """Return True if the exception message indicates a missing object.
+
+     Used to distinguish expected "object does not exist" errors (which should
+     result in graceful empty/None returns) from unexpected failures (which
+     should be logged as warnings).
+     """
+     msg = str(exc)
+     return any(pattern in msg for pattern in _NOT_FOUND_PATTERNS)
+
+
+ class SparkIdentifierPreparer(IdentifierPreparer):
+     """Use backtick quoting for identifiers.
+
+     Spark SQL does not accept ANSI double-quote identifiers (``"name"``);
+     backticks (`` `name` ``) are the only supported quoting style.
+     """
+
+     def __init__(self, dialect):
+         super().__init__(dialect, initial_quote="`", final_quote="`")
+
+     def quote_schema(self, schema, force=None):
+         """Quote each dot-separated part of the schema individually.
+
+         SQLAlchemy encodes IOMETE's two-level schema as ``"catalog.database"``.
+         The base implementation would quote this as a single token
+         (`` `catalog.database` ``), which Spark rejects. Splitting on ``.``
+         and quoting each part produces `` `catalog`.`database` `` instead.
+         """
+         return ".".join(self.quote(part, force) for part in schema.split("."))
+
+
+ class SparkSQLCompiler(SQLCompiler):
+     """Compiler overrides for Spark SQL dialect quirks.
+
+     Spark requires integer literals for LIMIT/OFFSET (e.g. ``LIMIT 10``),
+     not bind parameters (``LIMIT ?``). The base SQLCompiler emits bind params,
+     so we override ``limit_clause`` to force ``literal_binds=True``.
+     """
+
+     def limit_clause(self, select, **kw):
+         limit = select._limit_clause
+         offset = select._offset_clause
+         clause = ""
+         if limit is not None:
+             clause += "\n LIMIT " + self.process(limit, literal_binds=True)
+         if offset is not None:
+             clause += "\n OFFSET " + self.process(offset, literal_binds=True)
+         return clause
+
+     def fetch_clause(self, select, **kw):
+         # Spark doesn't support FETCH FIRST n ROWS either
+         return self.limit_clause(select, **kw)
+
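+     # Illustration: with these overrides, select(t).limit(10).offset(5)
+     # compiles to "... LIMIT 10 OFFSET 5" with inlined integer literals
+     # rather than "... LIMIT ? OFFSET ?" bind parameters, which Spark rejects.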
+
+ class SparkTypeCompiler(GenericTypeCompiler):
+     """DDL type compiler that maps SQLAlchemy types to Spark SQL DDL equivalents.
+
+     Spark SQL quirks handled here:
+
+     - ``DATETIME`` is unsupported; ``TIMESTAMP`` is the correct keyword.
+     - ``VARCHAR(n)`` is valid in Spark 3.x, but bare ``VARCHAR`` without a length
+       is never accepted — Spark has no unbounded VARCHAR type. SQLAlchemy's
+       ``String()`` (no length) must therefore map to ``STRING``, which is Spark's
+       native unbounded string type.
+     """
+
+     def visit_DATETIME(self, type_, **kw):
+         return "TIMESTAMP"
+
+     def visit_datetime(self, type_, **kw):
+         return "TIMESTAMP"
+
+     def visit_VARCHAR(self, type_, **kw):
+         if type_.length:
+             return f"VARCHAR({type_.length})"
+         return "STRING"
+
+     def visit_NVARCHAR(self, type_, **kw):
+         if type_.length:
+             return f"VARCHAR({type_.length})"
+         return "STRING"
+
+     def visit_TEXT(self, type_, **kw):
+         return "STRING"
+
+
+ class IOMETEDialect(DefaultDialect):
+     # ── dialect identity ────────────────────────────────────────────────────
+     name = "iomete"
+     driver = "flightsql"
+     preparer = SparkIdentifierPreparer
+     statement_compiler = SparkSQLCompiler
+     type_compiler_cls = SparkTypeCompiler
+
+     # ── capabilities ────────────────────────────────────────────────────────
+     supports_alter = False
+     supports_empty_insert = False
+     supports_native_boolean = True
+     supports_multivalues_insert = True
+     supports_statement_cache = True  # must be explicit — SQLAlchemy checks __dict__, not inheritance
+
+     default_schema_name = "default"
+
+     def __init__(self, **kwargs):
+         super().__init__(**kwargs)
+         self._catalog: str | None = None
+
+     # ── pooling ─────────────────────────────────────────────────────────────
+     @classmethod
+     def import_dbapi(cls):
+         return iomete_dbapi
+
+     # backwards-compat alias for SQLAlchemy < 2.0
+     @classmethod
+     def dbapi(cls):
+         return cls.import_dbapi()
+
+     # Use NullPool by default: Flight SQL connections are stateful gRPC
+     # streams; connection pooling doesn't add value here.
+     @classmethod
+     def get_pool_class(cls, url):
+         return pool.NullPool
+
+     # ── URL → connect args ──────────────────────────────────────────────────
+     def create_connect_args(self, url: URL):
+         # url.database is "catalog/schema"; extract only the catalog part.
+         database = url.database or ""
+         catalog_part, _, _ = database.partition("/")
+         self._catalog = catalog_part or None
+         query_params = url.query
+
+         def get_query_param(key: str, default: str | None = None) -> str | None:
+             value = query_params.get(key, default)
+             if isinstance(value, (list, tuple)):
+                 *_, value = value
+             return unquote_plus(value) if value is not None else None
+
+         tls = get_query_param("tls", "true").lower() not in ("false", "0", "no")
+         scheme = "grpc+tls" if tls else "grpc"
+         host = url.host or "localhost"
+         port = url.port or 443
+         uri = f"{scheme}://{host}:{port}"
+
+         db_kwargs: dict[str, str] = {}
+         if url.username:
+             db_kwargs["username"] = unquote_plus(url.username)
+         if url.password:
+             db_kwargs["password"] = unquote_plus(str(url.password))
+
+         cluster = get_query_param("cluster")
+         if cluster:
+             db_kwargs["adbc.flight.sql.rpc.call_header.cluster"] = cluster
+
+         data_plane = get_query_param("data_plane")
+         if data_plane:
+             db_kwargs["adbc.flight.sql.rpc.call_header.data-plane"] = data_plane
+
+         max_msg_size = get_query_param("max_msg_size", "134217728")
+         db_kwargs["adbc.flight.sql.client_option.with_max_msg_size"] = max_msg_size
+
+         return [], {"uri": uri, "db_kwargs": db_kwargs}
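+
+     # Worked example (illustrative values): the URL
+     #   iomete+flightsql://u:p@dev.iomete.cloud:443/spark_catalog/default
+     #       ?cluster=dwh&data_plane=spark-resources
+     # yields connect args
+     #   ([], {"uri": "grpc+tls://dev.iomete.cloud:443",
+     #         "db_kwargs": {"username": "u", "password": "p",
+     #                       "adbc.flight.sql.rpc.call_header.cluster": "dwh",
+     #                       "adbc.flight.sql.rpc.call_header.data-plane": "spark-resources",
+     #                       "adbc.flight.sql.client_option.with_max_msg_size": "134217728"}})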
+
+     # ── schema helpers ──────────────────────────────────────────────────────
+     @staticmethod
+     def _catalog_schema(schema: str | None) -> tuple[str | None, str | None]:
+         """
+         IOMETE uses a three-level namespace: catalog.database.table.
+         SQLAlchemy's ``schema`` parameter encodes both levels as
+         ``"catalog.database"`` (dot-separated), e.g. ``"spark_catalog.default"``.
+         If only one token is given it is treated as the database name within
+         the default catalog.
+         """
+         if not schema:
+             return None, None
+         catalog, sep, db = schema.partition(".")
+         if sep:
+             return catalog, db
+         return None, catalog
+
+     def _schema_qualifier(self, schema: str | None) -> str:
+         """Return a backtick-quoted ``catalog.database`` qualifier, or empty string."""
+         catalog, db = self._catalog_schema(schema)
+         if not db:
+             return ""
+         return f"`{catalog}`.`{db}`" if catalog else f"`{db}`"
+
+     def _full_name(self, schema: str | None, object_name: str) -> str:
+         """Build a fully-qualified backtick-quoted name from schema + object."""
+         catalog, db = self._catalog_schema(schema)
+         parts = []
+         if catalog:
+             parts.append(f"`{catalog}`")
+         if db:
+             parts.append(f"`{db}`")
+         parts.append(f"`{object_name}`")
+         return ".".join(parts)
+
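+     # Examples (illustrative):
+     #   _catalog_schema("spark_catalog.default") -> ("spark_catalog", "default")
+     #   _catalog_schema("default")               -> (None, "default")
+     #   _full_name("spark_catalog.default", "t") -> "`spark_catalog`.`default`.`t`"
+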
+     # ── dialect initialisation ──────────────────────────────────────────────
+     def on_connect(self):
+         def connect(dbapi_connection):
+             catalog = self._catalog
+             if catalog:
+                 cursor = dbapi_connection.cursor()
+                 cursor.execute(f"USE {catalog}")
+                 cursor.close()
+
+         return connect
+
+     def initialize(self, connection):
+         pass  # nothing to probe at startup
+
+     def do_executemany(self, cursor, statement, parameters, context=None):
+         """Execute a DML statement one row at a time.
+
+         IOMETE's Arrow Flight SQL server rejects multi-row batches, so each row is
+         sent individually via ``cursor.executemany(statement, [row])``.
+         https://github.com/iomete/spark/blob/18a173f3b11cdbf21cb20b2130572a57dbdb1941/iomete/arrow-server/src/main/java/com/iomete/spark/arrow/flight/sql/handlers/PreparedStatementHandler.java#L296
+         TODO: remove this workaround once the server supports batch parameter binding.
+         """
+         for params in parameters:
+             cursor.executemany(statement, [params])
+
+     def do_execute(self, cursor, statement, parameters, context=None):
+         is_write = context is not None and (context.isddl or context.is_crud)
+         if is_write and parameters:
+             # Parameterized DML — DoPut write path, synchronous.
+             cursor.executemany(statement, [parameters])
+         elif is_write:
+             # TODO: investigate why parameterless DDL/DML statements fail on the
+             # dev cluster. They work against a locally running Flight SQL server
+             # (Spark 3.5.5) but fail on the dev cluster, possibly due to a
+             # deployment difference we have not yet identified. Draining the
+             # cursor is a workaround for now.
+             cursor.execute(statement, parameters)
+             try:
+                 cursor.fetchall()
+             except Exception as e:
+                 logger.debug("Ignoring error while draining cursor: %s", e)
+         else:
+             cursor.execute(statement, parameters)
+
+     def do_rollback(self, dbapi_connection):
+         pass
+
+     # ── server version ──────────────────────────────────────────────────────
+     def _get_server_version_info(self, connection):
+         try:
+             row = connection.execute(text("SELECT version()")).fetchone()
+             if row:
+                 version_match = re.search(r"(\d+)\.(\d+)\.(\d+)", row[0])
+                 if version_match:
+                     return tuple(int(x) for x in version_match.groups())
+         except DatabaseError as e:
+             logger.debug("Could not retrieve server version: %s", e)
+         return 0, 0, 0
+
+     # ── schema / table listing ──────────────────────────────────────────────
+     def get_schema_names(self, connection, **kw) -> List[str]:
+         """Return all schemas visible in the current catalog."""
+         rows = connection.execute(text("SHOW SCHEMAS")).fetchall()
+         return [row[0] for row in rows]
+
+     def get_table_names(self, connection, schema: str | None = None, **kw) -> List[str]:
+         qualifier = self._schema_qualifier(schema)
+         sql = f"SHOW TABLES{' IN ' + qualifier if qualifier else ''}"
+         rows = connection.execute(text(sql)).fetchall()
+         # IOMETE's SHOW TABLES returns rows of (namespace, tableName, isTemporary)
+         return [row[1] if len(row) > 1 else row[0] for row in rows]
+
+     def get_view_definition(
+         self,
+         connection,
+         view_name: str,
+         schema: str | None = None,
+         **kw,
+     ) -> Optional[str]:
+         full_name = self._full_name(schema, view_name)
+         try:
+             rows = connection.execute(text(f"SHOW CREATE TABLE {full_name}")).fetchall()
+             if rows:
+                 return "\n".join(row[0] for row in rows)
+             return None
+         except DatabaseError as e:
+             if _is_not_found(e):
+                 return None
+             logger.warning(
+                 "get_view_definition(%r) failed unexpectedly: %s", view_name, e
+             )
+             return None
+
+     def get_view_names(self, connection, schema: str | None = None, **kw) -> List[str]:
+         qualifier = self._schema_qualifier(schema)
+         sql = f"SHOW VIEWS{' IN ' + qualifier if qualifier else ''}"
+         try:
+             rows = connection.execute(text(sql)).fetchall()
+             return [row[1] if len(row) > 1 else row[0] for row in rows]
+         except DatabaseError as e:
+             logger.warning("get_view_names(schema=%r) failed: %s", schema, e)
+             return []
+
+     # ── column reflection ───────────────────────────────────────────────────
+     def get_columns(
+         self,
+         connection,
+         table_name: str,
+         schema: str | None = None,
+         **kw,
+     ) -> List[ReflectedColumn]:
+         full_name = self._full_name(schema, table_name)
+         rows = connection.execute(text(f"DESCRIBE TABLE {full_name}")).fetchall()
+         columns: List[ReflectedColumn] = []
+         for row in rows:
+             col_name = row[0]
+             type_str = row[1] if len(row) > 1 else "string"
+             comment = row[2] if len(row) > 2 else None
+
+             # DESCRIBE sometimes includes partition/metadata separators
+             if not col_name or col_name.startswith("#"):
+                 break
+
+             col: ReflectedColumn = {
+                 "name": col_name,
+                 "type": spark_type_to_sqla(type_str),
+                 "nullable": True,  # DESCRIBE TABLE doesn't expose nullability; default to True
+                 "default": None,
+                 "autoincrement": False,
+             }
+             if comment:
+                 col["comment"] = comment
+             columns.append(col)
+         return columns
+
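+     # For reference, DESCRIBE TABLE rows typically look like (illustrative):
+     #   ("id", "bigint", None)
+     #   ("tags", "array<string>", "user tags")
+     #   ("# Partition Information", "", "")   <- parsing stops at the first "#" row
+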
+     # ── graceful stubs (Lakehouse has no PK / FK / index concepts) ──────────
+     def get_pk_constraint(self, connection, table_name, schema=None, **kw):
+         return {"constrained_columns": [], "name": None}
+
+     def get_foreign_keys(self, connection, table_name, schema=None, **kw):
+         return []
+
+     def get_indexes(self, connection, table_name, schema=None, **kw):
+         return []
+
+     def get_unique_constraints(self, connection, table_name, schema=None, **kw):
+         return []
+
+     def get_check_constraints(self, connection, table_name, schema=None, **kw):
+         return []
+
+     def get_table_comment(self, connection, table_name, schema=None, **kw):
+         full_name = self._full_name(schema, table_name)
+         try:
+             rows = connection.execute(
+                 text(f"DESCRIBE TABLE EXTENDED {full_name}")
+             ).fetchall()
+             for row in rows:
+                 if row[0].strip().lower() == "comment":
+                     comment = row[1].strip() if row[1] else None
+                     return {"text": comment or None}
+         except DatabaseError as e:
+             if not _is_not_found(e):
+                 logger.warning(
+                     "get_table_comment(%r) failed unexpectedly: %s", table_name, e
+                 )
+         return {"text": None}
+
+     # ── existence checks ────────────────────────────────────────────────────
+     def has_table(
+         self, connection, table_name: str, schema: str | None = None, **kw
+     ) -> bool:
+         try:
+             return table_name in self.get_table_names(connection, schema=schema)
+         except DatabaseError as e:
+             logger.warning("has_table(%r) check failed: %s", table_name, e)
+             return False
+
+     def has_schema(self, connection, schema_name: str, **kw) -> bool:
+         try:
+             return schema_name in self.get_schema_names(connection)
+         except DatabaseError as e:
+             logger.warning("has_schema(%r) check failed: %s", schema_name, e)
+             return False
+
+     # ── default schema detection ─────────────────────────────────────────────
+     def _get_default_schema_name(self, connection):
+         try:
+             row = connection.execute(text("SELECT current_database()")).fetchone()
+             if row:
+                 return row[0]
+         except DatabaseError as e:
+             logger.debug("Could not retrieve default schema name: %s", e)
+         return self.default_schema_name
@@ -0,0 +1,117 @@
+ """
+ Mapping from Spark/Iceberg type strings to SQLAlchemy types.
+ """
+
+ import re
+
+ from sqlalchemy import types as sqltypes
+
+ # Spark/Iceberg types that map directly to a SQLAlchemy type with no arguments.
+ # Spark often has multiple aliases for the same type (e.g. "int" / "integer",
+ # "bigint" / "long"), so all aliases are listed here.
+ _KEYWORD_TYPE_MAP: dict[str, sqltypes.TypeEngine] = {
+     # booleans
+     "boolean": sqltypes.Boolean(),
+     "bool": sqltypes.Boolean(),
+     # integers
+     "tinyint": sqltypes.SmallInteger(),
+     "byte": sqltypes.SmallInteger(),
+     "smallint": sqltypes.SmallInteger(),
+     "short": sqltypes.SmallInteger(),
+     "int": sqltypes.Integer(),
+     "integer": sqltypes.Integer(),
+     "bigint": sqltypes.BigInteger(),
+     "long": sqltypes.BigInteger(),
+     # floating-point
+     "float": sqltypes.Float(precision=24),
+     "real": sqltypes.Float(precision=24),
+     "double": sqltypes.Float(precision=53),
+     "double precision": sqltypes.Float(precision=53),
+     # strings / binary
+     "string": sqltypes.Text(),
+     "text": sqltypes.Text(),
+     "binary": sqltypes.LargeBinary(),
+     "bytes": sqltypes.LargeBinary(),
+     # date / time
+     "date": sqltypes.Date(),
+     "timestamp": sqltypes.DateTime(),
+     "timestamp_ntz": sqltypes.DateTime(),
+     "interval": sqltypes.Interval(),
+     # misc
+     "void": sqltypes.NullType(),
+     "null": sqltypes.NullType(),
+ }
+
+ # Matches decimal(precision, scale) e.g. "decimal(10, 2)"
+ _DECIMAL_WITH_SCALE = re.compile(r"decimal\s*\(\s*(\d+)\s*,\s*(\d+)\s*\)")
+
+ # Matches decimal(precision) only e.g. "decimal(10)"
+ _DECIMAL_PRECISION_ONLY = re.compile(r"decimal\s*\(\s*(\d+)\s*\)")
+
+ # Matches char(length) e.g. "char(64)"
+ _CHAR_WITH_LENGTH = re.compile(r"char\s*\(\s*(\d+)\s*\)")
+
+ # Matches varchar(length) e.g. "varchar(255)"
+ _VARCHAR_WITH_LENGTH = re.compile(r"varchar\s*\(\s*(\d+)\s*\)")
+
+ # Spark complex type prefixes — no SQLAlchemy equivalent, mapped to JSON
+ _COMPLEX_TYPE_PREFIXES = ("array<", "map<", "struct<")
+
+ def spark_type_to_sqla(type_str: str) -> sqltypes.TypeEngine:
+     """
+     Convert a Spark/Iceberg type string (Spark → SQLAlchemy direction).
+
+     Called during reflection: ``DESCRIBE TABLE`` returns a type string such as
+     ``"decimal(10, 2)"`` or ``"array<string>"``; this function maps it to the
+     corresponding SQLAlchemy ``TypeEngine`` instance.
+
+     Handles parameterised forms such as:
+     - decimal(10, 2)
+     - char(64)
+     - varchar(255)
+     - array<string>
+     - map<string, int>
+     - struct<field:type, …>
+     """
+     if not type_str:
+         return sqltypes.NullType()
+
+     normalized = type_str.strip().lower()
+
+     # ── decimal(precision, scale) ──────────────────────────────────────────
+     match = _DECIMAL_WITH_SCALE.fullmatch(normalized)
+     if match:
+         precision, scale = int(match.group(1)), int(match.group(2))
+         return sqltypes.Numeric(precision=precision, scale=scale)
+
+     match = _DECIMAL_PRECISION_ONLY.fullmatch(normalized)
+     if match:
+         return sqltypes.Numeric(precision=int(match.group(1)))
+
+     if normalized in ("decimal", "numeric"):
+         return sqltypes.Numeric()
+
+     # ── char(length) / varchar(length) ────────────────────────────────────
+     match = _CHAR_WITH_LENGTH.fullmatch(normalized)
+     if match:
+         return sqltypes.CHAR(length=int(match.group(1)))
+
+     match = _VARCHAR_WITH_LENGTH.fullmatch(normalized)
+     if match:
+         return sqltypes.VARCHAR(length=int(match.group(1)))
+
+     if normalized in ("char", "varchar", "character varying"):
+         return sqltypes.Text()
+
+     # ── complex types: array<…>, map<…>, struct<…> ────────────────────────
+     # SQLAlchemy has no native equivalent; JSON is the closest approximation
+     # for schema reflection and metadata tooling.
+     if normalized.startswith(_COMPLEX_TYPE_PREFIXES):
+         return sqltypes.JSON()
+
+     # ── plain keyword lookup ───────────────────────────────────────────────
+     if normalized in _KEYWORD_TYPE_MAP:
+         return _KEYWORD_TYPE_MAP[normalized]
+
+     # ── unknown type → Text so reflection never crashes ───────────────────
+     return sqltypes.Text()
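+
+ # Usage sketch (illustrative):
+ #   spark_type_to_sqla("decimal(10, 2)")  -> Numeric(precision=10, scale=2)
+ #   spark_type_to_sqla("array<string>")   -> JSON()
+ #   spark_type_to_sqla("something_new")   -> Text()  (unknown types fall back)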
@@ -0,0 +1,13 @@
+ LICENSE
+ README.md
+ pyproject.toml
+ src/iomete_sqlalchemy/__init__.py
+ src/iomete_sqlalchemy/dbapi.py
+ src/iomete_sqlalchemy/dialect.py
+ src/iomete_sqlalchemy/types.py
+ src/iomete_sqlalchemy.egg-info/PKG-INFO
+ src/iomete_sqlalchemy.egg-info/SOURCES.txt
+ src/iomete_sqlalchemy.egg-info/dependency_links.txt
+ src/iomete_sqlalchemy.egg-info/entry_points.txt
+ src/iomete_sqlalchemy.egg-info/requires.txt
+ src/iomete_sqlalchemy.egg-info/top_level.txt
@@ -0,0 +1,2 @@
+ [sqlalchemy.dialects]
+ iomete.flightsql = iomete_sqlalchemy.dialect:IOMETEDialect
@@ -0,0 +1,9 @@
+ sqlalchemy>=2.0.30
+ adbc-driver-flightsql>=1.1.0
+ pyarrow>=16.0
+
+ [dev]
+ pytest
+ pytest-cov
+ ruff
+ python-dotenv