AxiomQuery 0.1.0__tar.gz → 0.3.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (29)
  1. axiomquery-0.3.0/CHANGELOG.md +63 -0
  2. {axiomquery-0.1.0 → axiomquery-0.3.0}/PKG-INFO +22 -1
  3. {axiomquery-0.1.0 → axiomquery-0.3.0}/README.md +21 -0
  4. {axiomquery-0.1.0 → axiomquery-0.3.0}/examples/example_async.py +80 -14
  5. {axiomquery-0.1.0 → axiomquery-0.3.0}/examples/example_sync.py +81 -13
  6. {axiomquery-0.1.0 → axiomquery-0.3.0}/pyproject.toml +1 -1
  7. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/__init__.py +1 -1
  8. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/compiler.py +30 -13
  9. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/engine.py +76 -29
  10. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/schema.py +29 -3
  11. {axiomquery-0.1.0 → axiomquery-0.3.0}/tests/conftest.py +24 -3
  12. axiomquery-0.3.0/tests/test_asearch.py +82 -0
  13. {axiomquery-0.1.0 → axiomquery-0.3.0}/tests/test_list.py +39 -1
  14. axiomquery-0.3.0/tests/test_search.py +97 -0
  15. axiomquery-0.1.0/CHANGELOG.md +0 -29
  16. {axiomquery-0.1.0 → axiomquery-0.3.0}/.github/workflows/python-publish.yml +0 -0
  17. {axiomquery-0.1.0 → axiomquery-0.3.0}/.gitignore +0 -0
  18. {axiomquery-0.1.0 → axiomquery-0.3.0}/CONTRIBUTING.md +0 -0
  19. {axiomquery-0.1.0 → axiomquery-0.3.0}/LICENSE +0 -0
  20. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/aggregation.py +0 -0
  21. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/aggregation_parser.py +0 -0
  22. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/ast.py +0 -0
  23. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/compiler_aggregate.py +0 -0
  24. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/errors.py +0 -0
  25. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/operators.py +0 -0
  26. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/parser.py +0 -0
  27. {axiomquery-0.1.0 → axiomquery-0.3.0}/src/axiom_query/py.typed +0 -0
  28. {axiomquery-0.1.0 → axiomquery-0.3.0}/tests/test_async.py +0 -0
  29. {axiomquery-0.1.0 → axiomquery-0.3.0}/tests/test_read_group.py +0 -0
@@ -0,0 +1,63 @@
+ # Changelog
+
+ All notable changes to `AxiomQuery` are documented here.
+
+ Format follows [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
+ Versioning follows [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+ ---
+
+ ## [0.3.0] — 2026-04-26
+
+ ### Added
+
+ - `search()` / `asearch()` — streaming iteration over large result sets via SQLAlchemy server-side cursors with `yield_per=1000`
+   - Sync `search()` returns a Python iterator (consume with `for`); async `asearch()` returns an `AsyncScalarResult` (consume with `async for`)
+   - Single-pass; no `limit` / `offset` (use `list()` / `alist()` for paginated/materialised access)
+ - `DEFAULT_PREFETCH = 1000` module-level constant in `engine.py`
+ - Internal `_build_stmt()` helper consolidating the `select + where + order_by + limit + offset` block shared by `list` / `alist` / `search` / `asearch` / `read_group`
+
+ ### Changed
+
+ - `list()` / `alist()` continue to return a materialised `list` — behaviour and signature unchanged from 0.2.0 callers' perspective
+
+ ---
+
+ ## [0.2.0] — 2026-04-13
+
+ ### Added
+
+ - M2O (Many-to-One) relational field filtering via dot-notation (e.g. `["customer.name", "ilike", "%Alice%"]`)
+   - Generates an `EXISTS` subquery joining the referenced table on the local FK column
+   - Composes freely with scalar filters and O2M child filters in the same domain
+ - `RelatedSchema` dataclass in `schema.py` to capture M2O relationship metadata (referenced table, local FK column, columns)
+ - `related` field on `ModelSchema` mapping relationship attribute names to their `RelatedSchema`
+
+ ---
+
+ ## [0.1.1] — 2026-04-05
+
+ Acknowledgement and inspiration added.
+
+ ### Added
+
+ - Updated `README.md` with acknowledgement and inspiration details
+
+ ## [0.1.0] — 2026-03-29
+
+ Initial release.
+
+ ### Added
+
+ - `QueryEngine` — specification-based query facade for any SQLAlchemy 2.0 ORM model
+ - `list()` / `alist()` — filtered record retrieval with `limit`, `offset`, `order_by`
+ - `read_group()` / `aread_group()` — grouped aggregation (GROUP BY + HAVING) with `__domain` drill-down per group
+ - Domain filter DSL — composable JSON expressions: `[field, op, value]`, `{"and": [...]}`, `{"or": [...]}`, `{"not": ...}`
+ - Full operator set: `=` `!=` `>` `<` `>=` `<=` `in` `not in` `like` `ilike` `is_null`
+ - Child field filtering via EXISTS subquery (dot notation: `"lines.quantity"`)
+ - Child field aggregation via LEFT JOIN (`"lines.quantity:sum"`)
+ - Date granularity grouping: `day` `week` `month` `quarter` `year`
+ - Aggregate functions: `count` `sum` `avg` `min` `max`
+ - HAVING filter on aggregate aliases
+ - Schema auto-derived from `inspect(model_class)` — O2M relationships are children by convention
+ - Sync (`Session`) and async (`AsyncSession`) APIs
+ - `QueryError` with `code` and `message` — field validation at compile time, before DB
+ - `py.typed` marker — PEP 561 inline type annotations
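The domain DSL entries above can be read against a plain-Python model of the same expressions. The sketch below is not AxiomQuery code (AxiomQuery compiles domains to SQL, not to in-memory predicates); `matches`, `matches_domain`, and the sample `orders` rows are invented for illustration, and only part of the operator table is wired up:

```python
# Hypothetical in-memory reading of the domain DSL semantics listed above.
# [field, op, value] triples combine through {"and": ...}, {"or": ...} and
# {"not": ...} nodes; a top-level domain list is implicitly AND-ed.
OPS = {
    "=": lambda a, b: a == b,
    "!=": lambda a, b: a != b,
    ">": lambda a, b: a > b,
    "<": lambda a, b: a < b,
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
    "in": lambda a, b: a in b,
    "not in": lambda a, b: a not in b,
    "is_null": lambda a, b: (a is None) == b,
}

def matches(record: dict, cond) -> bool:
    if isinstance(cond, dict):
        if "and" in cond:
            return all(matches(record, c) for c in cond["and"])
        if "or" in cond:
            return any(matches(record, c) for c in cond["or"])
        if "not" in cond:
            return not matches(record, cond["not"])
        raise ValueError(f"unknown connective: {cond}")
    field, op, value = cond
    return OPS[op](record[field], value)

def matches_domain(record: dict, domain: list) -> bool:
    # A top-level domain is a list of conditions, implicitly AND-ed.
    return all(matches(record, c) for c in domain)

orders = [
    {"id": 1, "status": "CONFIRMED", "total": 100},
    {"id": 2, "status": "CONFIRMED", "total": 200},
    {"id": 3, "status": "DRAFT", "total": 50},
]
domain = [{"or": [["status", "=", "DRAFT"], ["total", ">", 150]]}]
hits = [o["id"] for o in orders if matches_domain(o, domain)]  # → [2, 3]
```

In the library itself the same shapes are parsed by `parse_domain` and compiled to SQL expressions; the sketch only shows why the nesting composes freely.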
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: AxiomQuery
- Version: 0.1.0
+ Version: 0.3.0
  Summary: Specification-based query and aggregation engine for SQLAlchemy 2.0 ORM models
  Project-URL: Source Code, https://github.com/Axiom-Dev-Labs/AxiomQuery
  Project-URL: Bug Tracker, https://github.com/Axiom-Dev-Labs/AxiomQuery/issues
@@ -230,3 +230,24 @@ python examples/example_async.py
  ```
 
  Both cover: simple filters, AND / OR / NOT, combined nesting, child EXISTS filtering, pagination, `read_group` with domain / date granularity / child aggregation / HAVING, and `__domain` drill-down.
+
+ ***
+
+ ## 🙌 Acknowledgements & Inspirations
+
+ The creation of AxiomQuery was sparked by a desire to cleanly bridge pure domain logic with robust data access. The conceptual trigger for this library came from **Martin Fowler and Eric Evans' Specification Pattern**, a blueprint for encapsulating business rules. It was the foundation of **SQLAlchemy 2.0** that provided the mechanical reality, making it possible to translate those decoupled domain specifications into optimized SQL.
+
+ A huge thank you to the maintainers and contributors of SQLAlchemy. AxiomQuery is built explicitly as a specification-based query and aggregation engine for SQLAlchemy 2.0 ORM models, and it relies on several of its most powerful features:
+
+ * 🔍 **Introspection (`inspect()`):** AxiomQuery automatically derives all necessary schema data, including `mapper.columns`, one-to-many relationships (`RelationshipDirection.ONETOMANY`), and foreign key synchronization pairs, directly from SQLAlchemy's introspection tools. This lets the engine extract everything the compiler needs without forcing the developer to write duplicate descriptor code.
+ * 🏗️ **Expression Language:** The underlying AST compiler relies on SQLAlchemy's composable query constructs. Mapping the 11 supported operators to native methods makes it straightforward to safely compile complex SQL `WHERE` clauses, including `EXISTS` subqueries for parent-child filtering and `LEFT JOIN` aggregations with database-specific date truncations.
+ * 🔌 **Decoupled Session Management:** Because SQLAlchemy cleanly separates ORM models from the active database connection, AxiomQuery can operate as a thin, reusable facade. The library expects a caller-owned session (standard or `AsyncSession`), letting developers manage transactions across multiple engines without friction.
+
+ Thank you for providing the introspection and query-building tools that make translating dynamic JSON expressions into complex SQL queries a reality! ✨
+
+ ### 📚 References
+
+ * **The Specification Pattern:** [Specifications by Martin Fowler & Eric Evans (PDF)](https://martinfowler.com/apsupp/spec.pdf) - The foundational paper that inspired the core domain-driven architecture of this library.
+ * **SQLAlchemy 2.0:** [Official Documentation](https://docs.sqlalchemy.org/en/20/) - The ORM and toolkit that powers the AxiomQuery engine.
@@ -187,3 +187,24 @@ python examples/example_async.py
  ```
 
  Both cover: simple filters, AND / OR / NOT, combined nesting, child EXISTS filtering, pagination, `read_group` with domain / date granularity / child aggregation / HAVING, and `__domain` drill-down.
+
+ ***
+
+ ## 🙌 Acknowledgements & Inspirations
+
+ The creation of AxiomQuery was sparked by a desire to cleanly bridge pure domain logic with robust data access. The conceptual trigger for this library came from **Martin Fowler and Eric Evans' Specification Pattern**, a blueprint for encapsulating business rules. It was the foundation of **SQLAlchemy 2.0** that provided the mechanical reality, making it possible to translate those decoupled domain specifications into optimized SQL.
+
+ A huge thank you to the maintainers and contributors of SQLAlchemy. AxiomQuery is built explicitly as a specification-based query and aggregation engine for SQLAlchemy 2.0 ORM models, and it relies on several of its most powerful features:
+
+ * 🔍 **Introspection (`inspect()`):** AxiomQuery automatically derives all necessary schema data, including `mapper.columns`, one-to-many relationships (`RelationshipDirection.ONETOMANY`), and foreign key synchronization pairs, directly from SQLAlchemy's introspection tools. This lets the engine extract everything the compiler needs without forcing the developer to write duplicate descriptor code.
+ * 🏗️ **Expression Language:** The underlying AST compiler relies on SQLAlchemy's composable query constructs. Mapping the 11 supported operators to native methods makes it straightforward to safely compile complex SQL `WHERE` clauses, including `EXISTS` subqueries for parent-child filtering and `LEFT JOIN` aggregations with database-specific date truncations.
+ * 🔌 **Decoupled Session Management:** Because SQLAlchemy cleanly separates ORM models from the active database connection, AxiomQuery can operate as a thin, reusable facade. The library expects a caller-owned session (standard or `AsyncSession`), letting developers manage transactions across multiple engines without friction.
+
+ Thank you for providing the introspection and query-building tools that make translating dynamic JSON expressions into complex SQL queries a reality! ✨
+
+ ### 📚 References
+
+ * **The Specification Pattern:** [Specifications by Martin Fowler & Eric Evans (PDF)](https://martinfowler.com/apsupp/spec.pdf) - The foundational paper that inspired the core domain-driven architecture of this library.
+ * **SQLAlchemy 2.0:** [Official Documentation](https://docs.sqlalchemy.org/en/20/) - The ORM and toolkit that powers the AxiomQuery engine.
@@ -5,7 +5,8 @@ methods. Demonstrates the same domain styles:
  - Simple equality / comparison
  - AND, OR, NOT
  - Combined nested conditions
- - Child-field EXISTS filtering
+ - Child-field EXISTS filtering (O2M)
+ - Many-to-One field filtering (M2O EXISTS subquery)
  - alist() options: limit, offset, order_by
  - aread_group() with domain, date granularity, child aggregate, HAVING
  - __domain drill-down with alist()
@@ -18,7 +19,7 @@ from __future__ import annotations
 
  import asyncio
  from datetime import datetime
- from typing import List
+ from typing import List, Optional
 
  from sqlalchemy import DateTime, ForeignKey, String
  from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
@@ -34,12 +35,25 @@ class Base(DeclarativeBase):
      pass
 
 
+ class Customer(Base):
+     __tablename__ = "customers"
+     id: Mapped[int] = mapped_column(primary_key=True)
+     name: Mapped[str] = mapped_column(String(100))
+
+     def __repr__(self) -> str:
+         return f"Customer(id={self.id}, name={self.name!r})"
+
+
  class Order(Base):
      __tablename__ = "orders"
      id: Mapped[int] = mapped_column(primary_key=True)
      status: Mapped[str] = mapped_column(String(20))
      total: Mapped[int] = mapped_column(default=0)
      created_at: Mapped[datetime] = mapped_column(DateTime)
+     customer_id: Mapped[Optional[int]] = mapped_column(
+         ForeignKey("customers.id"), nullable=True
+     )
+     customer: Mapped[Optional["Customer"]] = relationship()
      lines: Mapped[List["OrderLine"]] = relationship(back_populates="order")
 
      def __repr__(self) -> str:
@@ -62,14 +76,41 @@ class OrderLine(Base):
  async def seed(session: AsyncSession) -> None:
      session.add_all(
          [
+             Customer(id=1, name="Joy"),
+             Customer(id=2, name="Bob"),
+         ]
+     )
+     await session.flush()
+     session.add_all(
+         [
+             Order(
+                 id=1,
+                 status="CONFIRMED",
+                 total=100,
+                 created_at=datetime(2026, 1, 15),
+                 customer_id=1,
+             ),
+             Order(
+                 id=2,
+                 status="CONFIRMED",
+                 total=200,
+                 created_at=datetime(2026, 2, 20),
+                 customer_id=2,
+             ),
              Order(
-                 id=1, status="CONFIRMED", total=100, created_at=datetime(2026, 1, 15)
+                 id=3,
+                 status="DRAFT",
+                 total=50,
+                 created_at=datetime(2026, 1, 25),
+                 customer_id=1,
              ),
              Order(
-                 id=2, status="CONFIRMED", total=200, created_at=datetime(2026, 2, 20)
+                 id=4,
+                 status="CANCELLED",
+                 total=75,
+                 created_at=datetime(2026, 3, 10),
+                 customer_id=None,
              ),
-             Order(id=3, status="DRAFT", total=50, created_at=datetime(2026, 1, 25)),
-             Order(id=4, status="CANCELLED", total=75, created_at=datetime(2026, 3, 10)),
          ]
      )
      await session.flush()
@@ -302,9 +343,34 @@ async def main() -> None:
          ),
      )
 
-     # ── Section 8: alist() options ────────────────────────────────────
+     # ── Section 8: M2O field — EXISTS on referenced table ────────────
+
+     section("8. M2O field filtering (EXISTS on referenced table)")
+
+     show(
+         "orders where customer.name = 'Joy'",
+         await engine.alist(session, domain=[["customer.name", "=", "Joy"]]),
+     )
+
+     show(
+         "orders where customer.name ilike '%ob%'",
+         await engine.alist(session, domain=[["customer.name", "ilike", "%ob%"]]),
+     )
+
+     show(
+         "CONFIRMED orders for Joy (M2O + scalar)",
+         await engine.alist(
+             session,
+             domain=[
+                 ["customer.name", "=", "Joy"],
+                 ["status", "=", "CONFIRMED"],
+             ],
+         ),
+     )
+
+     # ── Section 9: alist() options ────────────────────────────────────
 
-     section("8. alist() — limit, offset, order_by")
+     section("9. alist() — limit, offset, order_by")
 
      show(
          "top 2 orders by total desc",
@@ -327,7 +393,7 @@ async def main() -> None:
 
      # ── Section 9: aread_group — basic ────────────────────────────────
 
-     section("9. aread_group — basic groupby + count")
+     section("10. aread_group — basic groupby + count")
 
      groups, total = await engine.aread_group(
          session,
@@ -339,7 +405,7 @@ async def main() -> None:
 
      # ── Section 10: aread_group with domain ───────────────────────────
 
-     section("10. aread_group with domain filter")
+     section("11. aread_group with domain filter")
 
      groups, total = await engine.aread_group(
          session,
@@ -351,7 +417,7 @@ async def main() -> None:
 
      # ── Section 11: aread_group — date granularity ────────────────────
 
-     section("11. aread_group — date granularity (month)")
+     section("12. aread_group — date granularity (month)")
 
      groups, total = await engine.aread_group(
          session,
@@ -363,7 +429,7 @@ async def main() -> None:
 
      # ── Section 12: aread_group — child aggregate (LEFT JOIN) ─────────
 
-     section("12. aread_group — child aggregate (LEFT JOIN)")
+     section("13. aread_group — child aggregate (LEFT JOIN)")
 
      groups, total = await engine.aread_group(
          session,
@@ -374,7 +440,7 @@ async def main() -> None:
 
      # ── Section 13: aread_group — HAVING ──────────────────────────────
 
-     section("13. aread_group — HAVING filter on aggregate")
+     section("14. aread_group — HAVING filter on aggregate")
 
      groups, total = await engine.aread_group(
          session,
@@ -386,7 +452,7 @@ async def main() -> None:
 
      # ── Section 14: __domain drill-down with alist() ──────────────────
 
-     section("14. __domain drill-down — group → records via alist()")
+     section("15. __domain drill-down — group → records via alist()")
 
      groups, _ = await engine.aread_group(
          session,
@@ -6,7 +6,8 @@ Demonstrates all domain condition styles:
  - OR — any condition must hold
  - NOT — negation
  - Combined — nested AND / OR / NOT
- - Child-field EXISTS filtering
+ - Child-field EXISTS filtering (O2M)
+ - Many-to-One field filtering (M2O EXISTS subquery)
  - list() options: limit, offset, order_by
  - read_group() with domain, date granularity, child aggregate, HAVING
  - __domain drill-down: group result → list of matching records
@@ -20,6 +21,8 @@ from __future__ import annotations
 
  from datetime import datetime
  from typing import List
 
+ from typing import Optional
+
  from sqlalchemy import DateTime, ForeignKey, String, create_engine
  from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column, relationship
@@ -33,12 +36,25 @@ class Base(DeclarativeBase):
      pass
 
 
+ class Customer(Base):
+     __tablename__ = "customers"
+     id: Mapped[int] = mapped_column(primary_key=True)
+     name: Mapped[str] = mapped_column(String(100))
+
+     def __repr__(self) -> str:
+         return f"Customer(id={self.id}, name={self.name!r})"
+
+
  class Order(Base):
      __tablename__ = "orders"
      id: Mapped[int] = mapped_column(primary_key=True)
      status: Mapped[str] = mapped_column(String(20))
      total: Mapped[int] = mapped_column(default=0)
      created_at: Mapped[datetime] = mapped_column(DateTime)
+     customer_id: Mapped[Optional[int]] = mapped_column(
+         ForeignKey("customers.id"), nullable=True
+     )
+     customer: Mapped[Optional["Customer"]] = relationship()
      lines: Mapped[List["OrderLine"]] = relationship(back_populates="order")
 
      def __repr__(self) -> str:
@@ -61,14 +77,41 @@ class OrderLine(Base):
  def seed(session: Session) -> None:
      session.add_all(
          [
+             Customer(id=1, name="Joy"),
+             Customer(id=2, name="Bob"),
+         ]
+     )
+     session.flush()
+     session.add_all(
+         [
+             Order(
+                 id=1,
+                 status="CONFIRMED",
+                 total=100,
+                 created_at=datetime(2026, 1, 15),
+                 customer_id=1,
+             ),
+             Order(
+                 id=2,
+                 status="CONFIRMED",
+                 total=200,
+                 created_at=datetime(2026, 2, 20),
+                 customer_id=2,
+             ),
              Order(
-                 id=1, status="CONFIRMED", total=100, created_at=datetime(2026, 1, 15)
+                 id=3,
+                 status="DRAFT",
+                 total=50,
+                 created_at=datetime(2026, 1, 25),
+                 customer_id=1,
              ),
              Order(
-                 id=2, status="CONFIRMED", total=200, created_at=datetime(2026, 2, 20)
+                 id=4,
+                 status="CANCELLED",
+                 total=75,
+                 created_at=datetime(2026, 3, 10),
+                 customer_id=None,
              ),
-             Order(id=3, status="DRAFT", total=50, created_at=datetime(2026, 1, 25)),
-             Order(id=4, status="CANCELLED", total=75, created_at=datetime(2026, 3, 10)),
          ]
      )
      session.flush()
@@ -296,9 +339,34 @@ def main() -> None:
          ),
      )
 
-     # ── Section 8: list() options ─────────────────────────────────────
+     # ── Section 8: M2O field — EXISTS on referenced table ────────────
+
+     section("8. M2O field filtering (EXISTS on referenced table)")
+
+     show(
+         "orders where customer.name = 'Joy'",
+         engine.list(session, domain=[["customer.name", "=", "Joy"]]),
+     )
+
+     show(
+         "orders where customer.name ilike '%ob%'",
+         engine.list(session, domain=[["customer.name", "ilike", "%ob%"]]),
+     )
+
+     show(
+         "CONFIRMED orders for Joy (M2O + scalar)",
+         engine.list(
+             session,
+             domain=[
+                 ["customer.name", "=", "Joy"],
+                 ["status", "=", "CONFIRMED"],
+             ],
+         ),
+     )
+
+     # ── Section 9: list() options ─────────────────────────────────────
 
-     section("8. list() — limit, offset, order_by")
+     section("9. list() — limit, offset, order_by")
 
      show(
          "top 2 by total desc",
@@ -312,7 +380,7 @@ def main() -> None:
 
      # ── Section 9: read_group — basic ─────────────────────────────────
 
-     section("9. read_group — basic groupby + count")
+     section("10. read_group — basic groupby + count")
 
      groups, total = engine.read_group(
          session,
@@ -324,7 +392,7 @@ def main() -> None:
 
      # ── Section 10: read_group with domain ───────────────────────────
 
-     section("10. read_group with domain filter")
+     section("11. read_group with domain filter")
 
      groups, total = engine.read_group(
          session,
@@ -336,7 +404,7 @@ def main() -> None:
 
      # ── Section 11: read_group — date granularity ─────────────────────
 
-     section("11. read_group — date granularity (month)")
+     section("12. read_group — date granularity (month)")
 
      groups, total = engine.read_group(
          session,
@@ -348,7 +416,7 @@ def main() -> None:
 
      # ── Section 12: read_group — child aggregate (LEFT JOIN) ──────────
 
-     section("12. read_group — child aggregate (LEFT JOIN)")
+     section("13. read_group — child aggregate (LEFT JOIN)")
 
      groups, total = engine.read_group(
          session,
@@ -359,7 +427,7 @@ def main() -> None:
 
      # ── Section 13: read_group — HAVING ──────────────────────────────
 
-     section("13. read_group — HAVING filter on aggregate")
+     section("14. read_group — HAVING filter on aggregate")
 
      groups, total = engine.read_group(
          session,
@@ -371,7 +439,7 @@ def main() -> None:
 
      # ── Section 14: __domain drill-down ──────────────────────────────
 
-     section("14. __domain drill-down — group → records")
+     section("15. __domain drill-down — group → records")
 
      groups, _ = engine.read_group(
          session,
@@ -1,6 +1,6 @@
  [project]
  name = "AxiomQuery"
- version = "0.1.0"
+ version = "0.3.0"
  description = "Specification-based query and aggregation engine for SQLAlchemy 2.0 ORM models"
  readme = "README.md"
  license = { file = "LICENSE" }
@@ -1,6 +1,6 @@
  """axiom_query — standalone specification-based query engine for SQLAlchemy ORM models."""
 
- __version__ = "0.1.0"
+ __version__ = "0.3.0"
 
  from axiom_query.engine import QueryEngine
  from axiom_query.errors import QueryError
@@ -94,22 +94,39 @@ def _make_table_resolver(schema: ModelSchema) -> SAResolver:
 
      def resolve(fp: str, op: Op, val: Any) -> ColumnElement:
          if "." in fp:
-             child_name, field_name = fp.split(".", 1)
+             rel_name, field_name = fp.split(".", 1)
              from axiom_query.errors import QueryError
 
-             child = schema.children.get(child_name)
-             if child is None:
-                 raise QueryError(
-                     "INVALID_FILTER_FIELD",
-                     f"No child relation '{child_name}' on {schema.model_class.__name__}",
+             # O2M: FK is on the child table; use EXISTS over child rows
+             child = schema.children.get(rel_name)
+             if child is not None:
+                 fk_col = child.table.c[child.fk_field]
+                 field_col = child.table.c[field_name]
+                 condition = _apply_operator(field_col, op, val)
+                 return exists(
+                     select(1)
+                     .select_from(child.table)
+                     .where(and_(fk_col == schema.table.c.id, condition))
                  )
-             fk_col = child.table.c[child.fk_field]
-             field_col = child.table.c[field_name]
-             condition = _apply_operator(field_col, op, val)
-             return exists(
-                 select(1)
-                 .select_from(child.table)
-                 .where(and_(fk_col == schema.table.c.id, condition))
+
+             # M2O: FK is on the current table; use EXISTS over the referenced table
+             related = schema.related.get(rel_name)
+             if related is not None:
+                 local_fk_col = schema.table.c[related.fk_field]
+                 ref_pk = next(iter(related.table.primary_key))
+                 field_col = related.table.c[field_name]
+                 condition = _apply_operator(field_col, op, val)
+                 return exists(
+                     select(1)
+                     .select_from(related.table)
+                     .where(and_(ref_pk == local_fk_col, condition))
+                 )
+
+             all_relations = list(schema.children.keys()) + list(schema.related.keys())
+             raise QueryError(
+                 "INVALID_FILTER_FIELD",
+                 f"No relation '{rel_name}' on {schema.model_class.__name__}. "
+                 f"Available: {', '.join(all_relations) or 'none'}",
              )
          else:
              col = _resolve_column(schema, fp)
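The two branches above differ only in which side holds the foreign key: O2M correlates child rows back to the parent's primary key, M2O correlates the referenced table's primary key to the local FK column. Evaluated over plain dicts instead of SQL, the semantics look roughly like this (all table and column names below are invented; the real compiler emits `EXISTS (SELECT 1 ...)` subqueries):

```python
# Pure-Python reading of the two EXISTS shapes the resolver emits.
customers = [{"id": 1, "name": "Joy"}, {"id": 2, "name": "Bob"}]
orders = [
    {"id": 1, "customer_id": 1},
    {"id": 2, "customer_id": 2},
    {"id": 3, "customer_id": None},
]
lines = [{"order_id": 1, "quantity": 5}, {"order_id": 2, "quantity": 1}]

def o2m_exists(order, predicate):
    # O2M: FK lives on the child table -> EXISTS over child rows
    return any(ln["order_id"] == order["id"] and predicate(ln) for ln in lines)

def m2o_exists(order, predicate):
    # M2O: FK lives on the current table -> EXISTS over the referenced table
    return any(c["id"] == order["customer_id"] and predicate(c) for c in customers)

joy_orders = [o["id"] for o in orders if m2o_exists(o, lambda c: c["name"] == "Joy")]
bulky = [o["id"] for o in orders if o2m_exists(o, lambda ln: ln["quantity"] > 2)]
```

Note that the order with a NULL `customer_id` simply fails the M2O EXISTS, which matches SQL semantics: `EXISTS` over a non-matching correlation is false, so NULL FKs are filtered out rather than raising.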
@@ -15,6 +15,9 @@ from axiom_query.parser import parse_domain
  from axiom_query.schema import ModelSchema, derive_schema
 
 
+ DEFAULT_PREFETCH = 1000
+
+
  def _get_dialect_name(session) -> str:
      """Extract dialect name from a sync or async SQLAlchemy session."""
      # AsyncSession has a .bind attribute (AsyncEngine)
@@ -38,7 +41,14 @@ class QueryEngine:
      Usage::
 
          engine = QueryEngine(Order)
-         records = engine.list(session, domain=[["status", "=", "CONFIRMED"]])
+
+         # Materialised page
+         page = engine.list(session, domain=[["status", "=", "CONFIRMED"]], limit=20)
+
+         # Streaming iteration over every match (no len/indexing)
+         for order in engine.search(session, domain=[["status", "=", "CONFIRMED"]]):
+             process(order)
+
          groups, total = engine.read_group(session, groupby=["status"], aggregates=["__count"])
      """
@@ -46,6 +56,21 @@ class QueryEngine:
          self._model = model_class
          self._schema: ModelSchema = derive_schema(model_class)
 
+     # ── Statement builder ────────────────────────────────────────────────
+
+     def _build_stmt(self, domain, order_by, limit, offset):
+         stmt = select(self._model)
+         if domain is not None:
+             spec = parse_domain(domain)
+             stmt = stmt.where(compile_domain(spec, self._schema))
+         if order_by is not None:
+             stmt = self._apply_order_by(stmt, order_by)
+         if limit is not None:
+             stmt = stmt.limit(limit)
+         if offset is not None:
+             stmt = stmt.offset(offset)
+         return stmt
+
      # ── Sync API ─────────────────────────────────────────────────────────
 
      def list(
@@ -56,24 +81,36 @@ class QueryEngine:
          offset: int | None = None,
          order_by: list | None = None,
      ) -> list:
-         """Return all records matching the optional domain filter."""
-         stmt = select(self._model)
+         """Return all records matching the optional domain filter as a list.
 
-         if domain is not None:
-             spec = parse_domain(domain)
-             where = compile_domain(spec, self._schema)
-             stmt = stmt.where(where)
+         Materialises the full result. Use ``search()`` for streaming over large
+         result sets.
+         """
+         stmt = self._build_stmt(domain, order_by, limit, offset)
+         return list(session.execute(stmt).scalars().all())
 
-         if order_by is not None:
-             stmt = self._apply_order_by(stmt, order_by)
+     def search(
+         self,
+         session: Session,
+         domain: Any = None,
+         order_by: list | None = None,
+     ):
+         """Stream records for batch processing.
 
-         if limit is not None:
-             stmt = stmt.limit(limit)
-         if offset is not None:
-             stmt = stmt.offset(offset)
+         Returns an iterator yielding ORM instances from a server-side cursor,
+         fetched in batches of ``DEFAULT_PREFETCH`` (1000) rows. Single-pass —
+         iterate once and don't store the iterator for re-use.
 
-         result = session.execute(stmt)
-         return list(result.scalars().all())
+         No ``limit`` / ``offset``: this method is for processing every matching
+         row. Use ``list()`` if you need pagination, ``len()``, or indexing.
+
+         Driver note: true streaming requires a database driver with server-side
+         cursor support (psycopg2, asyncpg). SQLite degrades to client-side
+         iteration but remains correct.
+         """
+         stmt = self._build_stmt(domain, order_by, limit=None, offset=None)
+         streaming_stmt = stmt.execution_options(yield_per=DEFAULT_PREFETCH)
+         return iter(session.scalars(streaming_stmt))
 
      def read_group(
          self,
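The `yield_per` execution option behind `search()` amounts to pulling rows from the cursor in fixed-size batches while handing them to the caller one at a time. A minimal stand-alone sketch of that prefetch pattern (`stream` and `fetch_batch` are invented stand-ins, not AxiomQuery or SQLAlchemy APIs):

```python
# Sketch of batch-prefetch streaming: the consumer sees one row at a time,
# the source is hit once per batch of `prefetch` rows.
from typing import Callable, Iterator, Sequence

def stream(fetch_batch: Callable[[int, int], Sequence[int]],
           prefetch: int = 1000) -> Iterator[int]:
    offset = 0
    while True:
        batch = fetch_batch(offset, prefetch)
        if not batch:
            return
        yield from batch  # single-pass: caller iterates exactly once
        offset += len(batch)

rows = list(range(10))
calls: list[int] = []

def fetch_batch(offset: int, n: int) -> Sequence[int]:
    calls.append(offset)  # record each round trip to the "database"
    return rows[offset:offset + n]

seen = list(stream(fetch_batch, prefetch=4))  # → rows in order, 4 fetches
```

With `prefetch=4` over ten rows the source is hit four times (offsets 0, 4, 8, and a final empty fetch at 10), which is the trade-off `DEFAULT_PREFETCH` tunes: fewer round trips for larger batches, bounded memory either way.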
@@ -128,24 +165,34 @@ class QueryEngine:
         offset: int | None = None,
         order_by: list | None = None,
     ) -> list:
-        """Async variant of list()."""
-        stmt = select(self._model)
+        """Async variant of ``list()`` — returns a materialised list."""
+        stmt = self._build_stmt(domain, order_by, limit, offset)
+        result = await session.execute(stmt)
+        return list(result.scalars().all())
 
-        if domain is not None:
-            spec = parse_domain(domain)
-            where = compile_domain(spec, self._schema)
-            stmt = stmt.where(where)
+    async def asearch(
+        self,
+        session,
+        domain: Any = None,
+        order_by: list | None = None,
+    ):
+        """Async variant of ``search()`` — returns an async iterator.
 
-        if order_by is not None:
-            stmt = self._apply_order_by(stmt, order_by)
+        Consume with ``async for``::
 
-        if limit is not None:
-            stmt = stmt.limit(limit)
-        if offset is not None:
-            stmt = stmt.offset(offset)
+            async for record in await engine.asearch(session, domain=...):
+                process(record)
 
-        result = await session.execute(stmt)
-        return list(result.scalars().all())
+        Streams ORM instances in batches of ``DEFAULT_PREFETCH`` (1000) rows
+        from a server-side cursor. Single-pass; no ``limit`` / ``offset``.
+
+        Driver note: true streaming requires a driver with server-side cursor
+        support (asyncpg). aiosqlite iterates correctly but without
+        driver-level streaming.
+        """
+        stmt = self._build_stmt(domain, order_by, limit=None, offset=None)
+        streaming_stmt = stmt.execution_options(yield_per=DEFAULT_PREFETCH)
+        return await session.stream_scalars(streaming_stmt)
 
     async def aread_group(
         self,
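
The streaming contract the ``search()`` / ``asearch()`` docstrings describe (rows pulled in ``DEFAULT_PREFETCH``-sized batches, yielded one at a time, single pass) can be sketched in stdlib Python; `stream_rows` and `fetch_batch` are illustrative names, not library internals:

```python
# Stdlib sketch of yield_per-style streaming: rows are fetched in
# prefetch-sized batches but yielded one at a time, and the resulting
# iterator is single-pass. Illustrative only, not the library's code.
from collections.abc import Iterator

DEFAULT_PREFETCH = 1000

def stream_rows(fetch_batch, prefetch: int = DEFAULT_PREFETCH) -> Iterator:
    """Yield rows one at a time while pulling them in prefetch-sized batches."""
    offset = 0
    while True:
        batch = fetch_batch(offset, prefetch)
        if not batch:
            return
        yield from batch
        offset += len(batch)

# Fake "driver": 2500 rows served in slices, like a server-side cursor.
rows = list(range(2500))
calls = []

def fetch_batch(offset, limit):
    calls.append((offset, limit))
    return rows[offset:offset + limit]

streamed = list(stream_rows(fetch_batch))
assert streamed == rows  # every row, in order
assert len(calls) == 4   # three non-empty batches plus one empty probe
```

Memory stays bounded by the batch size rather than the result-set size, which is the point of preferring `search()` over `list()` for full-table passes.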
@@ -17,12 +17,27 @@ class ChildSchema:
     columns: dict[str, Column]
 
 
+@dataclass
+class RelatedSchema:
+    """Represents a Many-to-One (M2O) relationship on the inspected model.
+
+    The FK column lives on the *current* table (e.g. orders.customer_id),
+    and the referenced table holds the PK that FK points to.
+    """
+
+    name: str
+    table: Table
+    fk_field: str  # FK column name on the current (owning) table
+    columns: dict[str, Column]
+
+
 @dataclass
 class ModelSchema:
     model_class: type
     table: Table
     columns: dict[str, Column]
     children: dict[str, ChildSchema] = field(default_factory=dict)
+    related: dict[str, RelatedSchema] = field(default_factory=dict)
 
 
 def derive_schema(model_class: type) -> ModelSchema:
@@ -32,24 +47,35 @@ def derive_schema(model_class: type) -> ModelSchema:
     columns = {col.key: col for col in mapper.columns}
 
     children: dict[str, ChildSchema] = {}
+    related: dict[str, RelatedSchema] = {}
     for rel_name, rel in mapper.relationships.items():
         if rel.direction == RelationshipDirection.ONETOMANY:
             child_table = rel.mapper.local_table
             child_columns = {col.key: col for col in rel.mapper.columns}
-            # Find the FK column on the child table via synchronize_pairs
             # synchronize_pairs: list of (parent_col, child_col) tuples
             _, child_fk_col = next(iter(rel.synchronize_pairs))
-            fk_field = child_fk_col.key
             children[rel_name] = ChildSchema(
                 name=rel_name,
                 table=child_table,
-                fk_field=fk_field,
+                fk_field=child_fk_col.key,
                 columns=child_columns,
             )
+        elif rel.direction == RelationshipDirection.MANYTOONE:
+            ref_table = rel.mapper.local_table
+            ref_columns = {col.key: col for col in rel.mapper.columns}
+            # synchronize_pairs for M2O: (referenced_pk_col, local_fk_col)
+            _, local_fk_col = next(iter(rel.synchronize_pairs))
+            related[rel_name] = RelatedSchema(
+                name=rel_name,
+                table=ref_table,
+                fk_field=local_fk_col.key,
+                columns=ref_columns,
+            )
 
     return ModelSchema(
         model_class=model_class,
         table=table,
         columns=columns,
         children=children,
+        related=related,
     )
@@ -16,12 +16,20 @@ class Base(DeclarativeBase):
     pass
 
 
+class Customer(Base):
+    __tablename__ = "customers"
+    id: Mapped[int] = mapped_column(primary_key=True)
+    name: Mapped[str] = mapped_column(String(100))
+
+
 class Order(Base):
     __tablename__ = "orders"
     id: Mapped[int] = mapped_column(primary_key=True)
     status: Mapped[str] = mapped_column(String(20))
     total: Mapped[int] = mapped_column(default=0)
     created_at: Mapped[datetime] = mapped_column(DateTime)
+    customer_id: Mapped[Optional[int]] = mapped_column(ForeignKey("customers.id"), nullable=True)
+    customer: Mapped[Optional["Customer"]] = relationship()
     lines: Mapped[List["OrderLine"]] = relationship(back_populates="order")
 
 
@@ -47,23 +55,31 @@ def db_engine():
 def seeded_engine(db_engine):
     """Seed test data once per session."""
     with Session(db_engine) as sess:
+        c1 = Customer(id=1, name="Alice")
+        c2 = Customer(id=2, name="Bob")
+        sess.add_all([c1, c2])
+        sess.flush()
+
         o1 = Order(
             id=1,
             status="CONFIRMED",
             total=100,
             created_at=datetime(2026, 1, 15),
+            customer_id=1,
         )
         o2 = Order(
             id=2,
             status="CONFIRMED",
             total=200,
             created_at=datetime(2026, 2, 20),
+            customer_id=2,
         )
         o3 = Order(
             id=3,
             status="DRAFT",
             total=50,
             created_at=datetime(2026, 1, 25),
+            customer_id=None,
        )
         sess.add_all([o1, o2, o3])
         sess.flush()
@@ -108,9 +124,14 @@ async def seeded_async_engine(async_db_engine):
     from sqlalchemy.ext.asyncio import AsyncSession
 
     async with AsyncSession(async_db_engine) as sess:
-        o1 = Order(id=1, status="CONFIRMED", total=100, created_at=datetime(2026, 1, 15))
-        o2 = Order(id=2, status="CONFIRMED", total=200, created_at=datetime(2026, 2, 20))
-        o3 = Order(id=3, status="DRAFT", total=50, created_at=datetime(2026, 1, 25))
+        c1 = Customer(id=1, name="Alice")
+        c2 = Customer(id=2, name="Bob")
+        sess.add_all([c1, c2])
+        await sess.flush()
+
+        o1 = Order(id=1, status="CONFIRMED", total=100, created_at=datetime(2026, 1, 15), customer_id=1)
+        o2 = Order(id=2, status="CONFIRMED", total=200, created_at=datetime(2026, 2, 20), customer_id=2)
+        o3 = Order(id=3, status="DRAFT", total=50, created_at=datetime(2026, 1, 25), customer_id=None)
         sess.add_all([o1, o2, o3])
         await sess.flush()
 
@@ -0,0 +1,82 @@
+"""Tests for QueryEngine.asearch() — async streaming iteration."""
+
+from __future__ import annotations
+
+import pytest
+from sqlalchemy import event
+
+from axiom_query.engine import DEFAULT_PREFETCH
+from conftest import Order
+
+
+@pytest.mark.asyncio
+async def test_asearch_iterates_all_records(async_session, engine):
+    result = await engine.asearch(async_session)
+    records = [r async for r in result]
+    assert len(records) == 3
+    assert all(isinstance(r, Order) for r in records)
+
+
+@pytest.mark.asyncio
+async def test_asearch_with_domain(async_session, engine):
+    result = await engine.asearch(async_session, domain=[["status", "=", "CONFIRMED"]])
+    records = [r async for r in result]
+    assert len(records) == 2
+    assert all(r.status == "CONFIRMED" for r in records)
+
+
+@pytest.mark.asyncio
+async def test_asearch_with_m2o_domain(async_session, engine):
+    result = await engine.asearch(async_session, domain=[["customer.name", "=", "Alice"]])
+    records = [r async for r in result]
+    assert len(records) == 1
+    assert records[0].id == 1
+
+
+@pytest.mark.asyncio
+async def test_asearch_supports_order_by(async_session, engine):
+    result = await engine.asearch(async_session, order_by=[["total", "desc"]])
+    records = [r async for r in result]
+    assert [r.total for r in records] == [200, 100, 50]
+
+
+@pytest.mark.asyncio
+async def test_asearch_empty_result(async_session, engine):
+    result = await engine.asearch(async_session, domain=[["status", "=", "NONEXISTENT"]])
+    records = [r async for r in result]
+    assert records == []
+
+
+@pytest.mark.asyncio
+async def test_asearch_no_pagination_args(async_session, engine):
+    with pytest.raises(TypeError):
+        await engine.asearch(async_session, limit=10)
+    with pytest.raises(TypeError):
+        await engine.asearch(async_session, offset=5)
+
+
+@pytest.mark.asyncio
+async def test_asearch_uses_yield_per(async_session, engine, seeded_async_engine):
+    """Verify the SQL is issued with yield_per=DEFAULT_PREFETCH."""
+    captured = []
+
+    def listener(conn, cursor, statement, parameters, context, executemany):
+        captured.append(context.execution_options)
+
+    # AsyncEngine wraps a sync Engine; events attach to the sync engine
+    sync_engine = seeded_async_engine.sync_engine
+    event.listen(sync_engine, "before_cursor_execute", listener)
+    try:
+        result = await engine.asearch(async_session)
+        records = [r async for r in result]
+        assert len(records) == 3
+    finally:
+        event.remove(sync_engine, "before_cursor_execute", listener)
+
+    assert len(captured) >= 1, f"expected >=1 statement, got {len(captured)}"
+    # Find our SELECT statement (there may be other queries on the connection)
+    yield_per_stmts = [opts for opts in captured if opts.get("yield_per") == DEFAULT_PREFETCH]
+    assert len(yield_per_stmts) == 1, (
+        f"expected 1 statement with yield_per={DEFAULT_PREFETCH}, "
+        f"got {len(yield_per_stmts)}"
+    )
@@ -1,7 +1,9 @@
-"""Tests for QueryEngine.list() — slices 1-5."""
+"""Tests for QueryEngine.list() — slices 1-5 + M2O filtering."""
 
 from __future__ import annotations
 
+import pytest
+
 from conftest import Order
 
 
@@ -51,3 +53,39 @@ def test_list_with_order_by(session, engine):
     records = engine.list(session, order_by=[["total", "desc"]])
     assert records[0].total == 200
     assert records[1].total == 100
+
+
+# Slice 6 — M2O field filtering (EXISTS on referenced table)
+def test_list_filters_by_m2o_field(session, engine):
+    # Order 1 → customer Alice; Order 2 → customer Bob; Order 3 → no customer
+    records = engine.list(session, domain=[["customer.name", "=", "Alice"]])
+    assert len(records) == 1
+    assert records[0].id == 1
+
+
+def test_list_m2o_ilike(session, engine):
+    records = engine.list(session, domain=[["customer.name", "ilike", "%ob%"]])
+    assert len(records) == 1
+    assert records[0].id == 2
+
+
+def test_list_m2o_no_match(session, engine):
+    records = engine.list(session, domain=[["customer.name", "=", "Nobody"]])
+    assert records == []
+
+
+def test_list_m2o_combined_with_scalar(session, engine):
+    # Alice's order is CONFIRMED → should match
+    records = engine.list(
+        session,
+        domain=[["customer.name", "=", "Alice"], ["status", "=", "CONFIRMED"]],
+    )
+    assert len(records) == 1
+    assert records[0].id == 1
+
+
+def test_list_m2o_unknown_relation_raises(session, engine):
+    from axiom_query.errors import QueryError
+
+    with pytest.raises(QueryError):
+        engine.list(session, domain=[["nonexistent.name", "=", "x"]])
@@ -0,0 +1,97 @@
+"""Tests for QueryEngine.search() — streaming iteration."""
+
+from __future__ import annotations
+
+from collections.abc import Iterator
+
+import pytest
+from sqlalchemy import event
+
+from axiom_query.engine import DEFAULT_PREFETCH
+from conftest import Order
+
+
+def test_search_returns_iterator(session, engine):
+    result = engine.search(session)
+    assert isinstance(result, Iterator)
+
+
+def test_search_iterates_all_records(session, engine):
+    records = list(engine.search(session))
+    assert len(records) == 3
+    assert all(isinstance(r, Order) for r in records)
+
+
+def test_search_with_domain(session, engine):
+    records = list(engine.search(session, domain=[["status", "=", "CONFIRMED"]]))
+    assert len(records) == 2
+    assert all(r.status == "CONFIRMED" for r in records)
+
+
+def test_search_with_or_domain(session, engine):
+    records = list(
+        engine.search(
+            session,
+            domain={"or": [["status", "=", "CONFIRMED"], ["status", "=", "DRAFT"]]},
+        )
+    )
+    assert len(records) == 3
+
+
+def test_search_with_m2o_domain(session, engine):
+    records = list(engine.search(session, domain=[["customer.name", "=", "Alice"]]))
+    assert len(records) == 1
+    assert records[0].id == 1
+
+
+def test_search_supports_order_by(session, engine):
+    records = list(engine.search(session, order_by=[["total", "desc"]]))
+    assert [r.total for r in records] == [200, 100, 50]
+
+
+def test_search_empty_result(session, engine):
+    records = list(engine.search(session, domain=[["status", "=", "NONEXISTENT"]]))
+    assert records == []
+
+
+def test_search_no_pagination_args(session, engine):
+    with pytest.raises(TypeError):
+        engine.search(session, limit=10)
+    with pytest.raises(TypeError):
+        engine.search(session, offset=5)
+
+
+def test_search_uses_yield_per(session, engine, seeded_engine):
+    """Verify the SQL is issued with yield_per=DEFAULT_PREFETCH."""
+    captured = []
+
+    def listener(conn, cursor, statement, parameters, context, executemany):
+        captured.append(context.execution_options)
+
+    event.listen(seeded_engine, "before_cursor_execute", listener)
+    try:
+        list(engine.search(session))
+    finally:
+        event.remove(seeded_engine, "before_cursor_execute", listener)
+
+    assert len(captured) == 1, f"expected 1 statement, got {len(captured)}"
+    assert captured[0].get("yield_per") == DEFAULT_PREFETCH
+
+
+def test_search_is_single_pass(session, engine):
+    """Iterating the same result a second time yields nothing (iterator exhausted)."""
+    result = engine.search(session)
+    first_pass = list(result)
+    second_pass = list(result)
+    assert len(first_pass) == 3
+    assert second_pass == []
+
+
+def test_search_break_does_not_block_session(session, engine):
+    """Breaking out of iteration should not leave the session in a bad state."""
+    for record in engine.search(session):
+        if record.id == 1:
+            break
+    # Session should still be usable
+    records = engine.list(session)
+    assert len(records) == 3
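
`test_search_is_single_pass` and the break test pin down plain Python iterator semantics: exhaustion after one full pass, and continued usability after an early break. A minimal stdlib demonstration of the contract those tests rely on:

```python
# A second full pass over an exhausted iterator yields nothing.
it = iter([1, 2, 3])
first = list(it)
second = list(it)
assert first == [1, 2, 3]
assert second == []

# Breaking early leaves the iterator positioned mid-stream, still usable.
it = iter([1, 2, 3])
for x in it:
    if x == 1:
        break
remaining = list(it)
assert remaining == [2, 3]
```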
@@ -1,29 +0,0 @@
-# Changelog
-
-All notable changes to `AxiomQuery` are documented here.
-
-Format follows [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
-Versioning follows [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-
----
-
-## [0.1.0] — 2026-03-29
-
-Initial release.
-
-### Added
-
-- `QueryEngine` — specification-based query facade for any SQLAlchemy 2.0 ORM model
-- `list()` / `alist()` — filtered record retrieval with `limit`, `offset`, `order_by`
-- `read_group()` / `aread_group()` — grouped aggregation (GROUP BY + HAVING) with `__domain` drill-down per group
-- Domain filter DSL — composable JSON expressions: `[field, op, value]`, `{"and": [...]}`, `{"or": [...]}`, `{"not": ...}`
-- Full operator set: `=` `!=` `>` `<` `>=` `<=` `in` `not in` `like` `ilike` `is_null`
-- Child field filtering via EXISTS subquery (dot notation: `"lines.quantity"`)
-- Child field aggregation via LEFT JOIN (`"lines.quantity:sum"`)
-- Date granularity grouping: `day` `week` `month` `quarter` `year`
-- Aggregate functions: `count` `sum` `avg` `min` `max`
-- HAVING filter on aggregate aliases
-- Schema auto-derived from `inspect(model_class)` — O2M relationships are children by convention
-- Sync (`Session`) and async (`AsyncSession`) APIs
-- `QueryError` with `code` and `message` — field validation at compile time, before DB
-- `py.typed` marker — PEP 561 inline type annotations
File without changes
File without changes
File without changes