jerry-thomas 1.0.3__py3-none-any.whl → 2.0.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (194)
  1. datapipeline/analysis/vector/collector.py +0 -1
  2. datapipeline/build/tasks/config.py +0 -2
  3. datapipeline/build/tasks/metadata.py +0 -2
  4. datapipeline/build/tasks/scaler.py +0 -2
  5. datapipeline/build/tasks/schema.py +0 -2
  6. datapipeline/build/tasks/utils.py +0 -2
  7. datapipeline/cli/app.py +201 -81
  8. datapipeline/cli/commands/contract.py +145 -283
  9. datapipeline/cli/commands/demo.py +13 -0
  10. datapipeline/cli/commands/domain.py +4 -4
  11. datapipeline/cli/commands/dto.py +11 -0
  12. datapipeline/cli/commands/filter.py +2 -2
  13. datapipeline/cli/commands/inspect.py +0 -68
  14. datapipeline/cli/commands/list_.py +30 -13
  15. datapipeline/cli/commands/loader.py +11 -0
  16. datapipeline/cli/commands/mapper.py +82 -0
  17. datapipeline/cli/commands/parser.py +45 -0
  18. datapipeline/cli/commands/run_config.py +1 -3
  19. datapipeline/cli/commands/serve_pipeline.py +5 -7
  20. datapipeline/cli/commands/source.py +106 -18
  21. datapipeline/cli/commands/stream.py +292 -0
  22. datapipeline/cli/visuals/common.py +0 -2
  23. datapipeline/cli/visuals/sections.py +0 -2
  24. datapipeline/cli/workspace_utils.py +0 -3
  25. datapipeline/config/context.py +0 -2
  26. datapipeline/config/dataset/feature.py +1 -0
  27. datapipeline/config/metadata.py +0 -2
  28. datapipeline/config/project.py +0 -2
  29. datapipeline/config/resolution.py +10 -2
  30. datapipeline/config/tasks.py +9 -9
  31. datapipeline/domain/feature.py +3 -0
  32. datapipeline/domain/record.py +7 -7
  33. datapipeline/domain/sample.py +0 -2
  34. datapipeline/domain/vector.py +6 -8
  35. datapipeline/integrations/ml/adapter.py +0 -2
  36. datapipeline/integrations/ml/pandas_support.py +0 -2
  37. datapipeline/integrations/ml/rows.py +0 -2
  38. datapipeline/integrations/ml/torch_support.py +0 -2
  39. datapipeline/io/output.py +0 -2
  40. datapipeline/io/serializers.py +26 -16
  41. datapipeline/mappers/synthetic/time.py +9 -2
  42. datapipeline/pipeline/artifacts.py +3 -5
  43. datapipeline/pipeline/observability.py +0 -2
  44. datapipeline/pipeline/pipelines.py +118 -34
  45. datapipeline/pipeline/stages.py +54 -18
  46. datapipeline/pipeline/utils/spool_cache.py +142 -0
  47. datapipeline/pipeline/utils/transform_utils.py +27 -2
  48. datapipeline/services/artifacts.py +1 -4
  49. datapipeline/services/constants.py +1 -0
  50. datapipeline/services/factories.py +4 -6
  51. datapipeline/services/paths.py +10 -1
  52. datapipeline/services/project_paths.py +0 -2
  53. datapipeline/services/runs.py +0 -2
  54. datapipeline/services/scaffold/contract_yaml.py +76 -0
  55. datapipeline/services/scaffold/demo.py +141 -0
  56. datapipeline/services/scaffold/discovery.py +115 -0
  57. datapipeline/services/scaffold/domain.py +21 -13
  58. datapipeline/services/scaffold/dto.py +31 -0
  59. datapipeline/services/scaffold/filter.py +2 -1
  60. datapipeline/services/scaffold/layout.py +96 -0
  61. datapipeline/services/scaffold/loader.py +61 -0
  62. datapipeline/services/scaffold/mapper.py +116 -0
  63. datapipeline/services/scaffold/parser.py +56 -0
  64. datapipeline/services/scaffold/plugin.py +14 -2
  65. datapipeline/services/scaffold/source_yaml.py +91 -0
  66. datapipeline/services/scaffold/stream_plan.py +129 -0
  67. datapipeline/services/scaffold/utils.py +187 -0
  68. datapipeline/sources/data_loader.py +0 -2
  69. datapipeline/sources/decoders.py +49 -8
  70. datapipeline/sources/factory.py +9 -6
  71. datapipeline/sources/foreach.py +18 -3
  72. datapipeline/sources/synthetic/time/parser.py +1 -1
  73. datapipeline/sources/transports.py +10 -4
  74. datapipeline/templates/demo_skeleton/demo/contracts/equity.ohlcv.yaml +33 -0
  75. datapipeline/templates/demo_skeleton/demo/contracts/time.ticks.hour_sin.yaml +22 -0
  76. datapipeline/templates/demo_skeleton/demo/contracts/time.ticks.linear.yaml +22 -0
  77. datapipeline/templates/demo_skeleton/demo/data/APPL.jsonl +19 -0
  78. datapipeline/templates/demo_skeleton/demo/data/MSFT.jsonl +19 -0
  79. datapipeline/templates/demo_skeleton/demo/dataset.yaml +19 -0
  80. datapipeline/templates/demo_skeleton/demo/postprocess.yaml +19 -0
  81. datapipeline/templates/demo_skeleton/demo/project.yaml +19 -0
  82. datapipeline/templates/demo_skeleton/demo/sources/sandbox.ohlcv.yaml +17 -0
  83. datapipeline/templates/{plugin_skeleton/example → demo_skeleton/demo}/sources/synthetic.ticks.yaml +1 -1
  84. datapipeline/templates/demo_skeleton/demo/tasks/metadata.yaml +2 -0
  85. datapipeline/templates/demo_skeleton/demo/tasks/scaler.yaml +3 -0
  86. datapipeline/templates/demo_skeleton/demo/tasks/schema.yaml +2 -0
  87. datapipeline/templates/demo_skeleton/demo/tasks/serve.test.yaml +4 -0
  88. datapipeline/templates/demo_skeleton/demo/tasks/serve.train.yaml +4 -0
  89. datapipeline/templates/demo_skeleton/demo/tasks/serve.val.yaml +4 -0
  90. datapipeline/templates/demo_skeleton/scripts/run_dataframe.py +20 -0
  91. datapipeline/templates/demo_skeleton/scripts/run_torch.py +23 -0
  92. datapipeline/templates/demo_skeleton/src/{{PACKAGE_NAME}}/__init__.py +0 -0
  93. datapipeline/templates/demo_skeleton/src/{{PACKAGE_NAME}}/domains/equity/__init__.py +0 -0
  94. datapipeline/templates/demo_skeleton/src/{{PACKAGE_NAME}}/domains/equity/model.py +18 -0
  95. datapipeline/templates/demo_skeleton/src/{{PACKAGE_NAME}}/dtos/__init__.py +0 -0
  96. datapipeline/templates/demo_skeleton/src/{{PACKAGE_NAME}}/dtos/sandbox_ohlcv_dto.py +14 -0
  97. datapipeline/templates/demo_skeleton/src/{{PACKAGE_NAME}}/mappers/__init__.py +0 -0
  98. datapipeline/templates/demo_skeleton/src/{{PACKAGE_NAME}}/mappers/map_sandbox_ohlcv_dto_to_equity.py +26 -0
  99. datapipeline/templates/demo_skeleton/src/{{PACKAGE_NAME}}/parsers/__init__.py +0 -0
  100. datapipeline/templates/demo_skeleton/src/{{PACKAGE_NAME}}/parsers/sandbox_ohlcv_dto_parser.py +46 -0
  101. datapipeline/templates/plugin_skeleton/README.md +57 -136
  102. datapipeline/templates/plugin_skeleton/jerry.yaml +12 -24
  103. datapipeline/templates/plugin_skeleton/reference/jerry.yaml +28 -0
  104. datapipeline/templates/plugin_skeleton/reference/reference/contracts/composed.reference.yaml +29 -0
  105. datapipeline/templates/plugin_skeleton/reference/reference/contracts/ingest.reference.yaml +31 -0
  106. datapipeline/templates/plugin_skeleton/reference/reference/contracts/overview.reference.yaml +34 -0
  107. datapipeline/templates/plugin_skeleton/reference/reference/dataset.yaml +29 -0
  108. datapipeline/templates/plugin_skeleton/reference/reference/postprocess.yaml +25 -0
  109. datapipeline/templates/plugin_skeleton/reference/reference/project.yaml +32 -0
  110. datapipeline/templates/plugin_skeleton/reference/reference/sources/foreach.http.reference.yaml +24 -0
  111. datapipeline/templates/plugin_skeleton/reference/reference/sources/foreach.reference.yaml +21 -0
  112. datapipeline/templates/plugin_skeleton/reference/reference/sources/fs.reference.yaml +16 -0
  113. datapipeline/templates/plugin_skeleton/reference/reference/sources/http.reference.yaml +17 -0
  114. datapipeline/templates/plugin_skeleton/reference/reference/sources/overview.reference.yaml +18 -0
  115. datapipeline/templates/plugin_skeleton/reference/reference/sources/synthetic.reference.yaml +15 -0
  116. datapipeline/templates/plugin_skeleton/reference/reference/tasks/metadata.reference.yaml +11 -0
  117. datapipeline/templates/plugin_skeleton/reference/reference/tasks/scaler.reference.yaml +10 -0
  118. datapipeline/templates/plugin_skeleton/reference/reference/tasks/schema.reference.yaml +10 -0
  119. datapipeline/templates/plugin_skeleton/reference/reference/tasks/serve.reference.yaml +28 -0
  120. datapipeline/templates/plugin_skeleton/src/{{PACKAGE_NAME}}/domains/__init__.py +2 -0
  121. datapipeline/templates/plugin_skeleton/src/{{PACKAGE_NAME}}/dtos/__init__.py +0 -0
  122. datapipeline/templates/plugin_skeleton/src/{{PACKAGE_NAME}}/loaders/__init__.py +0 -0
  123. datapipeline/templates/plugin_skeleton/src/{{PACKAGE_NAME}}/mappers/__init__.py +1 -0
  124. datapipeline/templates/plugin_skeleton/src/{{PACKAGE_NAME}}/parsers/__init__.py +0 -0
  125. datapipeline/templates/plugin_skeleton/your-dataset/dataset.yaml +12 -11
  126. datapipeline/templates/plugin_skeleton/your-dataset/postprocess.yaml +4 -13
  127. datapipeline/templates/plugin_skeleton/your-dataset/project.yaml +9 -11
  128. datapipeline/templates/plugin_skeleton/your-dataset/tasks/metadata.yaml +1 -2
  129. datapipeline/templates/plugin_skeleton/your-dataset/tasks/scaler.yaml +1 -7
  130. datapipeline/templates/plugin_skeleton/your-dataset/tasks/schema.yaml +1 -1
  131. datapipeline/templates/plugin_skeleton/your-dataset/tasks/serve.test.yaml +1 -1
  132. datapipeline/templates/plugin_skeleton/your-dataset/tasks/serve.train.yaml +1 -25
  133. datapipeline/templates/plugin_skeleton/your-dataset/tasks/serve.val.yaml +1 -1
  134. datapipeline/templates/plugin_skeleton/your-interim-data-builder/dataset.yaml +9 -0
  135. datapipeline/templates/plugin_skeleton/your-interim-data-builder/postprocess.yaml +1 -0
  136. datapipeline/templates/plugin_skeleton/your-interim-data-builder/project.yaml +15 -0
  137. datapipeline/templates/plugin_skeleton/your-interim-data-builder/tasks/serve.all.yaml +8 -0
  138. datapipeline/templates/stubs/contracts/composed.yaml.j2 +10 -0
  139. datapipeline/templates/stubs/contracts/ingest.yaml.j2 +25 -0
  140. datapipeline/templates/stubs/dto.py.j2 +2 -2
  141. datapipeline/templates/stubs/filter.py.j2 +1 -1
  142. datapipeline/templates/stubs/loaders/basic.py.j2 +11 -0
  143. datapipeline/templates/stubs/mappers/composed.py.j2 +13 -0
  144. datapipeline/templates/stubs/mappers/ingest.py.j2 +20 -0
  145. datapipeline/templates/stubs/parser.py.j2 +5 -1
  146. datapipeline/templates/stubs/record.py.j2 +1 -1
  147. datapipeline/templates/stubs/source.yaml.j2 +1 -1
  148. datapipeline/transforms/debug/identity.py +34 -16
  149. datapipeline/transforms/debug/lint.py +14 -11
  150. datapipeline/transforms/feature/scaler.py +5 -12
  151. datapipeline/transforms/filter.py +73 -17
  152. datapipeline/transforms/interfaces.py +58 -0
  153. datapipeline/transforms/record/floor_time.py +10 -7
  154. datapipeline/transforms/record/lag.py +8 -10
  155. datapipeline/transforms/sequence.py +2 -3
  156. datapipeline/transforms/stream/dedupe.py +5 -7
  157. datapipeline/transforms/stream/ensure_ticks.py +39 -24
  158. datapipeline/transforms/stream/fill.py +34 -25
  159. datapipeline/transforms/stream/filter.py +25 -0
  160. datapipeline/transforms/stream/floor_time.py +16 -0
  161. datapipeline/transforms/stream/granularity.py +52 -30
  162. datapipeline/transforms/stream/lag.py +17 -0
  163. datapipeline/transforms/stream/rolling.py +72 -0
  164. datapipeline/transforms/utils.py +42 -10
  165. datapipeline/transforms/vector/drop/horizontal.py +0 -3
  166. datapipeline/transforms/vector/drop/orchestrator.py +0 -3
  167. datapipeline/transforms/vector/drop/vertical.py +0 -2
  168. datapipeline/transforms/vector/ensure_schema.py +0 -2
  169. datapipeline/utils/paths.py +0 -2
  170. datapipeline/utils/placeholders.py +0 -2
  171. datapipeline/utils/rich_compat.py +0 -3
  172. datapipeline/utils/window.py +0 -2
  173. jerry_thomas-2.0.1.dist-info/METADATA +269 -0
  174. jerry_thomas-2.0.1.dist-info/RECORD +264 -0
  175. {jerry_thomas-1.0.3.dist-info → jerry_thomas-2.0.1.dist-info}/WHEEL +1 -1
  176. {jerry_thomas-1.0.3.dist-info → jerry_thomas-2.0.1.dist-info}/entry_points.txt +7 -3
  177. datapipeline/services/scaffold/mappers.py +0 -55
  178. datapipeline/services/scaffold/source.py +0 -191
  179. datapipeline/templates/plugin_skeleton/example/contracts/time.ticks.hour_sin.yaml +0 -31
  180. datapipeline/templates/plugin_skeleton/example/contracts/time.ticks.linear.yaml +0 -30
  181. datapipeline/templates/plugin_skeleton/example/dataset.yaml +0 -18
  182. datapipeline/templates/plugin_skeleton/example/postprocess.yaml +0 -29
  183. datapipeline/templates/plugin_skeleton/example/project.yaml +0 -23
  184. datapipeline/templates/plugin_skeleton/example/tasks/metadata.yaml +0 -3
  185. datapipeline/templates/plugin_skeleton/example/tasks/scaler.yaml +0 -9
  186. datapipeline/templates/plugin_skeleton/example/tasks/schema.yaml +0 -2
  187. datapipeline/templates/plugin_skeleton/example/tasks/serve.test.yaml +0 -4
  188. datapipeline/templates/plugin_skeleton/example/tasks/serve.train.yaml +0 -28
  189. datapipeline/templates/plugin_skeleton/example/tasks/serve.val.yaml +0 -4
  190. datapipeline/templates/stubs/mapper.py.j2 +0 -22
  191. jerry_thomas-1.0.3.dist-info/METADATA +0 -827
  192. jerry_thomas-1.0.3.dist-info/RECORD +0 -198
  193. {jerry_thomas-1.0.3.dist-info → jerry_thomas-2.0.1.dist-info}/licenses/LICENSE +0 -0
  194. {jerry_thomas-1.0.3.dist-info → jerry_thomas-2.0.1.dist-info}/top_level.txt +0 -0
datapipeline/transforms/stream/rolling.py
@@ -0,0 +1,72 @@
+ from collections import deque
+ from itertools import groupby
+ from statistics import mean, median
+ from typing import Iterator
+
+ from datapipeline.domain.record import TemporalRecord
+ from datapipeline.transforms.interfaces import FieldStreamTransformBase
+ from datapipeline.transforms.utils import (
+     get_field,
+     is_missing,
+     clone_record_with_field,
+     partition_key,
+ )
+
+
+ class RollingTransformer(FieldStreamTransformBase):
+     """Compute a rolling statistic over record field values.
+
+     - window: number of recent ticks to consider (including missing ticks).
+     - min_samples: minimum number of valid samples required to emit a value.
+     - statistic: 'mean' (default) or 'median'.
+     - field: record attribute to read.
+     - to: record attribute to write (defaults to field).
+     """
+
+     def __init__(
+         self,
+         *,
+         field: str,
+         to: str | None = None,
+         window: int,
+         min_samples: int | None = None,
+         statistic: str = "mean",
+         partition_by: str | list[str] | None = None,
+     ) -> None:
+         super().__init__(field=field, to=to, partition_by=partition_by)
+         if window <= 0:
+             raise ValueError("window must be a positive integer")
+         if min_samples is None:
+             min_samples = window
+         if min_samples <= 0:
+             raise ValueError("min_samples must be positive")
+         if statistic == "mean":
+             self.statistic = mean
+         elif statistic == "median":
+             self.statistic = median
+         else:
+             raise ValueError(f"Unsupported statistic: {statistic!r}")
+
+         self.window = window
+         self.min_samples = min_samples
+
+     def apply(self, stream: Iterator[TemporalRecord]) -> Iterator[TemporalRecord]:
+         grouped = groupby(stream, key=lambda rec: partition_key(rec, self.partition_by))
+
+         for _, records in grouped:
+             tick_window: deque[float | None] = deque(maxlen=self.window)
+
+             for record in records:
+                 value = get_field(record, self.field)
+                 if is_missing(value):
+                     tick_window.append(None)
+                 else:
+                     tick_window.append(float(value))
+
+                 valid_vals = [v for v in tick_window if v is not None]
+                 if len(valid_vals) >= self.min_samples:
+                     rolled = float(self.statistic(valid_vals))
+                 else:
+                     rolled = None
+
+                 yield clone_record_with_field(record, self.to, rolled)
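For orientation, here is a minimal, hypothetical driver for the new transformer; it is a sketch, not code from the package. It assumes `FieldStreamTransformBase` (added in `interfaces.py`, not shown in this diff) simply stores the `field`/`to`/`partition_by` arguments as attributes, and it feeds dict records because the helpers in `transforms/utils.py` accept dicts as well as dataclasses.

```python
# Hypothetical usage sketch for RollingTransformer (illustrative only).
from datetime import datetime, timezone

from datapipeline.transforms.stream.rolling import RollingTransformer

records = [
    {"symbol": "MSFT", "time": datetime(2024, 1, 1, h, tzinfo=timezone.utc), "value": v}
    for h, v in [(0, 10.0), (1, 11.0), (2, 13.0), (3, 12.0)]
]

rolling = RollingTransformer(
    field="value",          # attribute to read
    to="value_mean_3",      # write the statistic to a new field
    window=3,               # consider the 3 most recent ticks
    min_samples=2,          # emit None until 2 valid samples are in the window
    statistic="mean",
    partition_by="symbol",  # one rolling window per symbol
)

for rec in rolling.apply(iter(records)):
    print(rec["time"].isoformat(), rec["value"], rec["value_mean_3"])
# The first tick has only one valid sample in the window, so value_mean_3 is
# None; later ticks carry the mean of up to the last three values.
```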
datapipeline/transforms/utils.py
@@ -1,6 +1,6 @@
- import logging
+ import copy
  import math
- from dataclasses import is_dataclass, replace
+ from dataclasses import is_dataclass
  from typing import Any


@@ -12,15 +12,47 @@ def is_missing(value) -> bool:
      return False


- def clone_record_with_value(record: Any, value: Any) -> Any:
-     """Return a shallow clone of *record* with its numeric value updated."""
+ def get_field(record: Any, field: str) -> Any:
+     if isinstance(record, dict):
+         return record.get(field)
+     return getattr(record, field, None)

-     if hasattr(record, "value"):
-         if is_dataclass(record):
-             return replace(record, value=value)

-         cloned = type(record)(**record.__dict__)
-         cloned.value = value
+ def partition_key(record: Any, partition_by: str | list[str] | None) -> tuple:
+     if not partition_by:
+         return ()
+     if isinstance(partition_by, str):
+         return (get_field(record, partition_by),)
+     return tuple(get_field(record, field) for field in partition_by)
+
+
+ def clone_record(record: Any, **updates: Any) -> Any:
+     """Return a shallow clone of record with updated fields."""
+     if is_dataclass(record):
+         cloned = copy.copy(record)
+         for key, value in updates.items():
+             setattr(cloned, key, value)
+         post_init = getattr(cloned, "__post_init__", None)
+         if callable(post_init) and "time" in updates:
+             post_init()
          return cloned
+     if isinstance(record, dict):
+         cloned = dict(record)
+         cloned.update(updates)
+         return cloned
+     cloned = type(record)(**record.__dict__)
+     for key, value in updates.items():
+         setattr(cloned, key, value)
+     return cloned
+
+
+ def clone_record_with_field(record: Any, field: str, value: Any) -> Any:
+     """Return a shallow clone of record with a specific field updated."""
+     return clone_record(record, **{field: value})
+
+
+ def floor_record_time(record: Any, cadence: str) -> Any:
+     """Return a cloned record with time floored to cadence."""
+     from datapipeline.config.dataset.normalize import floor_time_to_bucket

-     raise TypeError(f"clone_record_with_value expects an object with 'value'; got {type(record)!r}")
+     return clone_record(record, time=floor_time_to_bucket(record.time, cadence))
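A quick sketch of how the new helpers behave, exercising the dataclass and dict branches shown above (illustrative only; the `Tick` dataclass is a stand-in, not a package type):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

from datapipeline.transforms.utils import (
    clone_record,
    clone_record_with_field,
    get_field,
    partition_key,
)


@dataclass
class Tick:  # stand-in record type for illustration
    symbol: str
    time: datetime
    value: float


tick = Tick("MSFT", datetime(2024, 1, 1, tzinfo=timezone.utc), 10.0)

# Dataclass branch: shallow copy + setattr; the original is left untouched.
scaled = clone_record_with_field(tick, "value", 20.0)
assert (tick.value, scaled.value) == (10.0, 20.0)

# Dict branch: copy the mapping and apply the updates.
row = {"symbol": "MSFT", "value": 10.0}
assert clone_record(row, value=20.0) == {"symbol": "MSFT", "value": 20.0}

# get_field / partition_key treat attributes and dict keys uniformly.
assert get_field(tick, "symbol") == get_field(row, "symbol") == "MSFT"
assert partition_key(tick, ["symbol"]) == ("MSFT",)
assert partition_key(tick, None) == ()
```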
datapipeline/transforms/vector/drop/horizontal.py
@@ -1,5 +1,3 @@
- from __future__ import annotations
-
  from collections.abc import Iterator
  from typing import Literal
 
@@ -76,4 +74,3 @@ class VectorDropHorizontalTransform(VectorPostprocessBase):
              value = vector.values.get(fid)
              total += cell_coverage(value)
          return total / float(len(baseline))
-
datapipeline/transforms/vector/drop/orchestrator.py
@@ -1,5 +1,3 @@
- from __future__ import annotations
-
  from collections.abc import Iterator
  from typing import Literal
 
@@ -56,4 +54,3 @@ class VectorDropTransform:
 
      def apply(self, stream: Iterator[Sample]) -> Iterator[Sample]:
          return getattr(self._impl, "apply")(stream)
-
datapipeline/transforms/vector/drop/vertical.py
@@ -1,5 +1,3 @@
- from __future__ import annotations
-
  from collections.abc import Iterator
  from typing import Literal
 
datapipeline/transforms/vector/ensure_schema.py
@@ -1,5 +1,3 @@
- from __future__ import annotations
-
  from collections import OrderedDict
  from collections.abc import Iterator
  from typing import Any, Literal
datapipeline/utils/paths.py
@@ -1,5 +1,3 @@
- from __future__ import annotations
-
  from pathlib import Path
 
  DEFAULT_BUILD_DIR = Path("build")
datapipeline/utils/placeholders.py
@@ -1,5 +1,3 @@
- from __future__ import annotations
-
  from dataclasses import dataclass
  from typing import Any
 
datapipeline/utils/rich_compat.py
@@ -1,6 +1,3 @@
- from __future__ import annotations
-
-
  def suppress_file_proxy_shutdown_errors() -> None:
      """Patch rich.file_proxy.FileProxy.flush to ignore shutdown ImportErrors.
 
datapipeline/utils/window.py
@@ -1,5 +1,3 @@
- from __future__ import annotations
-
  from datetime import datetime
 
  from datapipeline.services.artifacts import (
jerry_thomas-2.0.1.dist-info/METADATA
@@ -0,0 +1,269 @@
+ Metadata-Version: 2.4
+ Name: jerry-thomas
+ Version: 2.0.1
+ Summary: Jerry-Thomas: a stream-first, plugin-friendly data pipeline (mixology-themed CLI)
+ Author: Anders Skott Lind
+ License: MIT
+ Project-URL: Homepage, https://github.com/mr-lovalova/datapipeline
+ Project-URL: Repository, https://github.com/mr-lovalova/datapipeline
+ Project-URL: Issues, https://github.com/mr-lovalova/datapipeline/issues
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: numpy<3.0,>=1.24
+ Requires-Dist: pydantic>=2.0
+ Requires-Dist: PyYAML>=5.4
+ Requires-Dist: tqdm>=4.0
+ Requires-Dist: jinja2>=3.0
+ Requires-Dist: rich>=13
+ Provides-Extra: ml
+ Requires-Dist: pandas>=2.0; extra == "ml"
+ Requires-Dist: torch>=2.0; extra == "ml"
+ Dynamic: license-file
+
+ # Datapipeline Runtime
+
+ Named after the famous bartender, Jerry Thomas is a time-series-first data
+ pipeline runtime that mixes disparate data sources into fresh, ready-to-serve
+ vectors using declarative YAML recipes. Everything is on-demand, iterator-first:
+ data streams through the pipeline without pre-batching the whole dataset in
+ memory. Like any good bartender, Jerry obsesses over quality control and
+ service, offering stage-by-stage observability along the way. And no bar is
+ complete without proper tools: deterministic artifacts and plugin scaffolding
+ for custom loaders, parsers, transforms, and filters.
+
+ Contributing: PRs welcome on [GitHub](https://github.com/mr-lovalova/datapipeline).
+
+ > **Core assumptions**
+ >
+ > - Every record carries a timezone-aware `time` attribute and a numeric
+ >   `value`. The time-zone awareness is a quality gate to ensure correct vector assembly.
+ > - Grouping is purely temporal. Dimensional splits belong in `partition_by`.
+
+ ---
+
+ ## Why You Might Use It
+
+ - Materialize canonical time-series datasets from disparate sources.
+ - Preview and debug each stage of the pipeline without writing ad-hoc scripts.
+ - Enforce coverage/quality gates and publish artifacts (schema, scaler stats)
+   for downstream ML teams.
+ - Extend the runtime with entry-point driven plugins for domain-specific I/O or
+   feature engineering.
+ - Consume vectors directly from Python via iterators, Pandas DataFrames, or
+   `torch.utils.data.Dataset`.
+
+ ---
+
+ ## Quick Start
+
+ ### Serve The Demo Plugin (Recommended)
+
+ ```bash
+ python -m pip install -U jerry-thomas
+ jerry demo init
+ python -m pip install -e demo
+ jerry serve --dataset demo --limit 3
+ ```
+
+ Note: `jerry demo init` creates a workspace `jerry.yaml`. If you later run
+ `jerry plugin init`, it won’t overwrite that file. Remove or edit
+ `jerry.yaml` (or pass `--project`) to point at your new plugin.
+ For example: `jerry serve --project lib/my-datapipeline/project.yaml`.
+
+ ### Create Your Own Plugin + First Ingest
+
+ ```bash
+ jerry plugin init my-datapipeline --out lib/
+
+ # Note: import paths use the package name (hyphens become underscores), e.g.
+ # `my_datapipeline` even if the dist folder is `my-datapipeline`.
+
+ # One-stop wizard: scaffolds source YAML + DTO/parser + domain + mapper + contract.
+ # See `docs/cli.md` for wizard tips and identity vs custom guidance.
+ jerry inflow create
+
+ # Reinstall after commands that update entry points (pyproject.toml).
+ python -m pip install -e lib/my-datapipeline
+
+ # -> fill in your templates generated by 'jerry inflow create' and get ready to serve
+ jerry serve --limit 3
+ ```
+
+ ---
+
+ ## Pipeline Stages (serve --stage)
+
+ Stages 0-6 operate on a single stream at a time (per feature/target config). Stages 7-8 assemble full vectors across all configured features.
+
+ - Stage 0 (DTO stream)
+   - Input: raw source rows (loader transport + decoder)
+   - Ops: loader -> decoder -> parser (raw -> DTO; return None to drop rows)
+   - Output: DTO objects yielded by the parser
+
+ - Stage 1 (record stream)
+   - Input: DTO stream
+   - Ops: mapper (DTO -> domain TemporalRecord)
+   - Output: TemporalRecord instances (must have timezone-aware `time`)
+
+ - Stage 2 (record transforms)
+   - Input: TemporalRecord stream
+   - Ops: contract `record:` transforms (e.g. filter, floor_time); per-record only (no history)
+   - Output: TemporalRecord stream (possibly filtered/mutated)
+
+ - Stage 3 (ordered record stream)
+   - Input: TemporalRecord stream
+   - Ops:
+     - sort by `(partition_key, record.time)` (batch/in-memory sort; typically the expensive step)
+   - Output: TemporalRecord stream (sorted by partition,time)
+
+ - Stage 4 (stream transforms)
+   - Input: ordered TemporalRecord stream
+   - Ops:
+     - apply contract `stream:` transforms (per-partition history; e.g. ensure_cadence, rolling, fill)
+     - apply contract `debug:` transforms (validation only; e.g. lint)
+   - Output: TemporalRecord stream (sorted by partition,time)
+
+ - Stage 5 (feature stream)
+   - Input: TemporalRecord stream
+   - Ops: wrap each record as `FeatureRecord(id, record, value)`; `id` is derived from:
+     - dataset `id:` (base feature id), and
+     - optional `partition_by:` fields (entity-specific feature ids)
+     - `value` is selected from `dataset.yaml` via `field: <record_attr>`
+   - Output: FeatureRecord stream (sorted by id,time within partitions)
+
+ - Stage 6 (feature transforms)
+   - Input: FeatureRecord stream (sorted by id,time)
+   - Ops: dataset-level feature transforms configured per feature (e.g. `scale`, `sequence`)
+   - Output: FeatureRecord or FeatureRecordSequence
+
+ - Stage 7 (vector assembly)
+   - Input: all features/targets after stage 6
+   - Ops:
+     - merge feature streams by time bucket (`group_by`)
+     - assemble `Vector` objects (feature_id -> value or sequence)
+     - assemble `Sample(key, features, targets)`
+     - if rectangular mode is on, align to the expected time window keys (missing buckets become empty vectors)
+   - Output: Sample stream (no postprocess, no split)
+
+ - Stage 8 (postprocess)
+   - Input: Sample stream
+   - Ops:
+     - ensure vector schema (fill missing configured feature ids, drop extras)
+     - apply project `postprocess.yaml` vector transforms
+   - Output: Sample stream (still not split)
+
+ Full run (no --stage)
+
+ - Runs stages 0-8, then applies the configured train/val/test split and optional throttling, then writes output.
+
+ Split timing (leakage note)
+
+ - Split is applied after stage 8 in `jerry serve` (postprocess runs before split).
+ - Feature engineering runs before split; keep it causal (no look-ahead, no future leakage).
+ - Scaler statistics are fit by the build task `scaler.yaml` and are typically restricted to the `train` split (configurable via `split_label`).
+
+ ---
+
+ ## CLI Cheat Sheet
+
+ - `jerry demo init`: scaffolds a standalone demo plugin at `./demo/` and wires a `demo` dataset.
+ - `jerry plugin init <name> --out lib/`: scaffolds `lib/<name>/` (writes workspace `jerry.yaml` when missing).
+ - `jerry.yaml`: sets `plugin_root` for scaffolding commands and `datasets/default_dataset` so you can omit `--project`/`--dataset`.
+ - `jerry serve [--dataset <alias>|--project <path>] [--limit N] [--stage 0-8] [--skip-build]`: streams output; builds required artifacts unless `--skip-build`.
+ - `jerry build [--dataset <alias>|--project <path>] [--force]`: materializes artifacts (schema, scaler, etc.).
+ - `jerry inspect report|matrix|partitions [--dataset <alias>|--project <path>]`: quality and metadata helpers.
+ - `jerry inflow create`: interactive wizard to scaffold an end-to-end ingest stream (source + parser/DTO + mapper + contract).
+ - `jerry source create <provider>.<dataset> ...`: scaffolds a source YAML (no Python code).
+ - `jerry domain create <domain>`: scaffolds a domain record stub.
+ - `jerry dto create`, `jerry parser create`, `jerry mapper create`, `jerry loader create`: scaffold Python code + register entry points (reinstall after).
+ - `jerry contract create [--identity]`: interactive contract scaffolder (YAML); use for canonical streams or composed streams.
+ - `jerry list sources|domains|parsers|mappers|loaders|dtos`: introspection helpers.
+ - `pip install -e lib/<name>`: rerun after commands that update `lib/<name>/pyproject.toml` (entry points), or after manual edits to it.
+
+ ---
+
+ ## MLOps & Reproducibility
+
+ - `jerry build` materializes deterministic artifacts (schema, scaler, metadata).
+   Builds are keyed by config hashes and skip work when nothing changed unless
+   you pass `--force`.
+ - `jerry serve` runs are named (task/run) and can write outputs to
+   `<out-path>/<run_name>/` for auditing, sharing, or downstream training.
+ - Versioning: tag the project config + plugin code in Git and pair with a data
+   versioning tool like DVC for raw sources. With those inputs pinned, interim
+   datasets and artifacts can be regenerated instead of stored.
+
+ ---
+
+ ## Concepts
+
+ ### Workspace (`jerry.yaml`)
+
+ - `datasets`: dataset aliases → `project.yaml` paths (relative to `jerry.yaml`).
+ - `default_dataset`: which dataset `jerry serve/build/inspect` use when you omit `--dataset/--project`.
+ - `plugin_root`: where scaffolding commands write Python code (`src/<package>/...`) and where they look for `pyproject.toml`.
+
+ ### Plugin Package (Python Code)
+
+ These live under `lib/<plugin>/src/<package>/`:
+
+ - `dtos/*.py`: DTO models (raw source shapes).
+ - `parsers/*.py`: raw -> DTO parsers (referenced by source YAML via entry point).
+ - `domains/<domain>/model.py`: domain record models.
+ - `mappers/*.py`: DTO -> domain record mapping functions (referenced by contracts via entry point).
+ - `loaders/*.py`: optional custom loaders (fs/http usually use the built-in core loader).
+ - `pyproject.toml`: entry points for loaders/parsers/mappers/transforms (rerun `pip install -e lib/<plugin>` after changes).
+
+ ### Loaders & Parsers
+
+ - A **loader** yields raw rows (bytes/dicts) from some transport (FS/HTTP/synthetic/etc.).
+ - A **parser** turns each raw row into a typed DTO (or returns `None` to drop a row).
+ - In most projects, your source YAML uses the built-in loader `core.io` and you only customize its `args` (`transport`, `format`, and a `path`/`url`).
+ - You typically only implement a custom loader when you need specialized behavior (auth/pagination/rate limits, proprietary formats, or non-standard protocols).
+ - `parser.args` are optional and only used when your parser supports configuration; many parsers don’t need any args since filtering etc is supported natively downstream.
+
+ ### DTOs & Domains
+
+ - A **DTO** (Data Transfer Object) mirrors a single source’s schema (columns/fields) and stays “raw-shaped”; it’s what parsers emit.
+ - A **domain record** is the canonical shape used across the pipeline. Mappers convert DTOs into domain records so multiple sources can land in the same domain model.
+ - The base time-series type is `TemporalRecord` (`time` + metadata fields). Domains add identity fields (e.g. `symbol`, `station_id`) that make filtering/partitioning meaningful.
+ - `time` must be timezone-aware (normalized to UTC); feature values are selected from record fields in `dataset.yaml` (see `field:`); remaining fields act as the record’s “identity” (used by equality/deduping and commonly by `partition_by`).
+
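To make the DTO/domain split concrete, here is a small, hypothetical example; the names and fields are placeholders rather than the shipped demo code. It shows a raw-shaped DTO, a domain record satisfying the core assumptions (timezone-aware `time`, identity fields, a numeric field to select as the value), and a mapper between them. Real domain records would extend the package's `TemporalRecord`; a plain dataclass stands in here.

```python
# Hypothetical DTO -> mapper -> domain record sketch; names are placeholders.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class OhlcvDTO:
    # Raw-shaped: mirrors the source's fields, timestamp still a string.
    symbol: str
    ts: str          # e.g. "2024-01-01T10:00:00Z"
    close: float


@dataclass
class EquityRecord:
    # Canonical shape: timezone-aware time plus identity fields.
    time: datetime
    symbol: str      # identity field, handy for partition_by / filtering
    close: float     # exposed to dataset.yaml via `field: close`


def map_ohlcv_to_equity(dto: OhlcvDTO) -> EquityRecord:
    # Normalize the timestamp to an aware UTC datetime, per the core assumptions.
    time = datetime.fromisoformat(dto.ts.replace("Z", "+00:00")).astimezone(timezone.utc)
    return EquityRecord(time=time, symbol=dto.symbol, close=dto.close)


record = map_ohlcv_to_equity(OhlcvDTO(symbol="MSFT", ts="2024-01-01T10:00:00Z", close=100.0))
assert record.time.tzinfo is not None
```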
+ ### Transforms (Record → Stream → Feature → Vector)
+
+ - **Record transforms** run on raw canonical records before sorting or grouping (filters, time flooring, lagging). Each transform operates on one record at a time because order and partitions are not established yet. Configure in `contracts/*.yaml` under `record:`.
+ - **Stream transforms** run on ordered, per-stream records after record transforms (dedupe, cadence enforcement, rolling fills). These operate across a sequence of records for a partition because they depend on sorted partition/time order and cadence. Configure in `contracts/*.yaml` under `stream:`.
+ - **Feature transforms** run after stream regularization and shape the per-feature payload for vectorization (scalers, sequence/windowing). These occur after feature ids are finalized and payloads are wrapped. Configure in `dataset.yaml` under each feature.
+ - **Vector (postprocess) transforms** operate on assembled vectors (coverage/drop/fill/replace). Configure in `postprocess.yaml`.
+ - **Debug transforms** run after stream transforms for validation only. Configure in `contracts/*.yaml` under `debug:`.
+ - Custom transforms are registered in your plugin `pyproject.toml` under the matching entry-point group:
+   - `datapipeline.transforms.record`
+   - `datapipeline.transforms.stream`
+   - `datapipeline.transforms.feature`
+   - `datapipeline.transforms.vector`
+   - `datapipeline.transforms.debug`
+   Then reference them by name in the YAML.
+
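As a rough illustration of how those groups are wired (not code from the runtime): installed plugins expose their transforms through standard Python entry points, so the registered names can be enumerated with `importlib.metadata`.

```python
# Illustrative only: list transforms that plugins have registered under one of
# the entry-point groups named above.
from importlib.metadata import entry_points

for ep in entry_points(group="datapipeline.transforms.stream"):
    # ep.name is the name referenced from contract YAML; ep.value is "module:attr".
    print(ep.name, "->", ep.value)
```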
+ ### Glossary
+
+ - **Source alias**: `sources/*.yaml:id` (referenced by contracts under `source:`).
+ - **Stream id**: `contracts/*.yaml:id` (referenced by `dataset.yaml` under `record_stream:`).
+ - **Partition**: dimension keys appended to feature IDs, driven by `contract.partition_by`.
+ - **Group**: vector “bucket” cadence set by `dataset.group_by` (controls how records become samples).
+ - **Stage**: debug/preview level for `jerry serve --stage 0-8` (DTOs → domain records → features → vectors).
+ - **Fan-out**: when multiple features reference the same `record_stream`, the pipeline spools records to disk so each feature can read independently (records must be picklable).
+
+ ## Documentation
+
+ - `docs/config.md`: config layout, resolution order, and YAML reference.
+ - `docs/cli.md`: CLI reference (beyond the cheat sheet).
+ - `docs/transforms.md`: built-in transforms and filters.
+ - `docs/artifacts.md`: artifacts, postprocess, and split timing.
+ - `docs/python.md`: Python API usage patterns.
+ - `docs/extending.md`: entry points and writing plugins.
+ - `docs/architecture.md`: pipeline diagrams.
+
+ ## Development
+
+ See `CONTRIBUTING.md`.