shepherd-data 2023.9.9.tar.gz → 2023.10.2.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (27)
  1. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/PKG-INFO +6 -6
  2. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/README.md +2 -2
  3. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/pyproject.toml +4 -0
  4. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/setup.cfg +4 -4
  5. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data/__init__.py +2 -3
  6. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data/cli.py +144 -109
  7. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data/ivonne.py +20 -10
  8. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data/mppt.py +9 -8
  9. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data/reader.py +36 -28
  10. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data.egg-info/PKG-INFO +6 -6
  11. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data.egg-info/requires.txt +1 -1
  12. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/tests/test_cli.py +1 -1
  13. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data/debug_resampler.py +0 -0
  14. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data.egg-info/SOURCES.txt +0 -0
  15. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data.egg-info/dependency_links.txt +0 -0
  16. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data.egg-info/entry_points.txt +0 -0
  17. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data.egg-info/top_level.txt +0 -0
  18. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data.egg-info/zip-safe +0 -0
  19. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/tests/__init__.py +0 -0
  20. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/tests/conftest.py +0 -0
  21. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/tests/test_cli_downsample.py +0 -0
  22. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/tests/test_cli_extract.py +0 -0
  23. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/tests/test_cli_plot.py +0 -0
  24. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/tests/test_cli_validate.py +0 -0
  25. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/tests/test_examples.py +0 -0
  26. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/tests/test_ivonne.py +0 -0
  27. {shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/tests/test_reader.py +0 -0
{shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/PKG-INFO
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: shepherd_data
- Version: 2023.9.9
+ Version: 2023.10.2
  Summary: Programming- and CLI-Interface for the h5-dataformat of the Shepherd-Testbed
  Home-page: https://pypi.org/project/shepherd-data/
  Author: Ingmar Splitt, Kai Geissdoerfer
@@ -20,20 +20,20 @@ Classifier: Development Status :: 5 - Production/Stable
  Classifier: Intended Audience :: Developers
  Classifier: Intended Audience :: Information Technology
  Classifier: Intended Audience :: Science/Research
- Classifier: Programming Language :: Python :: 3.7
  Classifier: Programming Language :: Python :: 3.8
  Classifier: Programming Language :: Python :: 3.9
  Classifier: Programming Language :: Python :: 3.10
  Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
  Classifier: License :: OSI Approved :: MIT License
  Classifier: Operating System :: OS Independent
  Classifier: Natural Language :: English
- Requires-Python: >=3.7
+ Requires-Python: >=3.8
  Description-Content-Type: text/markdown
  Requires-Dist: h5py
  Requires-Dist: numpy
  Requires-Dist: pyYAML
- Requires-Dist: shepherd-core[inventory]
+ Requires-Dist: shepherd-core[inventory]>=2023.10.2
  Requires-Dist: click
  Requires-Dist: matplotlib
  Requires-Dist: pandas
@@ -51,7 +51,8 @@ Requires-Dist: pytest-click; extra == "test"
  # Data Module
 
  [![PyPiVersion](https://img.shields.io/pypi/v/shepherd_data.svg)](https://pypi.org/project/shepherd_data)
- [![Pytest](https://github.com/orgua/shepherd-datalib/actions/workflows/python-app.yml/badge.svg)](https://github.com/orgua/shepherd-datalib/actions/workflows/python-app.yml)
+ [![image](https://img.shields.io/pypi/pyversions/shepherd_data.svg)](https://pypi.python.org/pypi/shepherd-data)
+ [![Pytest](https://github.com/orgua/shepherd-datalib/actions/workflows/py_unittest.yml/badge.svg)](https://github.com/orgua/shepherd-datalib/actions/workflows/py_unittest.yml)
  [![CodeStyle](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
 
  **Documentation**: <https://orgua.github.io/shepherd/external/shepherd_data.html>
@@ -141,7 +142,6 @@ with sd.Reader("./hrv_sawtooth_1h.h5") as db:
  - `is_valid`
  - `energy()`
  - `check_timediffs()`
- - `data_timediffs()`
  - `get_metadata()`
  - `save_metadata()`
 
{shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/README.md
@@ -1,7 +1,8 @@
  # Data Module
 
  [![PyPiVersion](https://img.shields.io/pypi/v/shepherd_data.svg)](https://pypi.org/project/shepherd_data)
- [![Pytest](https://github.com/orgua/shepherd-datalib/actions/workflows/python-app.yml/badge.svg)](https://github.com/orgua/shepherd-datalib/actions/workflows/python-app.yml)
+ [![image](https://img.shields.io/pypi/pyversions/shepherd_data.svg)](https://pypi.python.org/pypi/shepherd-data)
+ [![Pytest](https://github.com/orgua/shepherd-datalib/actions/workflows/py_unittest.yml/badge.svg)](https://github.com/orgua/shepherd-datalib/actions/workflows/py_unittest.yml)
  [![CodeStyle](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
 
  **Documentation**: <https://orgua.github.io/shepherd/external/shepherd_data.html>
@@ -91,7 +92,6 @@ with sd.Reader("./hrv_sawtooth_1h.h5") as db:
  - `is_valid`
  - `energy()`
  - `check_timediffs()`
- - `data_timediffs()`
  - `get_metadata()`
  - `save_metadata()`
 
{shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/pyproject.toml
@@ -1,3 +1,7 @@
  [build-system]
  requires = ["setuptools"]
  build-backend = "setuptools.build_meta"
+
+ [tool.mypy]
+ python_version = 3.8
+ ignore_missing_imports = true
{shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/setup.cfg
@@ -20,11 +20,11 @@ classifiers =
  Intended Audience :: Developers
  Intended Audience :: Information Technology
  Intended Audience :: Science/Research
- Programming Language :: Python :: 3.7
  Programming Language :: Python :: 3.8
  Programming Language :: Python :: 3.9
  Programming Language :: Python :: 3.10
  Programming Language :: Python :: 3.11
+ Programming Language :: Python :: 3.12
  License :: OSI Approved :: MIT License
  Operating System :: OS Independent
  Natural Language :: English
@@ -35,12 +35,12 @@ package_dir =
  =.
  zip_safe = True
  include_package_data = True
- python_requires = >= 3.7
+ python_requires = >= 3.8
  install_requires =
  h5py
  numpy
  pyYAML
- shepherd-core[inventory]
+ shepherd-core[inventory]>=2023.10.2
  click
  matplotlib
  pandas # V2 is OK
@@ -74,7 +74,7 @@ shepherd_data =
  test = pytest
 
  [tool:pytest]
- addopts = -vvv
+ addopts = -vvv --stepwise
 
  [egg_info]
  tag_build =
{shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data/__init__.py
@@ -1,5 +1,4 @@
- """
- shepherd.datalib
+ """shepherd.datalib
  ~~~~~
  Provides classes for storing and retrieving sampled IV data to/from
  HDF5 files.
@@ -9,7 +8,7 @@ from shepherd_core import Writer
 
  from .reader import Reader
 
- __version__ = "2023.9.9"
+ __version__ = "2023.10.2"
 
  __all__ = [
  "Reader",
{shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data/cli.py
@@ -1,9 +1,9 @@
- """
- Command definitions for CLI
+ """Command definitions for CLI
  """
  import logging
  import os
  import sys
+ from contextlib import suppress
  from datetime import datetime
  from pathlib import Path
  from typing import List
@@ -13,6 +13,7 @@ import click
 
  from shepherd_core import get_verbose_level
  from shepherd_core import increase_verbose_level
+ from shepherd_core import local_tz
  from shepherd_core.commons import samplerate_sps_default
 
  from . import Reader
@@ -23,7 +24,7 @@ logger = logging.getLogger("SHPData.cli")
 
 
  def path_to_flist(data_path: Path) -> List[Path]:
- """every path gets transformed to a list of paths
+ """Every path gets transformed to a list of paths
  - if directory: list of files inside
  - if existing file: list with 1 element
  - or else: empty list
@@ -36,7 +37,7 @@ def path_to_flist(data_path: Path) -> List[Path]:
  flist = os.listdir(data_path)
  for file in flist:
  fpath = data_path / str(file)
- if not fpath.is_file() or ".h5" != fpath.suffix.lower():
+ if not fpath.is_file() or fpath.suffix.lower() != ".h5":
  continue
  h5files.append(fpath)
  return h5files
@@ -55,7 +56,7 @@ def path_to_flist(data_path: Path) -> List[Path]:
  help="Prints version-info at start (combinable with -v)",
  )
  @click.pass_context # TODO: is the ctx-type correct?
- def cli(ctx: click.Context, verbose: bool, version: bool) -> None:
+ def cli(ctx: click.Context, verbose: bool, version: bool) -> None: # noqa: FBT001
  """Shepherd: Synchronized Energy Harvesting Emulator and Recorder"""
  if verbose:
  increase_verbose_level(3)
@@ -77,12 +78,15 @@ def validate(in_data: Path) -> None:
  for file in files:
  logger.info("Validating '%s' ...", file.name)
  valid_file = True
- with Reader(file, verbose=verbose_level > 2) as shpr:
- valid_file &= shpr.is_valid()
- valid_file &= shpr.check_timediffs()
- valid_dir &= valid_file
- if not valid_file:
- logger.error(" -> File '%s' was NOT valid", file.name)
+ try:
+ with Reader(file, verbose=verbose_level > 2) as shpr:
+ valid_file &= shpr.is_valid()
+ valid_file &= shpr.check_timediffs()
+ valid_dir &= valid_file
+ if not valid_file:
+ logger.error(" -> File '%s' was NOT valid", file.name)
+ except TypeError as _xpc:
+ logger.error("ERROR: will skip file, caught exception: %s", _xpc)
  sys.exit(int(not valid_dir))
 
 
@@ -111,34 +115,42 @@ def extract(in_data: Path, ds_factor: float, separator: str) -> None:
  logger.info("DS-Factor was invalid was reset to 1'000")
  for file in files:
  logger.info("Extracting IV-Samples from '%s' ...", file.name)
- with Reader(file, verbose=verbose_level > 2) as shpr:
- # will create a downsampled h5-file (if not existing) and then saving to csv
- ds_file = file.with_suffix(f".downsampled_x{round(ds_factor)}.h5")
- if not ds_file.exists():
- logger.info("Downsampling '%s' by factor x%f ...", file.name, ds_factor)
- with Writer(
- ds_file,
- mode=shpr.get_mode(),
- datatype=shpr.get_datatype(),
- window_samples=shpr.get_window_samples(),
- cal_data=shpr.get_calibration_data(),
- verbose=verbose_level > 2,
- ) as shpw:
- shpw["ds_factor"] = ds_factor
- shpw.store_hostname(shpr.get_hostname())
- shpw.store_config(shpr.get_config())
- shpr.downsample(
- shpr.ds_time, shpw.ds_time, ds_factor=ds_factor, is_time=True
- )
- shpr.downsample(
- shpr.ds_voltage, shpw.ds_voltage, ds_factor=ds_factor
- )
- shpr.downsample(
- shpr.ds_current, shpw.ds_current, ds_factor=ds_factor
+ try:
+ with Reader(file, verbose=verbose_level > 2) as shpr:
+ # will create a downsampled h5-file (if not existing) and then saving to csv
+ ds_file = file.with_suffix(f".downsampled_x{round(ds_factor)}.h5")
+ if not ds_file.exists():
+ logger.info(
+ "Downsampling '%s' by factor x%f ...", file.name, ds_factor
  )
+ with Writer(
+ ds_file,
+ mode=shpr.get_mode(),
+ datatype=shpr.get_datatype(),
+ window_samples=shpr.get_window_samples(),
+ cal_data=shpr.get_calibration_data(),
+ verbose=verbose_level > 2,
+ ) as shpw:
+ shpw["ds_factor"] = ds_factor
+ shpw.store_hostname(shpr.get_hostname())
+ shpw.store_config(shpr.get_config())
+ shpr.downsample(
+ shpr.ds_time,
+ shpw.ds_time,
+ ds_factor=ds_factor,
+ is_time=True,
+ )
+ shpr.downsample(
+ shpr.ds_voltage, shpw.ds_voltage, ds_factor=ds_factor
+ )
+ shpr.downsample(
+ shpr.ds_current, shpw.ds_current, ds_factor=ds_factor
+ )
 
- with Reader(ds_file, verbose=verbose_level > 2) as shpd:
- shpd.save_csv(shpd["data"], separator)
+ with Reader(ds_file, verbose=verbose_level > 2) as shpd:
+ shpd.save_csv(shpd["data"], separator)
+ except TypeError as _xpc:
+ logger.error("ERROR: will skip file, caught exception: %s", _xpc)
 
 
  @cli.command(
@@ -158,23 +170,22 @@ def extract_meta(in_data: Path, separator: str) -> None:
  verbose_level = get_verbose_level()
  for file in files:
  logger.info("Extracting metadata & logs from '%s' ...", file.name)
- with Reader(file, verbose=verbose_level > 2) as shpr:
- elements = shpr.save_metadata()
- # TODO: add default exports (user-centric) and allow specifying --all or specific ones
- # TODO: could also be combined with other extractors (just have one)
- if "sysutil" in elements:
- shpr.save_csv(shpr["sysutil"], separator)
- if "timesync" in elements:
- shpr.save_csv(shpr["timesync"], separator)
-
- if "shepherd-log" in elements:
- shpr.save_log(shpr["shepherd-log"])
- if "dmesg" in elements:
- shpr.save_log(shpr["dmesg"])
- if "exceptions" in elements:
- shpr.save_log(shpr["exceptions"])
- if "uart" in elements:
- shpr.save_log(shpr["uart"])
+ # TODO: add default exports (user-centric) and allow specifying --all or specific ones
+ # TODO: could also be combined with other extractors (just have one)
+ # TODO remove deprecated: timesync; "shepherd-log", "dmesg", "exceptions"
+ try:
+ with Reader(file, verbose=verbose_level > 2) as shpr:
+ elements = shpr.save_metadata()
+ for element in ["ptp", "sysutil", "timesync"]:
+ if element in elements:
+ shpr.save_csv(shpr[element], separator)
+ logs_depr = ["shepherd-log", "dmesg", "exceptions"]
+ logs = ["sheep", "kernel", "phc2sys", "uart"]
+ for element in logs + logs_depr:
+ if element in elements:
+ shpr.save_log(shpr[element])
+ except TypeError as _xpc:
+ logger.error("ERROR: will skip file, caught exception: %s", _xpc)
 
 
  @cli.command(
@@ -187,24 +198,29 @@ def extract_uart(in_data: Path) -> None:
  verbose_level = get_verbose_level()
  for file in files:
  logger.info("Extracting uart from gpio-trace from from '%s' ...", file.name)
- with Reader(file, verbose=verbose_level > 2) as shpr:
- # TODO: move into separate fn OR add to h5-file and use .save_log(), ALSO TEST
- lines = shpr.gpio_to_uart()
- # TODO: could also add parameter to get symbols instead of lines
- log_path = Path(file).with_suffix(".uart_from_wf.log")
- if log_path.exists():
- logger.warning("%s already exists, will skip", log_path)
- continue
+ try:
+ with Reader(file, verbose=verbose_level > 2) as shpr:
+ # TODO: move into separate fn OR add to h5-file and use .save_log(), ALSO TEST
+ lines = shpr.gpio_to_uart()
+ # TODO: could also add parameter to get symbols instead of lines
+ log_path = Path(file).with_suffix(".uart_from_wf.log")
+ if log_path.exists():
+ logger.warning("%s already exists, will skip", log_path)
+ continue
 
- with open(log_path, "w") as log_file:
- for line in lines:
- try:
- timestamp = datetime.utcfromtimestamp(float(line[0]))
- log_file.write(timestamp.strftime("%Y-%m-%d %H:%M:%S.%f") + ":")
- log_file.write(f"\t{str.encode(line[1])}")
- log_file.write("\n")
- except TypeError:
- continue
+ with log_path.open("w") as log_file:
+ for line in lines:
+ with suppress(TypeError):
+ timestamp = datetime.fromtimestamp(
+ float(line[0]), tz=local_tz()
+ )
+ log_file.write(
+ timestamp.strftime("%Y-%m-%d %H:%M:%S.%f") + ":"
+ )
+ log_file.write(f"\t{str.encode(line[1])}")
+ log_file.write("\n")
+ except TypeError as _xpc:
+ logger.error("ERROR: will skip file, caught exception: %s", _xpc)
 
 
  @cli.command(
@@ -224,10 +240,13 @@ def extract_gpio(in_data: Path, separator: str) -> None:
  verbose_level = get_verbose_level()
  for file in files:
  logger.info("Extracting gpio-trace from from '%s' ...", file.name)
- with Reader(file, verbose=verbose_level > 2) as shpr:
- wfs = shpr.gpio_to_waveforms()
- for name, wf in wfs.items():
- shpr.waveform_to_csv(name, wf, separator)
+ try:
+ with Reader(file, verbose=verbose_level > 2) as shpr:
+ wfs = shpr.gpio_to_waveforms()
+ for name, wf in wfs.items():
+ shpr.waveform_to_csv(name, wf, separator)
+ except TypeError as _xpc:
+ logger.error("ERROR: will skip file, caught exception: %s", _xpc)
 
 
  @cli.command(
@@ -253,7 +272,8 @@ def downsample(
  in_data: Path, ds_factor: Optional[float], sample_rate: Optional[int]
  ) -> None:
  """Creates an array of downsampling-files from file
- or directory containing shepherd-recordings"""
+ or directory containing shepherd-recordings
+ """
  if ds_factor is None and sample_rate is not None and sample_rate >= 1:
  ds_factor = int(samplerate_sps_default / sample_rate)
  # TODO: shouldn't current sps be based on file rather than default?
@@ -265,34 +285,44 @@
  files = path_to_flist(in_data)
  verbose_level = get_verbose_level()
  for file in files:
- with Reader(file, verbose=verbose_level > 2) as shpr:
- for _factor in ds_list:
- if shpr.ds_time.shape[0] / _factor < 1000:
- logger.warning(
- "will skip downsampling for %s because resulting sample-size is too small",
- file.name,
- )
- break
- ds_file = file.with_suffix(f".downsampled_x{round(_factor)}.h5")
- if ds_file.exists():
- continue
- logger.info("Downsampling '%s' by factor x%f ...", file.name, _factor)
- with Writer(
- ds_file,
- mode=shpr.get_mode(),
- datatype=shpr.get_datatype(),
- window_samples=shpr.get_window_samples(),
- cal_data=shpr.get_calibration_data(),
- verbose=verbose_level > 2,
- ) as shpw:
- shpw["ds_factor"] = _factor
- shpw.store_hostname(shpr.get_hostname())
- shpw.store_config(shpr.get_config())
- shpr.downsample(
- shpr.ds_time, shpw.ds_time, ds_factor=_factor, is_time=True
+ try:
+ with Reader(file, verbose=verbose_level > 2) as shpr:
+ for _factor in ds_list:
+ if shpr.ds_time.shape[0] / _factor < 1000:
+ logger.warning(
+ "will skip downsampling for %s because "
+ "resulting sample-size is too small",
+ file.name,
+ )
+ break
+ ds_file = file.with_suffix(f".downsampled_x{round(_factor)}.h5")
+ if ds_file.exists():
+ continue
+ logger.info(
+ "Downsampling '%s' by factor x%f ...", file.name, _factor
  )
- shpr.downsample(shpr.ds_voltage, shpw.ds_voltage, ds_factor=_factor)
- shpr.downsample(shpr.ds_current, shpw.ds_current, ds_factor=_factor)
+ with Writer(
+ ds_file,
+ mode=shpr.get_mode(),
+ datatype=shpr.get_datatype(),
+ window_samples=shpr.get_window_samples(),
+ cal_data=shpr.get_calibration_data(),
+ verbose=verbose_level > 2,
+ ) as shpw:
+ shpw["ds_factor"] = _factor
+ shpw.store_hostname(shpr.get_hostname())
+ shpw.store_config(shpr.get_config())
+ shpr.downsample(
+ shpr.ds_time, shpw.ds_time, ds_factor=_factor, is_time=True
+ )
+ shpr.downsample(
+ shpr.ds_voltage, shpw.ds_voltage, ds_factor=_factor
+ )
+ shpr.downsample(
+ shpr.ds_current, shpw.ds_current, ds_factor=_factor
+ )
+ except TypeError as _xpc:
+ logger.error("ERROR: will skip file, caught exception: %s", _xpc)
 
 
  @cli.command(
@@ -339,7 +369,7 @@ def plot(
  end: Optional[float],
  width: int,
  height: int,
- multiplot: bool,
+ multiplot: bool, # noqa: FBT001
  ) -> None:
  """Plots IV-trace from file or directory containing shepherd-recordings"""
  files = path_to_flist(in_data)
@@ -348,11 +378,16 @@
  data = []
  for file in files:
  logger.info("Generating plot for '%s' ...", file.name)
- with Reader(file, verbose=verbose_level > 2) as shpr:
- if multiplot:
- data.append(shpr.generate_plot_data(start, end, relative_ts=True))
- else:
- shpr.plot_to_file(start, end, width, height)
+ try:
+ with Reader(file, verbose=verbose_level > 2) as shpr:
+ if multiplot:
+ data.append(
+ shpr.generate_plot_data(start, end, relative_timestamp=True)
+ )
+ else:
+ shpr.plot_to_file(start, end, width, height)
+ except TypeError as _xpc:
+ logger.error("ERROR: will skip file, caught exception: %s", _xpc)
  if multiplot:
  logger.info("Got %d datasets to plot", len(data))
  mpl_path = Reader.multiplot_to_file(data, in_data, width, height)
{shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data/ivonne.py
@@ -1,5 +1,4 @@
- """
- prototype of a file-reader with various converters
+ """prototype of a file-reader with various converters
  to generate valid shepherd-data for emulation
 
  """
@@ -7,13 +6,16 @@ import errno
  import logging
  import math
  import os
- import pickle # noqa: S403
+ import pickle
  from pathlib import Path
+ from types import TracebackType
  from typing import Optional
+ from typing import Type
 
  import numpy as np
  import pandas as pd
  from tqdm import trange
+ from typing_extensions import Self
 
  from . import Writer
  from .mppt import MPPTracker
@@ -21,12 +23,12 @@ from .mppt import OptimalTracker
  from .mppt import iv_model
 
 
- def get_voc(coeffs: pd.DataFrame):
+ def get_voc(coeffs: pd.DataFrame): # noqa: ANN201
  """Open-circuit voltage of IV curve with given coefficients."""
  return np.log(coeffs["a"] / coeffs["b"] + 1) / coeffs["c"]
 
 
- def get_isc(coeffs: pd.DataFrame):
+ def get_isc(coeffs: pd.DataFrame): # noqa: ANN201
  """Short-circuit current of IV curve with given coefficients."""
  return coeffs["a"]
 
@@ -40,8 +42,9 @@ class Reader:
  self,
  file_path: Path,
  samplerate_sps: Optional[int] = None,
+ *,
  verbose: bool = True,
- ):
+ ) -> None:
  self._logger.setLevel(logging.INFO if verbose else logging.WARNING)
 
  self.file_path = Path(file_path).resolve()
@@ -58,12 +61,12 @@
 
  self._df: Optional[pd.DataFrame] = None
 
- def __enter__(self):
+ def __enter__(self) -> Self:
  if not self.file_path.exists():
  raise FileNotFoundError(
  errno.ENOENT, os.strerror(errno.ENOENT), self.file_path.name
  )
- with open(self.file_path, "rb") as ifr:
+ with self.file_path.open("rb") as ifr:
  self._df = pickle.load(ifr) # noqa: S301
  self._refresh_file_stats()
  self._logger.info(
@@ -78,7 +81,13 @@
  )
  return self
 
- def __exit__(self, *exc): # type: ignore
+ def __exit__(
+ self,
+ typ: Optional[Type[BaseException]] = None,
+ exc: Optional[BaseException] = None,
+ tb: Optional[TracebackType] = None,
+ extra_arg: int = 0,
+ ) -> None:
  pass
 
  def _refresh_file_stats(self) -> None:
@@ -170,7 +179,8 @@
  file and applies the specified MPPT algorithm to extract the corresponding
  voltage and current traces.
 
- TODO:
+ Todo:
+ ----
  - allow to use harvester-model in shepherd-code
  - generalize and put it into main code
 
{shepherd_data-2023.9.9 → shepherd_data-2023.10.2}/shepherd_data/mppt.py
@@ -1,5 +1,4 @@
- """
- Harvesters, simple and fast approach.
+ """Harvesters, simple and fast approach.
  Might be exchanged by shepherds py-model of pru-harvesters
  """
  import numpy as np
@@ -12,10 +11,12 @@ def iv_model(voltages: Calc_t, coeffs: pd.Series) -> Calc_t:
  """Simple diode based model of a solar panel IV curve.
 
  Args:
+ ----
  :param voltages: Load voltage of the solar panel
  :param coeffs: three generic coefficients
 
  Returns:
+ -------
  Solar current at given load voltage
  """
  currents = float(coeffs["a"]) - float(coeffs["b"]) * (
@@ -29,7 +30,7 @@ def iv_model(voltages: Calc_t, coeffs: pd.Series) -> Calc_t:
  return currents
 
 
- def find_oc(v_arr: np.ndarray, i_arr: np.ndarray, ratio: float = 0.05):
+ def find_oc(v_arr: np.ndarray, i_arr: np.ndarray, ratio: float = 0.05) -> np.ndarray:
  """Approximates opencircuit voltage.
 
  Searches last current value that is above a certain ratio of the short-circuit
@@ -45,18 +46,18 @@ class MPPTracker:
  :param pts_per_curve: resolution of internal ivcurve
  """
 
- def __init__(self, v_max: float = 5.0, pts_per_curve: int = 1000):
+ def __init__(self, v_max: float = 5.0, pts_per_curve: int = 1000) -> None:
  self.pts_per_curve: int = pts_per_curve
  self.v_max: float = v_max
  self.v_proto: np.ndarray = np.linspace(0, v_max, pts_per_curve)
 
  def process(self, coeffs: pd.DataFrame) -> pd.DataFrame:
- """apply harvesting model to input data
+ """Apply harvesting model to input data
 
  :param coeffs: ivonne coefficients
  :return:
  """
- return pd.DataFrame()
+ pass
 
 
  class OpenCircuitTracker(MPPTracker):
@@ -69,7 +70,7 @@ class OpenCircuitTracker(MPPTracker):
 
  def __init__(
  self, v_max: float = 5.0, pts_per_curve: int = 1000, ratio: float = 0.8
- ):
+ ) -> None:
  super().__init__(v_max, pts_per_curve)
  self.ratio = ratio
 
@@ -97,7 +98,7 @@ class OptimalTracker(MPPTracker):
  :param pts_per_curve: resolution of internal ivcurve
  """
 
- def __init__(self, v_max: float = 5.0, pts_per_curve: int = 1000):
+ def __init__(self, v_max: float = 5.0, pts_per_curve: int = 1000) -> None:
  super().__init__(v_max, pts_per_curve)
 
  def process(self, coeffs: pd.DataFrame) -> pd.DataFrame:
@@ -1,5 +1,4 @@
1
- """
2
- Reader-Baseclass
1
+ """Reader-Baseclass
3
2
  """
4
3
  import math
5
4
  from datetime import datetime
@@ -14,24 +13,31 @@ from matplotlib import pyplot as plt
14
13
  from tqdm import trange
15
14
 
16
15
  from shepherd_core import Reader as CoreReader
16
+ from shepherd_core import local_tz
17
17
  from shepherd_core.logger import logger
18
18
 
19
- # import samplerate # TODO: just a test-fn for now
19
+ # import samplerate # noqa: ERA001, TODO: just a test-fn for now
20
20
 
21
21
 
22
22
  class Reader(CoreReader):
23
23
  """Sequentially Reads shepherd-data from HDF5 file.
24
24
 
25
25
  Args:
26
+ ----
26
27
  file_path: Path of hdf5 file containing shepherd data with iv-samples, iv-curves or isc&voc
27
28
  verbose: more info during usage, 'None' skips the setter
28
29
  """
29
30
 
30
- def __init__(self, file_path: Optional[Path], verbose: Optional[bool] = True):
31
- super().__init__(file_path, verbose)
31
+ def __init__(
32
+ self,
33
+ file_path: Optional[Path],
34
+ *,
35
+ verbose: Optional[bool] = True,
36
+ ) -> None:
37
+ super().__init__(file_path, verbose=verbose)
32
38
 
33
39
  def save_csv(self, h5_group: h5py.Group, separator: str = ";") -> int:
34
- """extract numerical data via csv
40
+ """Extract numerical data via csv
35
41
 
36
42
  :param h5_group: can be external and should probably be downsampled
37
43
  :param separator: used between columns
@@ -47,24 +53,23 @@ class Reader(CoreReader):
47
53
  self._logger.warning("%s already exists, will skip", csv_path)
48
54
  return 0
49
55
  datasets = [
50
- key if isinstance(h5_group[key], h5py.Dataset) else []
51
- for key in h5_group.keys()
56
+ key if isinstance(h5_group[key], h5py.Dataset) else [] for key in h5_group
52
57
  ]
53
58
  datasets.remove("time")
54
- datasets = ["time"] + datasets
59
+ datasets = ["time", *datasets]
55
60
  separator = separator.strip().ljust(2)
56
61
  header = [
57
62
  h5_group[key].attrs["description"].replace(", ", separator)
58
63
  for key in datasets
59
64
  ]
60
65
  header = separator.join(header)
61
- with open(csv_path, "w", encoding="utf-8-sig") as csv_file:
66
+ with csv_path.open("w", encoding="utf-8-sig") as csv_file:
62
67
  self._logger.info(
63
68
  "CSV-Generator will save '%s' to '%s'", h5_group.name, csv_path.name
64
69
  )
65
70
  csv_file.write(header + "\n")
66
71
  for idx, time_ns in enumerate(h5_group["time"][:]):
67
- timestamp = datetime.utcfromtimestamp(time_ns / 1e9)
72
+ timestamp = datetime.fromtimestamp(time_ns / 1e9, tz=local_tz())
68
73
  csv_file.write(timestamp.strftime("%Y-%m-%d %H:%M:%S.%f"))
69
74
  for key in datasets[1:]:
70
75
  values = h5_group[key][idx]
@@ -74,10 +79,11 @@ class Reader(CoreReader):
74
79
  csv_file.write("\n")
75
80
  return h5_group["time"][:].shape[0]
76
81
 
77
- def save_log(self, h5_group: h5py.Group) -> int:
78
- """save dataset in group as log, optimal for logged dmesg and exceptions
82
+ def save_log(self, h5_group: h5py.Group, *, add_timestamp: bool = True) -> int:
83
+ """Save dataset in group as log, optimal for logged dmesg and exceptions
79
84
 
80
85
  :param h5_group: can be external
86
+ :param add_timestamp: can be external
81
87
  :return: number of processed entries
82
88
  """
83
89
  if h5_group["time"].shape[0] < 1:
@@ -90,17 +96,18 @@ class Reader(CoreReader):
90
96
  self._logger.warning("%s already exists, will skip", log_path)
91
97
  return 0
92
98
  datasets = [
93
- key if isinstance(h5_group[key], h5py.Dataset) else []
94
- for key in h5_group.keys()
99
+ key if isinstance(h5_group[key], h5py.Dataset) else [] for key in h5_group
95
100
  ]
96
101
  datasets.remove("time")
97
- with open(log_path, "w", encoding="utf-8-sig") as log_file:
102
+ with log_path.open("w", encoding="utf-8-sig") as log_file:
98
103
  self._logger.info(
99
104
  "Log-Generator will save '%s' to '%s'", h5_group.name, log_path.name
100
105
  )
101
106
  for idx, time_ns in enumerate(h5_group["time"][:]):
102
- timestamp = datetime.utcfromtimestamp(time_ns / 1e9)
103
- log_file.write(timestamp.strftime("%Y-%m-%d %H:%M:%S.%f") + ":")
107
+ if add_timestamp:
108
+ timestamp = datetime.fromtimestamp(time_ns / 1e9, local_tz())
109
+ # ⤷ TODO: these .fromtimestamp would benefit from included TZ
110
+ log_file.write(timestamp.strftime("%Y-%m-%d %H:%M:%S.%f") + ":")
104
111
  for key in datasets:
105
112
  try:
106
113
  message = str(h5_group[key][idx])
@@ -117,6 +124,7 @@ class Reader(CoreReader):
  start_n: int = 0,
  end_n: Optional[int] = None,
  ds_factor: float = 5,
+ *,
  is_time: bool = False,
  ) -> Union[h5py.Dataset, np.ndarray]:
  """Warning: only valid for IV-Stream, not IV-Curves
@@ -199,10 +207,10 @@ class Reader(CoreReader):
  start_n: int = 0,
  end_n: Optional[int] = None,
  samplerate_dst: float = 1000,
+ *,
  is_time: bool = False,
  ) -> Union[h5py.Dataset, np.ndarray]:
- """
- :param data_src:
+ """:param data_src:
  :param data_dst:
  :param start_n:
  :param end_n:
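Several signatures in this release gain a bare `*`, which turns the trailing flags (`is_time`, `add_timestamp`, `relative_timestamp`) into keyword-only arguments. A generic sketch of what that marker does, using a toy function rather than the real shepherd API:

```python
def downsample(ds_factor: float = 5, *, is_time: bool = False) -> str:
    # toy stand-in: everything after the bare * must be passed by name
    return f"factor={ds_factor}, is_time={is_time}"

print(downsample(10, is_time=True))   # flag passed by keyword: accepted

try:
    downsample(10, True)              # flag passed positionally: rejected
except TypeError as err:
    print("rejected:", err)
```

Making boolean flags keyword-only prevents ambiguous calls like `downsample(10, True)`, at the cost of breaking callers that passed the flag positionally.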
@@ -299,13 +307,14 @@ class Reader(CoreReader):
  self,
  start_s: Optional[float] = None,
  end_s: Optional[float] = None,
- relative_ts: bool = True,
+ *,
+ relative_timestamp: bool = True,
  ) -> Dict:
- """provides down-sampled iv-data that can be feed into plot_to_file()
+ """Provides down-sampled iv-data that can be feed into plot_to_file()

  :param start_s: time in seconds, relative to start of recording
  :param end_s: time in seconds, relative to start of recording
- :param relative_ts: treat
+ :param relative_timestamp: treat
  :return: down-sampled size of ~ self.max_elements
  """
  if self.get_datatype() == "ivcurve":
@@ -343,7 +352,7 @@ class Reader(CoreReader):
  "start_s": start_s,
  "end_s": end_s,
  }
- if relative_ts:
+ if relative_timestamp:
  data["time"] = data["time"] - self._cal.time.raw_to_si(self.ds_time[0])
  return data

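The renamed `relative_timestamp` flag shifts the time axis so the plot starts at zero, by subtracting the first (SI-converted) timestamp from the whole series. The core idea, sketched with plain lists instead of the h5py/calibration machinery:

```python
# made-up absolute timestamps in seconds (stand-in for the converted "time" dataset)
timestamps_s = [100.0, 100.5, 101.0, 101.5]

# relative_timestamp=True: shift the axis so the recording starts at t=0
relative = [t - timestamps_s[0] for t in timestamps_s]
print(relative)  # [0.0, 0.5, 1.0, 1.5]
```

In the actual method the subtraction runs on a numpy array, so the shift is applied element-wise in one expression.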
@@ -351,8 +360,7 @@ class Reader(CoreReader):
  def assemble_plot(
  data: Union[dict, list], width: int = 20, height: int = 10
  ) -> plt.Figure:
- """
- TODO: add power (if wanted)
+ """TODO: add power (if wanted)

  :param data: plottable / down-sampled iv-data with some meta-data
  -> created with generate_plot_data()
@@ -384,7 +392,7 @@ class Reader(CoreReader):
  width: int = 20,
  height: int = 10,
  ) -> None:
- """creates (down-sampled) IV-Plot
+ """Creates (down-sampled) IV-Plot
  -> omitting start- and end-time will use the whole duration

  :param start_s: time in seconds, relative to start of recording, optional
@@ -415,7 +423,7 @@ class Reader(CoreReader):
  def multiplot_to_file(
  data: list, plot_path: Path, width: int = 20, height: int = 10
  ) -> Optional[Path]:
- """creates (down-sampled) IV-Multi-Plot
+ """Creates (down-sampled) IV-Multi-Plot

  :param data: plottable / down-sampled iv-data with some meta-data
  -> created with generate_plot_data()
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: shepherd-data
- Version: 2023.9.9
+ Version: 2023.10.2
  Summary: Programming- and CLI-Interface for the h5-dataformat of the Shepherd-Testbed
  Home-page: https://pypi.org/project/shepherd-data/
  Author: Ingmar Splitt, Kai Geissdoerfer
@@ -20,20 +20,20 @@ Classifier: Development Status :: 5 - Production/Stable
  Classifier: Intended Audience :: Developers
  Classifier: Intended Audience :: Information Technology
  Classifier: Intended Audience :: Science/Research
- Classifier: Programming Language :: Python :: 3.7
  Classifier: Programming Language :: Python :: 3.8
  Classifier: Programming Language :: Python :: 3.9
  Classifier: Programming Language :: Python :: 3.10
  Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
  Classifier: License :: OSI Approved :: MIT License
  Classifier: Operating System :: OS Independent
  Classifier: Natural Language :: English
- Requires-Python: >=3.7
+ Requires-Python: >=3.8
  Description-Content-Type: text/markdown
  Requires-Dist: h5py
  Requires-Dist: numpy
  Requires-Dist: pyYAML
- Requires-Dist: shepherd-core[inventory]
+ Requires-Dist: shepherd-core[inventory]>=2023.10.2
  Requires-Dist: click
  Requires-Dist: matplotlib
  Requires-Dist: pandas
@@ -51,7 +51,8 @@ Requires-Dist: pytest-click; extra == "test"
  # Data Module

  [![PyPiVersion](https://img.shields.io/pypi/v/shepherd_data.svg)](https://pypi.org/project/shepherd_data)
- [![Pytest](https://github.com/orgua/shepherd-datalib/actions/workflows/python-app.yml/badge.svg)](https://github.com/orgua/shepherd-datalib/actions/workflows/python-app.yml)
+ [![image](https://img.shields.io/pypi/pyversions/shepherd_data.svg)](https://pypi.python.org/pypi/shepherd-data)
+ [![Pytest](https://github.com/orgua/shepherd-datalib/actions/workflows/py_unittest.yml/badge.svg)](https://github.com/orgua/shepherd-datalib/actions/workflows/py_unittest.yml)
  [![CodeStyle](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

  **Documentation**: <https://orgua.github.io/shepherd/external/shepherd_data.html>
@@ -141,7 +142,6 @@ with sd.Reader("./hrv_sawtooth_1h.h5") as db:
  - `is_valid`
  - `energy()`
  - `check_timediffs()`
- - `data_timediffs()`
  - `get_metadata()`
  - `save_metadata()`

@@ -1,7 +1,7 @@
  h5py
  numpy
  pyYAML
- shepherd-core[inventory]
+ shepherd-core[inventory]>=2023.10.2
  click
  matplotlib
  pandas
@@ -3,6 +3,6 @@ from click.testing import CliRunner
  from shepherd_data.cli import cli


- def test_cli_invoke_help():
+ def test_cli_invoke_help() -> None:
  res = CliRunner().invoke(cli, ["-h"])
  assert res.exit_code == 0