tsp 1.8.1__py3-none-any.whl → 1.10.2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (127)
  1. tsp/__init__.py +11 -11
  2. tsp/__meta__.py +1 -1
  3. tsp/concatenation.py +159 -153
  4. tsp/core.py +1306 -1162
  5. tsp/data/2023-01-06_755-test-Dataset_2031-Constant_Over_Interval-Hourly-Ground_Temperature-Thermistor_Automated.timeserie.csv +4 -4
  6. tsp/data/2023-01-06_755-test.metadata.txt +208 -208
  7. tsp/data/NTGS_example_csv.csv +6 -6
  8. tsp/data/NTGS_example_slash_dates.csv +6 -6
  9. tsp/data/NTGS_gtr_example_excel.xlsx +0 -0
  10. tsp/data/example_geotop.csv +5240 -5240
  11. tsp/data/example_gtnp.csv +1298 -1298
  12. tsp/data/example_permos.csv +7 -7
  13. tsp/data/ntgs-db-multi.txt +3872 -0
  14. tsp/data/ntgs-db-single.txt +2251 -0
  15. tsp/data/test_geotop_has_space.txt +5 -5
  16. tsp/data/tsp_format_long.csv +10 -0
  17. tsp/data/tsp_format_wide_1.csv +7 -0
  18. tsp/data/tsp_format_wide_2.csv +7 -0
  19. tsp/dataloggers/AbstractReader.py +43 -43
  20. tsp/dataloggers/FG2.py +110 -110
  21. tsp/dataloggers/GP5W.py +114 -114
  22. tsp/dataloggers/Geoprecision.py +34 -34
  23. tsp/dataloggers/HOBO.py +930 -914
  24. tsp/dataloggers/RBRXL800.py +190 -190
  25. tsp/dataloggers/RBRXR420.py +371 -308
  26. tsp/dataloggers/Vemco.py +84 -0
  27. tsp/dataloggers/__init__.py +15 -15
  28. tsp/dataloggers/logr.py +196 -115
  29. tsp/dataloggers/test_files/004448.DAT +2543 -2543
  30. tsp/dataloggers/test_files/004531.DAT +17106 -17106
  31. tsp/dataloggers/test_files/004531.HEX +3587 -3587
  32. tsp/dataloggers/test_files/004534.HEX +3587 -3587
  33. tsp/dataloggers/test_files/010252.dat +1731 -1731
  34. tsp/dataloggers/test_files/010252.hex +1739 -1739
  35. tsp/dataloggers/test_files/010274.hex +1291 -1291
  36. tsp/dataloggers/test_files/010278.hex +3544 -3544
  37. tsp/dataloggers/test_files/012064.dat +1286 -1286
  38. tsp/dataloggers/test_files/012064.hex +1294 -1294
  39. tsp/dataloggers/test_files/012064_modified_start.hex +1294 -0
  40. tsp/dataloggers/test_files/012081.hex +3532 -3532
  41. tsp/dataloggers/test_files/013138_recovery_stamp.hex +1123 -0
  42. tsp/dataloggers/test_files/014037-2007.hex +95 -0
  43. tsp/dataloggers/test_files/019360_20160918_1146_SlumpIslandTopofHill.hex +11253 -0
  44. tsp/dataloggers/test_files/019360_20160918_1146_SlumpIslandTopofHill.xls +0 -0
  45. tsp/dataloggers/test_files/07B1592.DAT +1483 -1483
  46. tsp/dataloggers/test_files/07B1592.HEX +1806 -1806
  47. tsp/dataloggers/test_files/07B4450.DAT +2234 -2234
  48. tsp/dataloggers/test_files/07B4450.HEX +2559 -2559
  49. tsp/dataloggers/test_files/2022018_2025-09-18T22-16-16.txt +36 -0
  50. tsp/dataloggers/test_files/2022018_2025-09-18T22-16-16_raw.csv +2074 -0
  51. tsp/dataloggers/test_files/2022018_2025-09-18T22-16-16_temp.csv +2074 -0
  52. tsp/dataloggers/test_files/2025004_2025-12-02T17-07-28_cfg.txt +30 -0
  53. tsp/dataloggers/test_files/2025004_2025-12-02T17-07-28_raw.csv +35 -0
  54. tsp/dataloggers/test_files/2025004_2025-12-02T17-07-28_temp.csv +35 -0
  55. tsp/dataloggers/test_files/204087.xlsx +0 -0
  56. tsp/dataloggers/test_files/Asc-1455As02.000 +2982 -0
  57. tsp/dataloggers/test_files/Asc-1456As02.000 +2992 -0
  58. tsp/dataloggers/test_files/Asc-1457As02.000 +2917 -0
  59. tsp/dataloggers/test_files/BGC_BH15_019362_20140610_1253.hex +1729 -0
  60. tsp/dataloggers/test_files/Bin2944.csv +759 -0
  61. tsp/dataloggers/test_files/Bin5494.csv +2972 -0
  62. tsp/dataloggers/test_files/Bin6786.csv +272 -0
  63. tsp/dataloggers/test_files/FG2_399.csv +9881 -9881
  64. tsp/dataloggers/test_files/GP5W.csv +1121 -1121
  65. tsp/dataloggers/test_files/GP5W_260.csv +1884 -1884
  66. tsp/dataloggers/test_files/GP5W_270.csv +2210 -2210
  67. tsp/dataloggers/test_files/H08-030-08_HOBOware.csv +998 -998
  68. tsp/dataloggers/test_files/Minilog-II-T_350763_20190711_1.csv +2075 -0
  69. tsp/dataloggers/test_files/Minilog-II-T_350769_20190921_1.csv +6384 -0
  70. tsp/dataloggers/test_files/Minilog-II-T_354284_20190921_1.csv +4712 -0
  71. tsp/dataloggers/test_files/Minilog-T_7943_20140920_1.csv +5826 -0
  72. tsp/dataloggers/test_files/Minilog-T_8979_20140806_1.csv +2954 -0
  73. tsp/dataloggers/test_files/Minilog-T_975_20110824_1.csv +4343 -0
  74. tsp/dataloggers/test_files/RBR_01.dat +1046 -1046
  75. tsp/dataloggers/test_files/RBR_02.dat +2426 -2426
  76. tsp/dataloggers/test_files/RI03b_062831_20240905_1801.rsk +0 -0
  77. tsp/dataloggers/test_files/RI03b_062831_20240905_1801.xlsx +0 -0
  78. tsp/dataloggers/test_files/RSTDT2055.csv +2152 -2152
  79. tsp/dataloggers/test_files/U23-001_HOBOware.csv +1001 -1001
  80. tsp/dataloggers/test_files/hobo-negative-2.txt +6396 -6396
  81. tsp/dataloggers/test_files/hobo-negative-3.txt +5593 -5593
  82. tsp/dataloggers/test_files/hobo-positive-number-1.txt +1000 -1000
  83. tsp/dataloggers/test_files/hobo-positive-number-2.csv +1003 -1003
  84. tsp/dataloggers/test_files/hobo-positive-number-3.csv +1133 -1133
  85. tsp/dataloggers/test_files/hobo-positive-number-4.csv +1209 -1209
  86. tsp/dataloggers/test_files/hobo2.csv +8702 -8702
  87. tsp/dataloggers/test_files/hobo_1_AB.csv +21732 -21732
  88. tsp/dataloggers/test_files/hobo_1_AB_Details.txt +133 -133
  89. tsp/dataloggers/test_files/hobo_1_AB_classic.csv +4373 -4373
  90. tsp/dataloggers/test_files/hobo_1_AB_defaults.csv +21732 -21732
  91. tsp/dataloggers/test_files/hobo_1_AB_minimal.txt +1358 -1358
  92. tsp/dataloggers/test_files/hobo_1_AB_var2.csv +3189 -3189
  93. tsp/dataloggers/test_files/hobo_1_AB_var3.csv +2458 -2458
  94. tsp/dataloggers/test_files/logR_ULogC16-32_1.csv +106 -106
  95. tsp/dataloggers/test_files/logR_ULogC16-32_2.csv +100 -100
  96. tsp/dataloggers/test_files/mon_3_Ta_2010-08-18_2013-02-08.txt +21724 -21724
  97. tsp/dataloggers/test_files/rbr_001.dat +1133 -1133
  98. tsp/dataloggers/test_files/rbr_001.hex +1139 -1139
  99. tsp/dataloggers/test_files/rbr_001_no_comment.dat +1132 -1132
  100. tsp/dataloggers/test_files/rbr_001_no_comment.hex +1138 -1138
  101. tsp/dataloggers/test_files/rbr_002.dat +1179 -1179
  102. tsp/dataloggers/test_files/rbr_002.hex +1185 -1185
  103. tsp/dataloggers/test_files/rbr_003.hex +1292 -1292
  104. tsp/dataloggers/test_files/rbr_xl_001.DAT +1105 -1105
  105. tsp/dataloggers/test_files/rbr_xl_002.DAT +1126 -1126
  106. tsp/dataloggers/test_files/rbr_xl_003.DAT +4622 -4622
  107. tsp/dataloggers/test_files/rbr_xl_003.HEX +3587 -3587
  108. tsp/gtnp.py +148 -148
  109. tsp/labels.py +3 -3
  110. tsp/misc.py +90 -90
  111. tsp/physics.py +101 -101
  112. tsp/plots/static.py +388 -374
  113. tsp/readers.py +829 -548
  114. tsp/standardization/__init__.py +0 -0
  115. tsp/standardization/metadata.py +95 -0
  116. tsp/standardization/metadata_ref.py +0 -0
  117. tsp/standardization/validator.py +535 -0
  118. tsp/time.py +45 -45
  119. tsp/tspwarnings.py +27 -15
  120. tsp/utils.py +131 -101
  121. tsp/version.py +1 -1
  122. {tsp-1.8.1.dist-info → tsp-1.10.2.dist-info}/METADATA +95 -86
  123. tsp-1.10.2.dist-info/RECORD +132 -0
  124. {tsp-1.8.1.dist-info → tsp-1.10.2.dist-info}/licenses/LICENSE +674 -674
  125. {tsp-1.8.1.dist-info → tsp-1.10.2.dist-info}/top_level.txt +1 -0
  126. tsp-1.8.1.dist-info/RECORD +0 -94
  127. {tsp-1.8.1.dist-info → tsp-1.10.2.dist-info}/WHEEL +0 -0
tsp/time.py CHANGED
@@ -1,46 +1,46 @@
- import re
- from datetime import datetime, tzinfo
-
- from typing import Union
-
-
- def get_utc_offset(offset: "Union[str,int]") -> int:
-     """Get the UTC offset in seconds from a string or integer"""
-
-     if isinstance(offset, str):
-         if offset.lower() == "utc" or (offset.lower() == "z"):
-             return 0
-
-         pattern = re.compile(r"([+-]?)(\d{2}):(\d{2})")
-         match = pattern.match(offset)
-
-         if not match:
-             raise ValueError("Offset must be a string in the format '+HH:MM' or '-HH:MM'")
-
-         sign = match.group(1)
-         hours = int(match.group(2))
-         minutes = int(match.group(3))
-         utc_offset = (hours*60 + minutes)*60
-         if sign == "-":
-             utc_offset *= -1
-
-     elif isinstance(offset, int):
-         utc_offset = offset
-
-     else:
-         raise ValueError("Offset must be a string in the format '+HH:MM' or '-HH:MM' or an integer in seconds")
-
-     return utc_offset
-
-
- def format_utc_offset(offset: tzinfo) -> str:
-     """Format a UTC offset as a string in the format '+HH:MM' or '-HH:MM'"""
-     utc_offset = offset.utcoffset(datetime.now()).total_seconds()
-     sign = "-" if utc_offset < 0 else "+"
-     hours = int(abs(utc_offset)//3600)
-     minutes = int(abs(utc_offset)%3600/60)
-
-     if hours == 0 and minutes == 0:
-         return "UTC"
-
+ import re
+ from datetime import datetime, tzinfo
+
+ from typing import Union
+
+
+ def get_utc_offset(offset: "Union[str,int]") -> int:
+     """Get the UTC offset in seconds from a string or integer"""
+
+     if isinstance(offset, str):
+         if offset.lower() == "utc" or (offset.lower() == "z"):
+             return 0
+
+         pattern = re.compile(r"([+-]?)(\d{2}):(\d{2})")
+         match = pattern.match(offset)
+
+         if not match:
+             raise ValueError("Offset must be a string in the format '+HH:MM' or '-HH:MM'")
+
+         sign = match.group(1)
+         hours = int(match.group(2))
+         minutes = int(match.group(3))
+         utc_offset = (hours*60 + minutes)*60
+         if sign == "-":
+             utc_offset *= -1
+
+     elif isinstance(offset, int):
+         utc_offset = offset
+
+     else:
+         raise ValueError("Offset must be a string in the format '+HH:MM' or '-HH:MM' or an integer in seconds")
+
+     return utc_offset
+
+
+ def format_utc_offset(offset: tzinfo) -> str:
+     """Format a UTC offset as a string in the format '+HH:MM' or '-HH:MM'"""
+     utc_offset = offset.utcoffset(datetime.now()).total_seconds()
+     sign = "-" if utc_offset < 0 else "+"
+     hours = int(abs(utc_offset)//3600)
+     minutes = int(abs(utc_offset)%3600/60)
+
+     if hours == 0 and minutes == 0:
+         return "UTC"
+
      return f"{sign}{hours:02d}:{minutes:02d}"
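The offset-parsing logic shown in this file can be exercised on its own. The sketch below mirrors the behaviour visible in the diff ("UTC"/"Z", a signed "HH:MM" string, or an integer number of seconds); the name `parse_utc_offset` is hypothetical so it does not shadow the packaged `get_utc_offset`:

```python
import re


def parse_utc_offset(offset):
    """Convert 'UTC', 'Z', '+HH:MM'/'-HH:MM', or an integer to seconds."""
    if isinstance(offset, str):
        if offset.lower() in ("utc", "z"):
            return 0
        match = re.match(r"([+-]?)(\d{2}):(\d{2})", offset)
        if not match:
            raise ValueError("Expected '+HH:MM' or '-HH:MM'")
        sign, hours, minutes = match.group(1), int(match.group(2)), int(match.group(3))
        seconds = (hours * 60 + minutes) * 60
        return -seconds if sign == "-" else seconds
    if isinstance(offset, int):
        return offset  # already seconds
    raise ValueError("Expected a string offset or an integer in seconds")


print(parse_utc_offset("-03:30"))  # -12600
print(parse_utc_offset("Z"))       # 0
```

Note that, as in the packaged code, a half-hour offset such as "-03:30" resolves to a negative whole number of seconds rather than negative hours and positive minutes.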
tsp/tspwarnings.py CHANGED
@@ -1,15 +1,27 @@
- import numpy as np
-
-
- class DuplicateTimesWarning(UserWarning):
-     """For when duplicate times are found in a file."""
-     def __init__(self, times):
-         self.times = times
-
-     def _msg(self, times) -> str:
-         m = f"Duplicate timestamps found: {times[np.where(times.duplicated())[0]]}. That's bad."
-         return m
-
-     def __str__(self):
-         return self._msg(self.times)
-
+ import numpy as np
+
+
+ class DuplicateTimesWarning(UserWarning):
+     """For when duplicate times are found in a file."""
+     def __init__(self, times):
+         self.times = times
+
+     def _msg(self, times) -> str:
+         m = f"Duplicate timestamps found: {times[np.where(times.duplicated())[0]]}. That's bad."
+         return m
+
+     def __str__(self):
+         return self._msg(self.times)
+
+ class NonIncreasingTimesWarning(UserWarning):
+     """For when non-increasing times are found in a file."""
+     def __init__(self, times):
+         self.times = times
+
+     def _msg(self, times) -> str:
+         n_bad = np.sum(np.diff(times.values) <= np.timedelta64(0, 'ns'))
+         m = f"{n_bad} non-increasing timestamps found. That's bad."
+         return m
+
+     def __str__(self):
+         return self._msg(self.times)
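The new `NonIncreasingTimesWarning` counts steps where the difference between consecutive timestamps is not positive. The same check can be sketched with the standard library alone; `count_non_increasing` is a hypothetical helper used only for illustration, not part of the package:

```python
from datetime import datetime


def count_non_increasing(times):
    """Count steps where a timestamp fails to advance past its predecessor."""
    return sum(1 for a, b in zip(times, times[1:]) if b <= a)


times = [
    datetime(2024, 1, 1, 0),
    datetime(2024, 1, 1, 1),
    datetime(2024, 1, 1, 1),  # duplicate: not strictly increasing
    datetime(2024, 1, 1, 0),  # goes backwards
    datetime(2024, 1, 1, 2),
]
print(count_non_increasing(times))  # 2
```

A duplicated timestamp and a backwards jump each count once, matching the `<= 0` comparison in the warning's `_msg`.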
tsp/utils.py CHANGED
@@ -1,101 +1,131 @@
- import pandas as pd
- import numpy as np
-
- import tsp
- from tsp import TSP
-
-
- def resolve_duplicate_times(t: TSP, keep="first") -> TSP:
-     """Eliminate duplicate times in a TSP.
-
-     Parameters
-     ----------
-     tsp : TSP
-         TSP to resolve duplicate times in.
-     keep : str, optional
-         Method to resolve duplicate times. Chosen from "first", "average", "last", "strip"
-         by default "first"
-
-     Returns
-     -------
-     TSP
-         TSP with no duplicated times."""
-     resolver = _get_duplicate_resolver(keep)
-     return resolver(t)
-
-
- def _get_duplicate_resolver(keep: str):
-     if keep == "first":
-         return _first_duplicate_time
-     elif keep == "average":
-         return _average_duplicate_time
-     elif keep == "last":
-         return _last_duplicate_time
-     elif keep == "strip":
-         return _strip_duplicate_time
-     else:
-         raise ValueError(f"Unknown duplicate resolver method: {keep}")
-
-
- def _first_duplicate_time(t: TSP):
-     df = t.wide
-     df = df[~df.index.duplicated(keep="first")]
-
-     time = df.index
-     values = df.drop(['time'], axis=1).values
-     depths = df.drop(['time'], axis=1).columns
-
-     t_new = TSP(times=time, values=values, depths=depths,
-                 latitude=t.latitude, longitude=t.longitude,
-                 site_id=t.site_id, metadata=t.metadata)
-
-     return t_new
-
-
- def _last_duplicate_time(t: TSP):
-     df = t.wide
-     df = df[~df.index.duplicated(keep="last")]
-
-     time = df.index
-     values = df.drop(['time'], axis=1).values
-     depths = df.drop(['time'], axis=1).columns
-
-     t_new = TSP(times=time, values=values, depths=depths,
-                 latitude=t.latitude, longitude=t.longitude,
-                 site_id=t.site_id, metadata=t.metadata)
-
-     return t_new
-
-
- def _strip_duplicate_time(t: TSP):
-     df = t.wide
-     df = df[~df.index.duplicated(keep=False)]
-
-     time = df.index
-     values = df.drop(['time'], axis=1).values
-     depths = df.drop(['time'], axis=1).columns
-
-     t_new = TSP(times=time, values=values, depths=depths,
-                 latitude=t.latitude, longitude=t.longitude,
-                 site_id=t.site_id, metadata=t.metadata)
-
-     return t_new
-
-
- def _average_duplicate_time(t: TSP):
-     singleton = t.wide[~t.wide.index.duplicated(keep=False)]
-     duplicated = t.wide[t.wide.index.duplicated(keep=False)].drop(['time'], axis=1).reset_index()
-     averaged = duplicated.groupby(duplicated['index']).apply(lambda x: x[~x.isna()].mean(numeric_only=True))
-     averaged.insert(0, 'time', averaged.index)
-
-     df = pd.concat([singleton, averaged], ignore_index=False).sort_index()
-
-     time = df.index
-     values = df.drop(['time'], axis=1).values
-     depths = df.drop(['time'], axis=1).columns
-
-     t_new = TSP(times=time, values=values, depths=depths,
-                 latitude=t.latitude, longitude=t.longitude,
-                 site_id=t.site_id, metadata=t.metadata)
-
-     return t_new
+ import pandas as pd
+ import numpy as np
+ import warnings
+
+ import tsp
+ from tsp import TSP
+
+
+ def resolve_duplicate_times(t: TSP, keep="first") -> TSP:
+     """Eliminate duplicate times in a TSP.
+
+     Parameters
+     ----------
+     tsp : TSP
+         TSP to resolve duplicate times in.
+     keep : str, optional
+         Method to resolve duplicate times. Chosen from "first", "average", "last", "strip"
+         by default "first"
+
+     Returns
+     -------
+     TSP
+         TSP with no duplicated times."""
+     resolver = _get_duplicate_resolver(keep)
+     return resolver(t)
+
+
+ def _get_duplicate_resolver(keep: str):
+     if keep == "first":
+         return _first_duplicate_time
+     elif keep == "average":
+         return _average_duplicate_time
+     elif keep == "last":
+         return _last_duplicate_time
+     elif keep == "strip":
+         return _strip_duplicate_time
+     else:
+         raise ValueError(f"Unknown duplicate resolver method: {keep}")
+
+
+ def _first_duplicate_time(t: TSP):
+     df = t.wide
+     df = df[~df.index.duplicated(keep="first")]
+
+     time = df.index
+     values = df.drop(['time'], axis=1).values
+     depths = df.drop(['time'], axis=1).columns
+
+     t_new = TSP(times=time, values=values, depths=depths,
+                 latitude=t.latitude, longitude=t.longitude,
+                 site_id=t.site_id, metadata=t.metadata)
+
+     return t_new
+
+
+ def _last_duplicate_time(t: TSP):
+     df = t.wide
+     df = df[~df.index.duplicated(keep="last")]
+
+     time = df.index
+     values = df.drop(['time'], axis=1).values
+     depths = df.drop(['time'], axis=1).columns
+
+     t_new = TSP(times=time, values=values, depths=depths,
+                 latitude=t.latitude, longitude=t.longitude,
+                 site_id=t.site_id, metadata=t.metadata)
+
+     return t_new
+
+
+ def _strip_duplicate_time(t: TSP):
+     df = t.wide
+     df = df[~df.index.duplicated(keep=False)]
+
+     time = df.index
+     values = df.drop(['time'], axis=1).values
+     depths = df.drop(['time'], axis=1).columns
+
+     t_new = TSP(times=time, values=values, depths=depths,
+                 latitude=t.latitude, longitude=t.longitude,
+                 site_id=t.site_id, metadata=t.metadata)
+
+     return t_new
+
+
+ def _average_duplicate_time(t: TSP):
+     singleton = t.wide[~t.wide.index.duplicated(keep=False)]
+     duplicated = t.wide[t.wide.index.duplicated(keep=False)].drop(['time'], axis=1).reset_index()
+     averaged = duplicated.groupby(duplicated['index']).apply(lambda x: x[~x.isna()].mean(numeric_only=True))
+     averaged.insert(0, 'time', averaged.index)
+
+     df = pd.concat([singleton, averaged], ignore_index=False).sort_index()
+
+     time = df.index
+     values = df.drop(['time'], axis=1).values
+     depths = df.drop(['time'], axis=1).columns
+
+     t_new = TSP(times=time, values=values, depths=depths,
+                 latitude=t.latitude, longitude=t.longitude,
+                 site_id=t.site_id, metadata=t.metadata)
+
+     return t_new
+
+
+ def midnight_offset_24_hr(t: TSP, reverse: bool = False) -> TSP:
+     """ (de-)increment any timestamps that are exactly at midnight by 24 hours.
+
+     Description:
+     -----------
+     Some dataloggers misattribute data at midnight to the previous day
+     which can cause problems when plotting or analyzing data. This function
+     corrects any timestamps that are exactly at midnight by adding/subtracting 24 hours.
+     """
+
+     values = t.values.copy()
+     midnight_indices = np.where(t.times.hour==0)[0]
+     times = t.times.to_numpy().copy()
+     if len(midnight_indices) > 0:
+         if reverse:
+             times[midnight_indices] -= np.timedelta64(24, 'h')
+         else:
+             times[midnight_indices] += np.timedelta64(24, 'h')
+
+     with warnings.catch_warnings():
+         warnings.filterwarnings("ignore", category=tsp.tspwarnings.NonIncreasingTimesWarning)
+
+         t_new = TSP(times=times, values=values, depths=t.depths,
+                     latitude=t.latitude, longitude=t.longitude,
+                     site_id=t.site_id, metadata=t.metadata)
+
+     return t_new
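The new `midnight_offset_24_hr` selects timestamps where `times.hour == 0` and shifts them by a day in either direction. A stdlib-only sketch of the same idea, with plain `datetime` objects standing in for the TSP time index and a hypothetical name, `shift_midnight`:

```python
from datetime import datetime, timedelta


def shift_midnight(times, reverse=False):
    """Shift timestamps falling in hour 0 by +/- 24 hours.

    Mirrors the selection in the diff: only the hour is checked,
    so 00:30 would also be shifted.
    """
    delta = timedelta(hours=-24 if reverse else 24)
    return [t + delta if t.hour == 0 else t for t in times]


times = [datetime(2024, 1, 1, 23, 0), datetime(2024, 1, 2, 0, 0)]
shifted = shift_midnight(times)
print(shifted[1])  # 2024-01-03 00:00:00
```

Applying `reverse=True` to the shifted list restores the original midnight timestamp, which is why the packaged function suppresses `NonIncreasingTimesWarning` while the corrected `TSP` is being constructed.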
tsp/version.py CHANGED
@@ -1 +1 @@
- version="1.8.1"
+ version="1.10.2"
{tsp-1.8.1.dist-info → tsp-1.10.2.dist-info}/METADATA CHANGED
@@ -1,86 +1,95 @@
- Metadata-Version: 2.4
- Name: tsp
- Version: 1.8.1
- Summary: Making permafrost data effortless
- Home-page: https://gitlab.com/permafrostnet/teaspoon
- Author: Nick Brown
- Author-email: nick.brown@carleton.ca
- Classifier: Development Status :: 4 - Beta
- Classifier: Intended Audience :: Developers
- Classifier: License :: OSI Approved :: GNU General Public License (GPL)
- Classifier: Operating System :: POSIX
- Classifier: Programming Language :: Python
- Classifier: Topic :: Software Development :: Libraries
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: pandas
- Requires-Dist: numpy
- Requires-Dist: regex
- Requires-Dist: matplotlib
- Requires-Dist: setuptools
- Provides-Extra: nc
- Requires-Dist: netCDF4; extra == "nc"
- Requires-Dist: pfit==0.2.1; extra == "nc"
- Provides-Extra: plotting
- Requires-Dist: scipy; extra == "plotting"
- Provides-Extra: rbr
- Requires-Dist: pyrsktools; extra == "rbr"
- Requires-Dist: openpyxl; extra == "rbr"
- Provides-Extra: full
- Requires-Dist: scipy; extra == "full"
- Requires-Dist: pfit==0.2.1; extra == "full"
- Requires-Dist: netCDF4; extra == "full"
- Requires-Dist: pyrsktools; extra == "full"
- Requires-Dist: openpyxl; extra == "full"
- Provides-Extra: dev
- Requires-Dist: manuel; extra == "dev"
- Requires-Dist: pytest; extra == "dev"
- Requires-Dist: pytest-cov; extra == "dev"
- Requires-Dist: coverage; extra == "dev"
- Requires-Dist: mock; extra == "dev"
- Dynamic: author
- Dynamic: author-email
- Dynamic: classifier
- Dynamic: description
- Dynamic: description-content-type
- Dynamic: home-page
- Dynamic: license-file
- Dynamic: provides-extra
- Dynamic: requires-dist
- Dynamic: summary
-
- # Teaspoon
-
- ## See the full documentation on the [ReadTheDocs Pages](https://permafrostnet.gitlab.io/teaspoon/source/about.html)
-
- ## [What is it?](https://permafrostnet.gitlab.io/teaspoon/source/about.html)
- `tsp` ('teaspoon') is a python library designed to make working with permafrost ground temperature time series data more straightforward, efficient, and reproducible. Some of the features include:
-
- * Read a variety of common published data formats, datalogger outputs, and model results into a common data structure
-   * GEOtop model output
-   * GTN-P database export csv
-   * NTGS ground temperature report csv
-   * Geoprecision datalogger export
-   * HoboWare datalogger export
- * Export data in a variety of common formats
-   * netcdf
-   * 'GTN-P'-style csv
-   * 'NTGS'-style csv
- * Perform common data transformations
-   * Calculate daily, monthly, or yearly means, ignoring averaging periods with missing data
-   * Switch between "long" and "wide" dataframes
- * Visualize and explore your data with commonly used plots
-   * Trumpet curves
-   * Temperature-time graphs
-   * Colour-contour profiles
-
- ## [Installation](https://permafrostnet.gitlab.io/teaspoon/source/install.html)
-
- ## [Usage Examples](https://permafrostnet.gitlab.io/teaspoon/source/examples.html)
-
- ## [How to contribute](https://permafrostnet.gitlab.io/teaspoon/source/contributions.html)
-
- ## Citation
- If you find this software helpful, please consider using the following citation:
-
- > Brown, N., (2022). tsp ("Teaspoon"): A library for ground temperature data. Journal of Open Source Software, 7(77), 4704, [https://doi.org/10.21105/joss.04704](https://doi.org/10.21105/joss.04704)
+ Metadata-Version: 2.4
+ Name: tsp
+ Version: 1.10.2
+ Summary: Making permafrost data effortless
+ Home-page: https://gitlab.com/permafrostnet/teaspoon
+ Author: Nick Brown
+ Author-email: nick.brown@carleton.ca
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: GNU General Public License (GPL)
+ Classifier: Operating System :: POSIX
+ Classifier: Programming Language :: Python
+ Classifier: Topic :: Software Development :: Libraries
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: pandas
+ Requires-Dist: numpy
+ Requires-Dist: regex
+ Requires-Dist: matplotlib
+ Requires-Dist: setuptools
+ Requires-Dist: netCDF4
+ Requires-Dist: pfit>=0.3.0
+ Requires-Dist: scipy
+ Requires-Dist: pyrsktools
+ Requires-Dist: openpyxl
+ Provides-Extra: nc
+ Requires-Dist: netCDF4; extra == "nc"
+ Requires-Dist: pfit>=0.3.0; extra == "nc"
+ Provides-Extra: plotting
+ Requires-Dist: scipy; extra == "plotting"
+ Provides-Extra: rbr
+ Requires-Dist: pyrsktools; extra == "rbr"
+ Requires-Dist: openpyxl; extra == "rbr"
+ Provides-Extra: full
+ Requires-Dist: scipy; extra == "full"
+ Requires-Dist: pyrsktools; extra == "full"
+ Requires-Dist: netCDF4; extra == "full"
+ Requires-Dist: openpyxl; extra == "full"
+ Requires-Dist: pfit>=0.3.0; extra == "full"
+ Provides-Extra: dev
+ Requires-Dist: manuel; extra == "dev"
+ Requires-Dist: pytest; extra == "dev"
+ Requires-Dist: pytest-cov; extra == "dev"
+ Requires-Dist: coverage; extra == "dev"
+ Requires-Dist: mock; extra == "dev"
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: license-file
+ Dynamic: provides-extra
+ Dynamic: requires-dist
+ Dynamic: summary
+
+ # Teaspoon
+
+ ## See the full documentation on the [ReadTheDocs Pages](https://permafrostnet.gitlab.io/teaspoon/source/about.html)
+
+ ## [What is it?](https://permafrostnet.gitlab.io/teaspoon/source/about.html)
+ `tsp` ('teaspoon') is a python library designed to make working with permafrost ground temperature time series data more straightforward, efficient, and reproducible. Some of the features include:
+
+ * Read a variety of common published data formats, datalogger outputs, and model results into a common data structure
+   * GEOtop model output
+   * GTN-P database export csv
+   * NTGS ground temperature report csv
+   * Geoprecision datalogger export
+   * HoboWare datalogger export
+ * Export data in a variety of common formats
+   * TSP-recommended csv format
+   * netcdf
+   * 'GTN-P'-style csv
+   * 'NTGS'-style csv
+ * Perform common data transformations
+   * Calculate daily, monthly, or yearly means, ignoring averaging periods with missing data
+   * Switch between "long" and "wide" dataframes
+ * Visualize and explore your data with commonly used plots
+   * Trumpet curves
+   * Temperature-time graphs
+   * Colour-contour profiles
+
+ ## [Installation](https://permafrostnet.gitlab.io/teaspoon/source/install.html)
+
+ ## [Usage Examples](https://permafrostnet.gitlab.io/teaspoon/source/examples.html)
+
+ ## [How to contribute](https://permafrostnet.gitlab.io/teaspoon/source/contributions.html)
+
+ ## Data Standard
+ TSP also defines a recommended csv format for ground temperature data (which can also be extended to many other kinds of permafrost data). It is described in the [DATA_STANDARD.md](./DATA_STANDARD.md) file in this directory. Files can be read (using the `read_tsp` function) and written (using the `TSP.to_csv` method) with the teaspoon software package.
+
+ ## Citation
+ If you find this software helpful, please consider using the following citation:
+
+ > Brown, N., (2022). tsp ("Teaspoon"): A library for ground temperature data. Journal of Open Source Software, 7(77), 4704, [https://doi.org/10.21105/joss.04704](https://doi.org/10.21105/joss.04704)
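Among the features the README lists is switching between "long" and "wide" dataframes. Assuming pandas (a declared dependency of the package), the shape of that transformation can be sketched as follows; the column names are illustrative, not the package's schema:

```python
import pandas as pd

# Long format: one row per (time, depth) observation.
long = pd.DataFrame({
    "time": ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-02"],
    "depth": [0.5, 1.0, 0.5, 1.0],
    "temperature": [-1.2, -0.8, -1.4, -0.9],
})

# Wide format: one row per time, one column per depth.
wide = long.pivot(index="time", columns="depth", values="temperature")
print(wide)
```

`pandas.DataFrame.pivot` performs the long-to-wide step; `DataFrame.melt` goes back the other way.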