halib 0.1.60__tar.gz → 0.1.65__tar.gz
This diff compares the contents of two publicly released versions of the package as they appear in their public registry. It is provided for informational purposes only.
- {halib-0.1.60 → halib-0.1.65}/.gitignore +2 -0
- halib-0.1.65/MANIFEST.in +4 -0
- {halib-0.1.60 → halib-0.1.65}/PKG-INFO +48 -7
- {halib-0.1.60 → halib-0.1.65}/README.md +7 -1
- {halib-0.1.60 → halib-0.1.65}/halib/research/perfcalc.py +135 -83
- halib-0.1.65/halib/utils/gpu_mon.py +58 -0
- {halib-0.1.60 → halib-0.1.65}/halib/utils/video.py +16 -2
- {halib-0.1.60 → halib-0.1.65}/halib.egg-info/PKG-INFO +48 -7
- {halib-0.1.60 → halib-0.1.65}/halib.egg-info/SOURCES.txt +1 -0
- {halib-0.1.60 → halib-0.1.65}/setup.py +1 -1
- halib-0.1.60/MANIFEST.in +0 -3
- {halib-0.1.60 → halib-0.1.65}/GDriveFolder.txt +0 -0
- {halib-0.1.60 → halib-0.1.65}/LICENSE.txt +0 -0
- {halib-0.1.60 → halib-0.1.65}/guide_publish_pip.pdf +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/__init__.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/common.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/cuda.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/filetype/__init__.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/filetype/csvfile.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/filetype/jsonfile.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/filetype/textfile.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/filetype/videofile.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/filetype/yamlfile.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/online/__init__.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/online/gdrive.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/online/gdrive_mkdir.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/online/gdrive_test.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/online/projectmake.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/research/__init__.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/research/dataset.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/research/perftb.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/research/plot.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/research/torchloader.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/research/wandb_op.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/rich_color.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/system/__init__.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/system/cmd.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/system/filesys.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/utils/__init__.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/utils/dataclass_util.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/utils/dict_op.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/utils/listop.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib/utils/tele_noti.py +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib.egg-info/dependency_links.txt +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib.egg-info/requires.txt +0 -0
- {halib-0.1.60 → halib-0.1.65}/halib.egg-info/top_level.txt +0 -0
- {halib-0.1.60 → halib-0.1.65}/setup.cfg +0 -0
halib-0.1.65/MANIFEST.in
ADDED

{halib-0.1.60 → halib-0.1.65}/PKG-INFO
@@ -1,23 +1,66 @@
-Metadata-Version: 2.1
+Metadata-Version: 2.4
 Name: halib
-Version: 0.1.60
+Version: 0.1.65
 Summary: Small library for common tasks
 Author: Hoang Van Ha
 Author-email: hoangvanhauit@gmail.com
-License: UNKNOWN
-Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 3
 Classifier: License :: OSI Approved :: MIT License
 Classifier: Operating System :: OS Independent
 Requires-Python: >=3.8
 Description-Content-Type: text/markdown
 License-File: LICENSE.txt
+Requires-Dist: arrow
+Requires-Dist: click
+Requires-Dist: enlighten
+Requires-Dist: kaleido==0.1.*
+Requires-Dist: loguru
+Requires-Dist: more-itertools
+Requires-Dist: moviepy
+Requires-Dist: networkx
+Requires-Dist: numpy
+Requires-Dist: omegaconf
+Requires-Dist: opencv-python
+Requires-Dist: pandas
+Requires-Dist: Pillow
+Requires-Dist: Pyarrow
+Requires-Dist: pycurl
+Requires-Dist: python-telegram-bot
+Requires-Dist: requests
+Requires-Dist: rich
+Requires-Dist: scikit-learn
+Requires-Dist: matplotlib
+Requires-Dist: seaborn
+Requires-Dist: plotly
+Requires-Dist: pygwalker
+Requires-Dist: tabulate
+Requires-Dist: itables
+Requires-Dist: timebudget
+Requires-Dist: tqdm
+Requires-Dist: tube_dl
+Requires-Dist: wandb
+Requires-Dist: dataclass-wizard
+Dynamic: author
+Dynamic: author-email
+Dynamic: classifier
+Dynamic: description
+Dynamic: description-content-type
+Dynamic: license-file
+Dynamic: requires-dist
+Dynamic: requires-python
+Dynamic: summary
 
 Helper package for coding and automation
 
-**Version 0.1.60**
+**Version 0.1.65**
+
++ now use `uv` for venv management
++ `research/perfcalc`: support both torchmetrics and custom metrics for performance calculation
+
+**Version 0.1.61**
 
 + add `util/video`: add `VideoUtils` class to handle common video-related tasks
++ add `util/gpu_mon`: add `GPUMonitor` class to monitor GPU usage and performance
 
 **Version 0.1.59**
 
@@ -145,5 +188,3 @@ New Features
 New Features
 
 + add support to upload local to google drive.
-
-
{halib-0.1.60 → halib-0.1.65}/README.md
@@ -1,8 +1,14 @@
 Helper package for coding and automation
 
-**Version 0.1.60**
+**Version 0.1.65**
+
++ now use `uv` for venv management
++ `research/perfcalc`: support both torchmetrics and custom metrics for performance calculation
+
+**Version 0.1.61**
 
 + add `util/video`: add `VideoUtils` class to handle common video-related tasks
++ add `util/gpu_mon`: add `GPUMonitor` class to monitor GPU usage and performance
 
 **Version 0.1.59**
 
{halib-0.1.60 → halib-0.1.65}/halib/research/perfcalc.py
@@ -12,6 +12,7 @@ from abc import ABC, abstractmethod
 from ..filetype import csvfile
 from ..common import now_str
 from ..research.perftb import PerfTB
+from collections import OrderedDict
 
 # try to import torch, and torchmetrics
 try:
@@ -62,11 +63,12 @@ REQUIRED_COLS = ["experiment", "dataset"]
 CSV_FILE_POSTFIX = "__perf"
 
 class PerfCalc(ABC): # Abstract base class for performance calculation
+
     @abstractmethod
-    def
+    def get_experiment_name(self):
         """
-        Return
-
+        Return the name of the experiment.
+        This function should be overridden by the subclass if needed.
         """
         pass
 
@@ -79,7 +81,22 @@ class PerfCalc(ABC): # Abstract base class for performance calculation
         pass
 
     @abstractmethod
-    def
+    def get_metrics_info(self):
+        """
+        Return a list of metric names to be used for performance calculation OR a dictionaray with keys as metric names and values as metric instances of torchmetrics.Metric. For example: {"accuracy": Accuracy(), "precision": Precision()}
+
+        """
+        pass
+
+    def calc_exp_outdict_custom_fields(self, outdict, *args, **kwargs):
+        """Can be overridden by the subclass to add custom fields to the output dictionary.
+        ! must return the modified outdict, and a ordered list of custom fields to be added to the output dictionary.
+        """
+        return outdict, []
+
+    # ! can be override, but ONLY if torchmetrics are used
+    # Prepare the exp data for torch metrics.
+    def prepare_torch_metrics_exp_data(self, metric_names, *args, **kwargs):
         """
         Prepare the data for metrics.
         This function should be overridden by the subclass if needed.
@@ -88,76 +105,113 @@ class PerfCalc(ABC): # Abstract base class for performance calculation
         """
         pass
 
-
-    def get_experiment_name(self):
+    def __validate_metrics_info(self, metrics_info):
         """
-
-        This function should be overridden by the subclass if needed.
+        Validate the metrics_info to ensure it is a list or a dictionary with valid metric names and instances.
         """
-
+        if not isinstance(metrics_info, (list, dict)):
+            raise TypeError(f"Metrics info must be a list or a dictionary, got {type(metrics_info).__name__}")
+
+        if isinstance(metrics_info, dict):
+            for k, v in metrics_info.items():
+                if not isinstance(k, str):
+                    raise TypeError(f"Key '{k}' is not a string")
+                if not isinstance(v, Metric):
+                    raise TypeError(f"Value for key '{k}' is not a torchmetrics.Metric (got {type(v).__name__})")
+        elif isinstance(metrics_info, list):
+            for metric in metrics_info:
+                if not isinstance(metric, str):
+                    raise TypeError(f"Metric '{metric}' is not a string")
+        return metrics_info
+    def __calc_exp_perf_metrics(self, *args, **kwargs):
+        """
+        Calculate the performance metrics for the experiment.
+        """
+        metrics_info = self.__validate_metrics_info(self.get_metrics_info())
+        USED_TORCHMETRICS = isinstance(metrics_info, dict)
+        metric_names = metrics_info if isinstance(metrics_info, list) else list(metrics_info.keys())
+        out_dict = {metric: None for metric in metric_names}
+        out_dict["dataset"] = self.get_dataset_name()
+        out_dict["experiment"] = self.get_experiment_name()
+        out_dict, custom_fields = self.calc_exp_outdict_custom_fields(
+            outdict=out_dict, *args, **kwargs
+        )
+        if USED_TORCHMETRICS:
+            torch_metrics_dict = self.get_metrics_info()
+            all_metric_data = self.prepare_torch_metrics_exp_data(
+                metric_names, *args, **kwargs
+            )
+            metric_col_names = []
+            for metric in metric_names:
+                if metric not in all_metric_data:
+                    raise ValueError(f"Metric '{metric}' not found in provided data.")
+                tmetric = torch_metrics_dict[metric] # torchmetrics instance
+                metric_data = all_metric_data[metric] # should be a dict of args/kwargs
+                # Inspect expected parameters for the metric's update() method
+                sig = inspect.signature(tmetric.update)
+                expected_args = list(sig.parameters.values())
+                # Prepare args in correct order
+                if isinstance(metric_data, dict):
+                    # Match dict keys to parameter names
+                    args = [metric_data[param.name] for param in expected_args]
+                elif isinstance(metric_data, (list, tuple)):
+                    args = metric_data
+                else:
+                    raise TypeError(f"Unsupported data format for metric '{metric}'")
 
-
-
-
+                # Call update and compute
+                if len(expected_args) == 1:
+                    tmetric.update(args) # pass as single argument
+                else:
+                    tmetric.update(*args) # unpack multiple arguments
+                computed_value = tmetric.compute()
+                # ensure the computed value converted to a scala value or list array
+                if isinstance(computed_value, torch.Tensor):
+                    if computed_value.numel() == 1:
+                        computed_value = computed_value.item()
+                    else:
+                        computed_value = computed_value.tolist()
+                col_name = f"metric_{metric}" if "metric_" not in metric else metric
+                metric_col_names.append(col_name)
+                out_dict[col_name] = computed_value
+        else:
+            # If torchmetrics are not used, calculate metrics using the custom method
+            metric_rs_dict = self.calc_exp_perf_metrics(
+                metric_names, *args, **kwargs)
+            for metric in metric_names:
+                if metric not in metric_rs_dict:
+                    raise ValueError(f"Metric '{metric}' not found in provided data.")
+                col_name = f"metric_{metric}" if "metric_" not in metric else metric
+                out_dict[col_name] = metric_rs_dict[metric]
+            metric_col_names = [f"metric_{metric}" for metric in metric_names]
+        ordered_cols = REQUIRED_COLS + custom_fields + metric_col_names
+        # create a new ordered dictionary with the correct order
+        out_dict = OrderedDict((col, out_dict[col]) for col in ordered_cols if col in out_dict)
+        return out_dict
+
+    # ! only need to override this method if torchmetrics are not used
+    def calc_exp_perf_metrics(self, metric_names, *args, **kwargs):
         """
-
+        Calculate the performance metrics for the experiment, but not using torchmetrics.
+        This function should be overridden by the subclass if needed.
+        Must return a dictionary with keys as metric names and values as the calculated metrics.
+        """
+        raise NotImplementedError("calc_exp_perf_metrics() must be overridden by the subclass if torchmetrics are not used.")
+
 
     #! custom kwargs:
     #! outfile - if provided, will save the output to a CSV file with the given path
     #! outdir - if provided, will save the output to a CSV file in the given directory with a generated filename
     #! return_df - if True, will return a DataFrame instead of a dictionary
 
-    def
+    def calc_save_exp_perfs(self, *args, **kwargs):
         """
         Calculate the metrics.
         This function should be overridden by the subclass if needed.
         Must return a dictionary with keys as metric names and values as the calculated metrics.
         """
-
-
-        out_dict['dataset'] = self.get_dataset_name()
-        out_dict['experiment'] = self.get_experiment_name()
-        out_dict, custom_fields = self.calc_exp_outdict_custom_fields(
-            outdict=out_dict, *args, **kwargs
-        )
-        torch_metrics_dict = self.get_exp_torch_metrics()
-        all_metric_data = self.prepare_exp_data_for_metrics(
-            metric_names, *args, **kwargs
-        )
-        metric_col_names = []
-        for metric in metric_names:
-            if metric not in all_metric_data:
-                raise ValueError(f"Metric '{metric}' not found in provided data.")
-            tmetric = torch_metrics_dict[metric] # torchmetrics instance
-            metric_data = all_metric_data[metric] # should be a dict of args/kwargs
-            # Inspect expected parameters for the metric's update() method
-            sig = inspect.signature(tmetric.update)
-            expected_args = list(sig.parameters.values())
-            # Prepare args in correct order
-            if isinstance(metric_data, dict):
-                # Match dict keys to parameter names
-                args = [metric_data[param.name] for param in expected_args]
-            elif isinstance(metric_data, (list, tuple)):
-                args = metric_data
-            else:
-                raise TypeError(f"Unsupported data format for metric '{metric}'")
-
-            # Call update and compute
-            if len(expected_args) == 1:
-                tmetric.update(args) # pass as single argument
-            else:
-                tmetric.update(*args) # unpack multiple arguments
-            computed_value = tmetric.compute()
-            # ensure the computed value converted to a scala value or list array
-            if isinstance(computed_value, torch.Tensor):
-                if computed_value.numel() == 1:
-                    computed_value = computed_value.item()
-                else:
-                    computed_value = computed_value.tolist()
-            col_name = f"metric_{metric}" if 'metric_' not in metric else metric
-            metric_col_names.append(col_name)
-            out_dict[col_name] = computed_value
-
+        out_dict = self.__calc_exp_perf_metrics(*args, **kwargs)
+        # pprint(f"Output Dictionary: {out_dict}")
         # check if any kwargs named "outfile"
        csv_outfile = kwargs.get("outfile", None)
         if csv_outfile is not None:
@@ -171,7 +225,8 @@ class PerfCalc(ABC): # Abstract base class for performance calculation
 
         # convert out_dict to a DataFrame
         df = pd.DataFrame([out_dict])
-
+        # get the orders of the columns as the orders or the keys in out_dict
+        ordered_cols = list(out_dict.keys())
         df = df[ordered_cols] # reorder columns
 
         if csv_outfile:
@@ -182,9 +237,17 @@ class PerfCalc(ABC): # Abstract base class for performance calculation
         else:
             return out_dict, csv_outfile
 
+    @staticmethod
+    def default_exp_csv_filter_fn(exp_file_name: str) -> bool:
+        """
+        Default filter function for experiments.
+        Returns True if the experiment name does not start with "test_" or "debug_".
+        """
+        return "__perf.csv" in exp_file_name
+
     @classmethod
     def gen_perf_report_for_multip_exps(
-        cls, indir: str,
+        cls, indir: str, exp_csv_filter_fn=default_exp_csv_filter_fn, csv_sep=";"
     ) -> PerfTB:
         """
         Generate a performance report by scanning experiment subdirectories.
@@ -289,12 +352,12 @@ class PerfCalc(ABC): # Abstract base class for performance calculation
         ]
         if len(exp_dirs) == 0:
             csv_perf_files = glob.glob(
-                os.path.join(indir, f"
+                os.path.join(indir, f"*.csv")
             )
             csv_perf_files = [
                 file_item
                 for file_item in csv_perf_files
-                if
+                if exp_csv_filter_fn(file_item)
             ]
         else:
             # multiple experiment directories found
@@ -302,33 +365,22 @@ class PerfCalc(ABC): # Abstract base class for performance calculation
             for exp_dir in exp_dirs:
                 # pprint(f"Searching in experiment directory: {exp_dir}")
                 matched = glob.glob(
-                    os.path.join(exp_dir, f"
+                    os.path.join(exp_dir, f"*.csv")
                 )
+                matched = [
+                    file_item
+                    for file_item in matched
+                    if exp_csv_filter_fn(file_item)
+                ]
                 csv_perf_files.extend(matched)
 
         assert (
             len(csv_perf_files) > 0
-        ), f"No CSV files matching pattern '{
+        ), f"No CSV files matching pattern '{exp_csv_filter_fn}' found in the experiment directories."
 
-        assert len(csv_perf_files) > 0, f"No CSV files matching pattern '{
+        assert len(csv_perf_files) > 0, f"No CSV files matching pattern '{exp_csv_filter_fn}' found in the experiment directories."
 
         all_exp_perf_df = get_df_for_all_exp_perf(csv_perf_files, csv_sep=csv_sep)
         csvfile.fn_display_df(all_exp_perf_df)
         perf_tb = mk_perftb_report(all_exp_perf_df)
-        return perf_tb
-
-
-def main():
-    indir = "./zreport/test"
-    report_outfile = "./zreport/all.csv"
-    exp_perf_csv_pattern = "__perf"
-    csv_sep = ";"
-    perftb = PerfCalc.gen_perf_report_for_multip_exps(
-        indir, exp_perf_csv_pattern, csv_sep
-    )
-    perftb.to_csv(report_outfile, sep=csv_sep)
-    inspect(perftb)
-    perftb.plot(save_path="./zreport/test_csv.svg", open_plot=True)
-
-if __name__ == "__main__":
-    main()
+        return perf_tb
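To make the reworked perfcalc contract concrete: `get_metrics_info()` now returns either a dict of torchmetrics instances (torchmetrics path) or a plain list of metric names (custom path, which then requires overriding `calc_exp_perf_metrics()`). The sketch below is a minimal, hypothetical subclass using the torchmetrics path; the class name, dataset/experiment names, tensors, and output directory are illustrative only, and it assumes the methods shown are the only abstract ones that need implementing.

```python
# Hypothetical PerfCalc subclass exercising the torchmetrics path added in 0.1.65.
import torch
from torchmetrics.classification import BinaryAccuracy
from halib.research.perfcalc import PerfCalc


class DemoExp(PerfCalc):
    def get_experiment_name(self):
        return "exp_demo"        # illustrative experiment name

    def get_dataset_name(self):
        return "toy_dataset"     # illustrative dataset name

    def get_metrics_info(self):
        # A dict selects the torchmetrics path; a plain list of names would
        # instead route through calc_exp_perf_metrics(), which the subclass
        # must then override.
        return {"accuracy": BinaryAccuracy()}

    def prepare_torch_metrics_exp_data(self, metric_names, *args, **kwargs):
        # Keys must match get_metrics_info(); each value is matched by name
        # to the parameters of the metric's update() method.
        return {
            "accuracy": {
                "preds": torch.tensor([1, 0, 1, 1]),
                "target": torch.tensor([1, 0, 0, 1]),
            }
        }


out_dict, csv_path = DemoExp().calc_save_exp_perfs(outdir="./zreport/exp_demo")
```

With several such experiments written to a common directory, the new classmethod `PerfCalc.gen_perf_report_for_multip_exps(indir, exp_csv_filter_fn=..., csv_sep=";")` collects the `__perf.csv` files (via `default_exp_csv_filter_fn`) and builds a `PerfTB` report.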
halib-0.1.65/halib/utils/gpu_mon.py
ADDED
@@ -0,0 +1,58 @@
+# install `pynvml_utils` package first
+# see this repo: https://github.com/gpuopenanalytics/pynvml
+from pynvml_utils import nvidia_smi
+import time
+import threading
+from rich.pretty import pprint
+
+class GPUMonitor:
+    def __init__(self, gpu_index=0, interval=0.01):
+        self.nvsmi = nvidia_smi.getInstance()
+        self.gpu_index = gpu_index
+        self.interval = interval
+        self.gpu_stats = []
+        self._running = False
+        self._thread = None
+
+    def _monitor(self):
+        while self._running:
+            stats = self.nvsmi.DeviceQuery("power.draw, memory.used")["gpu"][
+                self.gpu_index
+            ]
+            # pprint(stats)
+            self.gpu_stats.append(
+                {
+                    "power": stats["power_readings"]["power_draw"],
+                    "power_unit": stats["power_readings"]["unit"],
+                    "memory": stats["fb_memory_usage"]["used"],
+                    "memory_unit": stats["fb_memory_usage"]["unit"],
+                }
+            )
+            time.sleep(self.interval)
+
+    def start(self):
+        if not self._running:
+            self._running = True
+            # clear previous stats
+            self.gpu_stats.clear()
+            self._thread = threading.Thread(target=self._monitor)
+            self._thread.start()
+
+    def stop(self):
+        if self._running:
+            self._running = False
+            self._thread.join()
+            # clear the thread reference
+            self._thread = None
+
+    def get_stats(self):
+        ## return self.gpu_stats
+        assert self._running is False, "GPU monitor is still running. Stop it first."
+
+        powers = [s["power"] for s in self.gpu_stats if s["power"] is not None]
+        memories = [s["memory"] for s in self.gpu_stats if s["memory"] is not None]
+        avg_power = sum(powers) / len(powers) if powers else 0
+        max_memory = max(memories) if memories else 0
+        # power_unit = self.gpu_stats[0]["power_unit"] if self.gpu_stats else "W"
+        # memory_unit = self.gpu_stats[0]["memory_unit"] if self.gpu_stats else "MiB"
+        return {"gpu_avg_power": avg_power, "gpu_avg_max_memory": max_memory}
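A short usage sketch for the new `GPUMonitor` follows. It assumes the `pynvml_utils` package is installed and an NVIDIA GPU is visible; the two-second sleep is a placeholder for a real workload.

```python
# Hypothetical usage of GPUMonitor from halib/utils/gpu_mon.py.
import time
from halib.utils.gpu_mon import GPUMonitor

mon = GPUMonitor(gpu_index=0, interval=0.1)  # poll GPU 0 every 100 ms
mon.start()                                  # sampling runs in a background thread
time.sleep(2)                                # placeholder for the actual GPU workload
mon.stop()                                   # get_stats() asserts the monitor is stopped
print(mon.get_stats())  # e.g. {"gpu_avg_power": ..., "gpu_avg_max_memory": ...}
```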
{halib-0.1.60 → halib-0.1.65}/halib/utils/video.py
@@ -5,15 +5,17 @@ from ..system import filesys as fs
 
 
 class VideoUtils:
+
     @staticmethod
-    def
+    def _default_meta_extractor(video_path):
+        """Default video metadata extractor function."""
         # Open the video file
         cap = cv2.VideoCapture(video_path)
 
         # Check if the video was opened successfully
         if not cap.isOpened():
             print(f"Error: Could not open video file {video_path}")
-            return None
+            return None
 
         # Get the frame count
         frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
@@ -23,6 +25,7 @@ class VideoUtils:
 
         # Release the video capture object
         cap.release()
+
         meta_dict = {
             "video_path": video_path,
             "frame_count": frame_count,
@@ -31,6 +34,17 @@ class VideoUtils:
         return meta_dict
 
     @staticmethod
+    def get_video_meta_dict(video_path, meta_dict_extractor_func=None):
+        assert os.path.exists(video_path), f"Video file {video_path} does not exist"
+        if meta_dict_extractor_func and callable(meta_dict_extractor_func):
+            assert meta_dict_extractor_func.__code__.co_argcount == 1, "meta_dict_extractor_func must take exactly one argument (video_path)"
+            meta_dict = meta_dict_extractor_func(video_path)
+            assert isinstance(meta_dict, dict), "meta_dict_extractor_func must return a dictionary"
+            assert 'video_path' in meta_dict, "meta_dict must contain 'video_path'"
+        else:
+            meta_dict = VideoUtils._default_meta_extractor(video_path=video_path)
+        return meta_dict
+    @staticmethod
     def get_video_dir_meta_df(video_dir, video_exts=['.mp4', '.avi', '.mov', '.mkv'], search_recursive=False, csv_outfile=None):
         assert os.path.exists(video_dir), f"Video directory {video_dir} does not exist"
         video_files = fs.filter_files_by_extension(video_dir, video_exts, recursive=search_recursive)
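To illustrate the new `get_video_meta_dict` wrapper, here is a hypothetical call with and without a custom extractor; the file path is made up. A custom extractor must accept exactly one argument and return a dict that contains `'video_path'`.

```python
# Hypothetical usage of VideoUtils.get_video_meta_dict from halib/utils/video.py.
from halib.utils.video import VideoUtils

# Default OpenCV-based extractor (frame count, etc.).
meta = VideoUtils.get_video_meta_dict("clips/sample.mp4")

# Custom single-argument extractor returning a dict with 'video_path'.
def tiny_extractor(video_path):
    return {"video_path": video_path, "source": "custom"}

meta_custom = VideoUtils.get_video_meta_dict(
    "clips/sample.mp4", meta_dict_extractor_func=tiny_extractor
)
```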
{halib-0.1.60 → halib-0.1.65}/halib.egg-info/PKG-INFO
Same two hunks as {halib-0.1.60 → halib-0.1.65}/PKG-INFO above: Metadata-Version 2.1 → 2.4, Version 0.1.60 → 0.1.65, the License/Platform UNKNOWN lines removed, the Requires-Dist and Dynamic metadata fields added, and the embedded README description updated with the 0.1.65 and 0.1.61 changelog entries.
halib-0.1.60/MANIFEST.in
DELETED

The remaining files listed above with +0 -0 are unchanged between 0.1.60 and 0.1.65.