actinet 0.0.dev3.tar.gz → 0.0.dev5.tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {actinet-0.0.dev3 → actinet-0.0.dev5}/PKG-INFO +35 -13
- {actinet-0.0.dev3 → actinet-0.0.dev5}/README.md +34 -12
- {actinet-0.0.dev3 → actinet-0.0.dev5}/setup.py +1 -1
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/__init__.py +2 -2
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/_version.py +3 -3
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/actinet.py +73 -43
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/models.py +17 -13
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/sslmodel.py +12 -8
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/summarisation.py +12 -25
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet.egg-info/PKG-INFO +35 -13
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet.egg-info/requires.txt +1 -1
- {actinet-0.0.dev3 → actinet-0.0.dev5}/LICENSE.md +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/pyproject.toml +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/setup.cfg +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/accPlot.py +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/circadian.py +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/hmm.py +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/utils/__init__.py +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/utils/collate_outputs.py +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/utils/generate_commands.py +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/utils/utils.py +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet.egg-info/SOURCES.txt +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet.egg-info/dependency_links.txt +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet.egg-info/entry_points.txt +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet.egg-info/top_level.txt +0 -0
- {actinet-0.0.dev3 → actinet-0.0.dev5}/versioneer.py +0 -0
{actinet-0.0.dev3 → actinet-0.0.dev5}/PKG-INFO

`````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: actinet
-Version: 0.0.dev3
+Version: 0.0.dev5
 Summary: Activity detection algorithm compatible with the UK Biobank Accelerometer Dataset
 Home-page: https://github.com/OxWearables/actinet
 Download-URL: https://github.com/OxWearables/actinet
@@ -60,16 +60,16 @@ You are all set! The next time that you want to use `actinet`, open the Anaconda
 
 ```bash
 # Process an AX3 file
-$ actinet sample.cwa
+$ actinet -f sample.cwa
 
 # Or an ActiGraph file
-$ actinet sample.gt3x
+$ actinet -f sample.gt3x
 
 # Or a GENEActiv file
-$ actinet sample.bin
+$ actinet -f sample.bin
 
 # Or a CSV file (see data format below)
-$ actinet sample.csv
+$ actinet -f sample.csv
 ```
 
 ### Troubleshooting
@@ -80,12 +80,34 @@ Some systems may face issues with Java when running the script. If this is your
 conda install -n actinet openjdk=8
 ```
 
+### Offline usage
+
+To use this package offline, one must first download and install the relevant classifier file and model modules.
+This repository offers two ways of doing this.
+
+Run the following code when you have internet access:
+```console
+actinet --cache-classifier
+```
+
+Following this, the actinet classifier can be used as standard without internet access, without needing to specify the flags relating to the model repository.
+
+Alternatively, you can download or git clone the ssl modules from the [ssl-wearables repository](https://github.com/OxWearables/ssl-wearables).
+
+In addition, you can donwload/prepare a custom classifier file.
+
+Once this is downloaded to an appopriate location, you can run the actinet model using:
+
+```console
+actinet -f sample.cwa -c /path/to/classifier.joblib.lzma -m /path/to/ssl-wearables
+```
+
 ### Output files
 
 By default, output files will be stored in a folder named after the input file, `outputs/(unknown)/`, created in the current working directory. You can change the output path with the `-o` flag:
 
 ```console
-$ actinet sample.cwa -o /path/to/some/folder/
+$ actinet -f sample.cwa -o /path/to/some/folder/
 
 <Output summary written to: /path/to/some/folder/sample-outputSummary.json>
 <Time series output written to: /path/to/some/folder/sample-timeSeries.csv.gz>
@@ -103,7 +125,7 @@ See [Data Dictionary](https://biobankaccanalysis.readthedocs.io/en/latest/datadi
 To plot the activity profiles, you can use the -p flag:
 
 ```console
-$ actinet sample.cwa -p
+$ actinet -f sample.cwa -p
 <Output plot written to: data/sample-timeSeries-plot.png>
 ```
 
@@ -138,9 +160,9 @@ To process multiple files you can create a text file in Notepad which includes o
 Example text file *commands.txt*:
 
 ```console
-actinet file1.cwa &
-actinet file2.cwa &
-actinet file3.cwa
+actinet -f file1.cwa &
+actinet -f file2.cwa &
+actinet -f file3.cwa
 :END
 ````
 
@@ -151,9 +173,9 @@ Once this file is created, run `cmd < commands.txt` from the terminal.
 Create a file *command.sh* with:
 
 ```console
-actinet file1.cwa
-actinet file2.cwa
-actinet file3.cwa
+actinet -f file1.cwa
+actinet -f file2.cwa
+actinet -f file3.cwa
 ```
 
 Then, run `bash command.sh` from the terminal.
`````
{actinet-0.0.dev3 → actinet-0.0.dev5}/README.md

`````diff
@@ -38,16 +38,16 @@ You are all set! The next time that you want to use `actinet`, open the Anaconda
 
 ```bash
 # Process an AX3 file
-$ actinet sample.cwa
+$ actinet -f sample.cwa
 
 # Or an ActiGraph file
-$ actinet sample.gt3x
+$ actinet -f sample.gt3x
 
 # Or a GENEActiv file
-$ actinet sample.bin
+$ actinet -f sample.bin
 
 # Or a CSV file (see data format below)
-$ actinet sample.csv
+$ actinet -f sample.csv
 ```
 
 ### Troubleshooting
@@ -58,12 +58,34 @@ Some systems may face issues with Java when running the script. If this is your
 conda install -n actinet openjdk=8
 ```
 
+### Offline usage
+
+To use this package offline, one must first download and install the relevant classifier file and model modules.
+This repository offers two ways of doing this.
+
+Run the following code when you have internet access:
+```console
+actinet --cache-classifier
+```
+
+Following this, the actinet classifier can be used as standard without internet access, without needing to specify the flags relating to the model repository.
+
+Alternatively, you can download or git clone the ssl modules from the [ssl-wearables repository](https://github.com/OxWearables/ssl-wearables).
+
+In addition, you can donwload/prepare a custom classifier file.
+
+Once this is downloaded to an appopriate location, you can run the actinet model using:
+
+```console
+actinet -f sample.cwa -c /path/to/classifier.joblib.lzma -m /path/to/ssl-wearables
+```
+
 ### Output files
 
 By default, output files will be stored in a folder named after the input file, `outputs/(unknown)/`, created in the current working directory. You can change the output path with the `-o` flag:
 
 ```console
-$ actinet sample.cwa -o /path/to/some/folder/
+$ actinet -f sample.cwa -o /path/to/some/folder/
 
 <Output summary written to: /path/to/some/folder/sample-outputSummary.json>
 <Time series output written to: /path/to/some/folder/sample-timeSeries.csv.gz>
@@ -81,7 +103,7 @@ See [Data Dictionary](https://biobankaccanalysis.readthedocs.io/en/latest/datadi
 To plot the activity profiles, you can use the -p flag:
 
 ```console
-$ actinet sample.cwa -p
+$ actinet -f sample.cwa -p
 <Output plot written to: data/sample-timeSeries-plot.png>
 ```
 
@@ -116,9 +138,9 @@ To process multiple files you can create a text file in Notepad which includes o
 Example text file *commands.txt*:
 
 ```console
-actinet file1.cwa &
-actinet file2.cwa &
-actinet file3.cwa
+actinet -f file1.cwa &
+actinet -f file2.cwa &
+actinet -f file3.cwa
 :END
 ````
 
@@ -129,9 +151,9 @@ Once this file is created, run `cmd < commands.txt` from the terminal.
 Create a file *command.sh* with:
 
 ```console
-actinet file1.cwa
-actinet file2.cwa
-actinet file3.cwa
+actinet -f file1.cwa
+actinet -f file2.cwa
+actinet -f file3.cwa
 ```
 
 Then, run `bash command.sh` from the terminal.
`````
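Every usage example above moves the input file from a positional argument to the new `--filepath`/`-f` flag (the argument names appear in the `actinet.py` hunk elsewhere in this diff). A minimal argparse sketch of the behavioural difference:

```python
import argparse

# Old CLI: input file is a required positional argument.
old = argparse.ArgumentParser()
old.add_argument("filepath", help="Enter file to be processed")

# New CLI: input file is an optional --filepath/-f flag.
new = argparse.ArgumentParser()
new.add_argument("--filepath", "-f", help="Enter file to be processed")

# Old style: `actinet sample.cwa`
assert old.parse_args(["sample.cwa"]).filepath == "sample.cwa"
# New style: `actinet -f sample.cwa`
assert new.parse_args(["-f", "sample.cwa"]).filepath == "sample.cwa"
# With the new parser, omitting the flag no longer errors; filepath is None,
# which lets invocations like `actinet --cache-classifier` run with no input file.
assert new.parse_args([]).filepath is None
```

Making the file optional is what allows the new `--cache-classifier` mode to run without an input recording.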
{actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/__init__.py

```diff
@@ -4,8 +4,8 @@ __maintainer__ = "Shing Chan"
 __maintainer_email__ = "shing.chan@ndph.ox.ac.uk"
 __license__ = "See LICENSE file."
 
-
-
+__classifier_version__ = "ssl_ukb_c24_rw_20240204"
+__classifier_md5__ = "11c3f36348dae37da4f99bd6d810bbb2"
 
 from . import _version
```
{actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/_version.py

```diff
@@ -8,11 +8,11 @@ import json
 
 version_json = '''
 {
- "date": "2024-02-
+ "date": "2024-02-06T18:32:30+0000",
 "dirty": false,
 "error": null,
- "full-revisionid": "
- "version": "0.0.dev3"
+ "full-revisionid": "ae03389ab9965e52140c46a6c43e39799ad1c61b",
+ "version": "0.0.dev5"
 }
 ''' # END VERSION_JSON
```
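The versioneer block embeds release metadata as a JSON string literal. A quick check that the new values from this hunk parse as ordinary JSON:

```python
import json

# New embedded metadata as it appears in the 0.0.dev5 hunk.
version_json = '''
{
 "date": "2024-02-06T18:32:30+0000",
 "dirty": false,
 "error": null,
 "full-revisionid": "ae03389ab9965e52140c46a6c43e39799ad1c61b",
 "version": "0.0.dev5"
}
'''

meta = json.loads(version_json)
assert meta["version"] == "0.0.dev5"
assert meta["dirty"] is False
```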
{actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/actinet.py

```diff
@@ -12,16 +12,15 @@ import joblib
 
 import actipy
 
-from actinet import
-from actinet import
+from actinet import __classifier_version__
+from actinet import __classifier_md5__
 from actinet.accPlot import plotTimeSeries
 from actinet.models import ActivityClassifier
 from actinet.sslmodel import SAMPLE_RATE
-from actinet.summarisation import getActivitySummary
+from actinet.summarisation import getActivitySummary
 from actinet.utils.utils import infer_freq
 
-BASE_URL = "https://zenodo.org/records/
-
+BASE_URL = "https://zenodo.org/records/10625542/files/"
 
 def main():
 
@@ -29,7 +28,7 @@ def main():
         description="A tool to predict activities from accelerometer data using a self-supervised Resnet 18 model",
         add_help=True,
     )
-    parser.add_argument("filepath", help="Enter file to be processed")
+    parser.add_argument("--filepath", "-f", help="Enter file to be processed")
     parser.add_argument(
         "--outdir",
         "-o",
@@ -37,10 +36,15 @@ def main():
         default="outputs/",
     )
     parser.add_argument(
-        "--
+        "--classifier-path",
+        "-c",
+        help="Enter custom acitivty classifier file to use",
+        default=None,
     )
     parser.add_argument(
-        "--force-download",
+        "--force-download",
+        action="store_true",
+        help="Force download of classifier file",
     )
     parser.add_argument(
         "--pytorch-device",
@@ -63,9 +67,12 @@ def main():
         help="Plot the predicted activity labels",
     )
     parser.add_argument(
-        "--cache-
+        "--cache-classifier",
         action="store_true",
-        help="Download and cache
+        help="Download and cache classifier file and model modules for offline usage",
+    )
+    parser.add_argument(
+        "--model-repo-path", "-m", help="Enter repository of ssl model", default=None
     )
     parser.add_argument("--quiet", "-q", action="store_true", help="Suppress output")
     args = parser.parse_args()
@@ -74,8 +81,18 @@ def main():
 
     verbose = not args.quiet
 
-
-
+    classifier_path = (
+        pathlib.Path(__file__).parent / f"{__classifier_version__}.joblib.lzma"
+    )
+
+    if args.cache_classifier:
+        load_classifier(
+            classifier_path=classifier_path,
+            model_repo_path=None,
+            check_md5=True,
+            force_download=True,
+            verbose=verbose,
+        )
 
     after = time.time()
     print(f"Done! ({round(after - before,2)}s)")
@@ -95,21 +112,21 @@ def main():
     outdir = os.path.join(args.outdir, basename)
     os.makedirs(outdir, exist_ok=True)
 
-    # Run
+    # Run classifier
     if verbose:
-        print("Loading
-
-    check_md5 = args.
-
-    args.
+        print("Loading classifier...")
+
+    check_md5 = args.classifier_path is None
+    classifier: ActivityClassifier = load_classifier(
+        args.classifier_path or classifier_path, args.model_repo_path, check_md5, args.force_download, verbose
     )
 
-
-
+    classifier.verbose = verbose
+    classifier.device = args.pytorch_device
 
     if verbose:
         print("Running activity classifier...")
-    Y =
+    Y = classifier.predict_from_frame(data)
 
     # Save predicted activities
     timeSeriesFile = f"{outdir}/{basename}-timeSeries.csv.gz"
@@ -118,8 +135,17 @@ def main():
     if verbose:
         print("Time series output written to:", timeSeriesFile)
 
+    # Plot activity profile
+    if args.plot_activity:
+        plotFile = f"{outdir}/{basename}-timeSeries-plot.png"
+        fig = plotTimeSeries(Y)
+        fig.savefig(plotFile, dpi=200, bbox_inches="tight")
+
+        if verbose:
+            print("Output plot written to:", plotFile)
+
     # Summary
-    summary = getActivitySummary(Y, True, True, verbose)
+    summary = getActivitySummary(Y, classifier.labels, True, True, verbose)
 
     # Join the actipy processing info, with acitivity summary data
     outputSummary = {**summary, **info}
@@ -136,7 +162,7 @@ def main():
     if verbose:
         print("\nSummary Stats\n---------------------")
         print(
-            {
+            json.dumps({
                 key: outputSummary[key]
                 for key in [
                     "Filename",
@@ -144,21 +170,11 @@ def main():
                     "WearTime(days)",
                     "NonwearTime(days)",
                     "ReadOK",
-                    "acc-overall-avg(mg)",
                 ]
-                + [f"{label}-
-            }
+                + [f"{label}-overall-avg" for label in ["acc"] + classifier.labels]
+            }, indent=4, cls=NpEncoder)
         )
 
-    # Plot activity profile
-    if args.plot_activity:
-        plotFile = f"{outdir}/{basename}-timeSeries-plot.png"
-        fig = plotTimeSeries(Y)
-        fig.savefig(plotFile, dpi=200, bbox_inches="tight")
-
-        if verbose:
-            print("Output plot written to:", plotFile)
-
     after = time.time()
     print(f"Done! ({round(after - before,2)}s)")
@@ -237,27 +253,41 @@ def resolve_path(path):
     return dirname, filename, extension
 
 
-def
-
+def load_classifier(
+    classifier_path,
+    model_repo_path=None,
+    check_md5=True,
+    force_download=False,
+    verbose=True,
+):
+    """Load trained classifier. Download if not exists."""
 
-    pth = pathlib.Path(
+    pth = pathlib.Path(classifier_path)
 
     if force_download or not pth.exists():
 
-        url = f"{BASE_URL}{
+        url = f"{BASE_URL}{__classifier_version__}.joblib.lzma"
 
-
+        if verbose:
+            print(f"Downloading {url}...")
 
         with urllib.request.urlopen(url) as f_src, open(pth, "wb") as f_dst:
             shutil.copyfileobj(f_src, f_dst)
 
     if check_md5:
-        assert md5(pth) ==
-            "
+        assert md5(pth) == __classifier_md5__, (
+            "Classifier file is corrupted. Please run with --force-download "
             "to download the model file again."
         )
 
-
+    classifier: ActivityClassifier = joblib.load(pth)
+
+    if model_repo_path and pathlib.Path(model_repo_path).exists() and verbose:
+        print(f"Loading model repository from {model_repo_path}.")
+
+    classifier.load_model(model_repo_path)
+
+    return classifier
 
 
 def md5(fname):
```
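The new `load_classifier` asserts that the MD5 digest of the downloaded file matches `__classifier_md5__` before deserialising it. The body of the `md5()` helper is not shown in this diff; a generic sketch of the same integrity check (chunked reading is an assumption here) looks like:

```python
import hashlib
import os
import tempfile

def md5(fname, chunk_size=65536):
    # Hash a file on disk in chunks and return the hex digest, the kind of
    # check load_classifier() performs against __classifier_md5__.
    h = hashlib.md5()
    with open(fname, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demonstrate on a throwaway file instead of a real classifier download.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

digest = md5(path)
# The file digest must equal the digest of the bytes it contains.
assert digest == hashlib.md5(b"hello").hexdigest()
os.remove(path)
```

Keeping the known digest (`__classifier_md5__`) in `__init__.py` next to `__classifier_version__` means a new classifier release only has to update those two constants.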
{actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/models.py

```diff
@@ -1,5 +1,6 @@
 import numpy as np
 import pandas as pd
+import joblib
 from tqdm.auto import tqdm
 from torch.utils.data import DataLoader
 
@@ -13,24 +14,24 @@ class ActivityClassifier:
         device="cpu",
         batch_size=512,
         window_sec=30,
-        weights_path=
+        weights_path=None,
         labels=[],
-        ssl_repo=None,
         repo_tag="v1.0.0",
         hmm_params=None,
         verbose=False,
     ):
         self.device = device
-        self.weights_path = weights_path
         self.repo_tag = repo_tag
         self.batch_size = batch_size
         self.window_sec = window_sec
-        self.state_dict = None
         self.labels = labels
         self.window_len = int(np.ceil(self.window_sec * sslmodel.SAMPLE_RATE))
         self.verbose = verbose
 
-        self.
+        self.model_weights = (
+            sslmodel.get_model_dict(weights_path, device) if weights_path else None
+        )
+        self.model = None
 
         hmm_params = hmm_params or dict()
         self.hmms = hmm.HMM(**hmm_params)
@@ -43,7 +44,7 @@ class ActivityClassifier:
             "batch_size: {self.batch_size}\n"
             "device: {self.device}\n"
             "hmm: {self.hmms}\n"
-            "model: {
+            "model: {model}".format(self=self, model=self.model or "Model has not been loaded.")
         )
 
     def predict_from_frame(self, data):
@@ -72,6 +73,9 @@ class ActivityClassifier:
         return Y
 
     def _predict(self, X):
+        if self.model is None:
+            raise Exception("Model has not been loaded for ActivityClassifier.")
+
         sslmodel.verbose = self.verbose
 
         dataset = sslmodel.NormalDataset(X)
@@ -90,18 +94,18 @@ class ActivityClassifier:
 
         return y_pred
 
-    def
-        model = sslmodel.get_sslnet(
-            self.device,
+    def load_model(self, model_repo=None):
+        self.model = sslmodel.get_sslnet(
             tag=self.repo_tag,
-            local_repo_path=
-
+            local_repo_path=model_repo,
+            pretrained_weights=self.model_weights or True,
             window_sec=self.window_sec,
             num_labels=len(self.labels),
         )
-        model.to(self.device)
+        self.model.to(self.device)
 
-
+    def save(self, output_path):
+        joblib.dump(self, output_path, compress=("lzma", 3))
 
 
 def make_windows(data, window_sec, fn=None, return_index=False, verbose=True):
```
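The new `ActivityClassifier.save` serialises the whole classifier object with `joblib.dump(..., compress=("lzma", 3))`, which is what produces the `.joblib.lzma` classifier files the CLI downloads. A stdlib-only sketch of the same save/load round trip, substituting `pickle` + `lzma` for joblib and a toy stand-in class for `ActivityClassifier`:

```python
import lzma
import os
import pickle
import tempfile

class TinyClassifier:
    # Hypothetical stand-in for ActivityClassifier: any picklable object works.
    def __init__(self, labels):
        self.labels = labels

def save(obj, path):
    # models.py uses joblib.dump(obj, path, compress=("lzma", 3)); the stdlib
    # equivalent is pickling into an LZMA-compressed stream at preset 3.
    with lzma.open(path, "wb", preset=3) as f:
        pickle.dump(obj, f)

def load(path):
    with lzma.open(path, "rb") as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "clf.pkl.lzma")
save(TinyClassifier(["light", "sedentary", "sleep"]), path)
assert load(path).labels == ["light", "sedentary", "sleep"]
```

Because the torch model itself is excluded (`self.model = None` until `load_model` runs, and `_predict` now raises if it is missing), the saved archive stays small and the network weights are restored separately from the ssl-wearables repo.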
{actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/sslmodel.py

```diff
@@ -1,5 +1,6 @@
 """ Helper classes and functions for the SSL model """
 
+from collections import OrderedDict
 import torch
 import torch.nn as nn
 import numpy as np
@@ -184,10 +185,9 @@ class EarlyStopping:
 
 
 def get_sslnet(
-    device,
     tag="v1.0.0",
     local_repo_path=None,
-
+    pretrained_weights=False,
     window_sec: int = 30,
     num_labels: int = 4,
 ):
@@ -196,8 +196,8 @@ def get_sslnet(
 
     :param str device: PyTorch device to use
    :param str tag: Tag on the ssl-wearables repo to check out
-    :param str local_repo_path: Path to local version of the SSL
-    :param bool/
+    :param str local_repo_path: Path to local version of the SSL repository for offline usage
+    :param bool/OrderedDict pretrained_weights: Initialise the model with UKB self-supervised/specified pretrained weights
     :param int window_sec: The length of the window of data in seconds (limited to 5, 10 or 30)
     :param int num_labels: The number of labels to predict
     :return: pytorch SSL model
@@ -218,7 +218,7 @@ def get_sslnet(
             f"harnet{window_sec}",
             source="local",
             class_num=num_labels,
-            pretrained=
+            pretrained=pretrained_weights == True,
         )
 
     else:
@@ -251,16 +251,20 @@ def get_sslnet(
             trust_repo=True,
             source=source,
             class_num=num_labels,
-            pretrained=
+            pretrained=pretrained_weights == True,
             verbose=verbose,
         )
 
-
-
+    if isinstance(pretrained_weights, OrderedDict):
+        sslnet.load_state_dict(pretrained_weights)
 
     return sslnet
 
 
+def get_model_dict(weights_path, device):
+    return torch.load(weights_path, map_location=device)
+
+
 def predict(model, dataloader, device, output_logits=False):
     """
     Iterate over the dataloader and do prediction with a pytorch model.
```
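`pretrained_weights` may now be `False`, `True`, or an `OrderedDict` state dict. The hunk's `pretrained=pretrained_weights == True` is truthy only for the literal `True`, so passing a state dict does not trigger the hub's own pretrained-weight download; the weights are applied afterwards via `load_state_dict`. A self-contained sketch of that dispatch (function name here is illustrative, not from the source):

```python
from collections import OrderedDict

def resolve_pretrained(pretrained_weights):
    # Mirrors the branching in the updated get_sslnet():
    #  - download_pretrained: only when the caller passed the literal True
    #  - apply_state_dict: only when the caller passed an OrderedDict
    download_pretrained = pretrained_weights == True
    apply_state_dict = isinstance(pretrained_weights, OrderedDict)
    return download_pretrained, apply_state_dict

assert resolve_pretrained(True) == (True, False)
assert resolve_pretrained(False) == (False, False)
# A (non-empty) state dict never compares equal to True, so it is loaded
# locally instead of re-downloading hub weights.
state = OrderedDict([("conv1.weight", [0.0])])
assert resolve_pretrained(state) == (False, True)
```

This is why the comparison is written `== True` rather than a bare truthiness test: a non-empty `OrderedDict` is truthy but must not be treated as the "download pretrained weights" flag.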
{actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet/summarisation.py

```diff
@@ -7,11 +7,10 @@ from pandas.tseries.frequencies import to_offset
 from actinet.utils.utils import date_parser, toScreen
 from actinet import circadian
 
-ACTIVITY_LABELS = ["light", "moderate-vigorous", "sedentary", "sleep"]
-
 
 def getActivitySummary(
     data,
+    labels,
     intensityDistribution=False,
     circadianMetrics=False,
     verbose=True,
@@ -19,12 +18,12 @@ def getActivitySummary(
     """
     Calculate overall activity summary from predicted activity label data.
     This is achieved by:
-    1)
-    2) calculate
-    3)
-    4) derive main movement summaries (overall, weekday/weekend, and hour)
+    1) calculate imputation values to replace nan PA metric values
+    2) calculate empirical cumulative distribution function of vector magnitudes
+    3) derive main movement summaries (overall, weekday/weekend, and hour)
 
     :param str data: Input csv.gz file or pandas dataframe of processed epoch data
+    :param list(str) labels: Activity state labels
     :param bool intensityDistribution: Add intensity outputs to dict <summary>
     :param bool circadianMetrics: Add circadian rhythm metrics to dict <summary>
     :param bool verbose: Print verbose output
@@ -46,7 +45,7 @@ def getActivitySummary(
 
     # Main movement summaries
     summary = _summarise(
-        data,
+        data, labels, intensityDistribution, circadianMetrics, verbose,
     )
 
     # Return physical activity summary
@@ -84,9 +83,6 @@ def _summarise(
     startTime = data.index[0]
     summary["FirstDay(0=mon,6=sun)"] = startTime.weekday()
 
-    # Check daylight savings crossings
-    summary = checkDST(data, summary)
-
     # Hours of activity for each recorded day
     epochPeriod = int(pd.Timedelta(freq).total_seconds())
     cols = labels
@@ -171,31 +167,22 @@ def _summarise(
     if circadianMetrics:
         toScreen("=== Calculating circadian metrics ===", verbose)
         summary = circadian.calculatePSD(
-            data, epochPeriod, False,
+            data, epochPeriod, False, labels, summary
         )
         summary = circadian.calculatePSD(
-            data, epochPeriod, True,
+            data, epochPeriod, True, labels, summary
         )
         summary = circadian.calculateFourierFreq(
-            data, epochPeriod, False,
+            data, epochPeriod, False, labels, summary
        )
         summary = circadian.calculateFourierFreq(
-            data, epochPeriod, False,
+            data, epochPeriod, False, labels, summary
         )
         summary = circadian.calculateM10L5(data, epochPeriod, summary)
 
     return summary
 
 
-def checkDST(data, summary={}):
-    if data.index[0].dst() < data.index[-1].dst():
-        summary["quality-daylightSavingsCrossover"] = 1
-    elif data.index[0].dst() > data.index[-1].dst():
-        summary["quality-daylightSavingsCrossover"] = -1
-    else:
-        summary["quality-daylightSavingsCrossover"] = 0
-
-    return summary
 
 
 def imputeMissing(data, extrapolate=True):
@@ -223,7 +210,7 @@ def imputeMissing(data, extrapolate=True):
             data.index[0].floor("D"),
             data.index[-1].ceil("D"),
             freq=to_offset(pd.infer_freq(data.index)),
-
+            inclusive="left",
             name="time",
         ),
         method="nearest",
@@ -303,7 +290,7 @@ def calculateECDF(x, summary):
     )
 
     # Write to summary
-    for level, val in ecdf.
+    for level, val in ecdf.items():
         summary[f"{x.name}-ecdf-{level}mg"] = val
 
     return summary
```
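The `calculateECDF` fix completes the truncated loop with `ecdf.items()`, writing one `{name}-ecdf-{level}mg` key per intensity level. The key-naming scheme can be reproduced with a plain dict (the level/value pairs below are illustrative, not from the source):

```python
# level (mg) -> cumulative fraction; stands in for the ecdf Series in the code.
ecdf = {5: 0.91, 100: 0.42, 500: 0.03}
name = "acc"  # plays the role of x.name

summary = {}
for level, val in ecdf.items():
    summary[f"{name}-ecdf-{level}mg"] = val

assert summary["acc-ecdf-100mg"] == 0.42
assert len(summary) == 3
```

The same `.items()` pattern works whether `ecdf` is a dict or a pandas Series, which is presumably why the loop reads naturally either way.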
{actinet-0.0.dev3 → actinet-0.0.dev5}/src/actinet.egg-info/PKG-INFO

`````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: actinet
-Version: 0.0.dev3
+Version: 0.0.dev5
 Summary: Activity detection algorithm compatible with the UK Biobank Accelerometer Dataset
 Home-page: https://github.com/OxWearables/actinet
 Download-URL: https://github.com/OxWearables/actinet
@@ -60,16 +60,16 @@ You are all set! The next time that you want to use `actinet`, open the Anaconda
 
 ```bash
 # Process an AX3 file
-$ actinet sample.cwa
+$ actinet -f sample.cwa
 
 # Or an ActiGraph file
-$ actinet sample.gt3x
+$ actinet -f sample.gt3x
 
 # Or a GENEActiv file
-$ actinet sample.bin
+$ actinet -f sample.bin
 
 # Or a CSV file (see data format below)
-$ actinet sample.csv
+$ actinet -f sample.csv
 ```
 
 ### Troubleshooting
@@ -80,12 +80,34 @@ Some systems may face issues with Java when running the script. If this is your
 conda install -n actinet openjdk=8
 ```
 
+### Offline usage
+
+To use this package offline, one must first download and install the relevant classifier file and model modules.
+This repository offers two ways of doing this.
+
+Run the following code when you have internet access:
+```console
+actinet --cache-classifier
+```
+
+Following this, the actinet classifier can be used as standard without internet access, without needing to specify the flags relating to the model repository.
+
+Alternatively, you can download or git clone the ssl modules from the [ssl-wearables repository](https://github.com/OxWearables/ssl-wearables).
+
+In addition, you can donwload/prepare a custom classifier file.
+
+Once this is downloaded to an appopriate location, you can run the actinet model using:
+
+```console
+actinet -f sample.cwa -c /path/to/classifier.joblib.lzma -m /path/to/ssl-wearables
+```
+
 ### Output files
 
 By default, output files will be stored in a folder named after the input file, `outputs/(unknown)/`, created in the current working directory. You can change the output path with the `-o` flag:
 
 ```console
-$ actinet sample.cwa -o /path/to/some/folder/
+$ actinet -f sample.cwa -o /path/to/some/folder/
 
 <Output summary written to: /path/to/some/folder/sample-outputSummary.json>
 <Time series output written to: /path/to/some/folder/sample-timeSeries.csv.gz>
@@ -103,7 +125,7 @@ See [Data Dictionary](https://biobankaccanalysis.readthedocs.io/en/latest/datadi
 To plot the activity profiles, you can use the -p flag:
 
 ```console
-$ actinet sample.cwa -p
+$ actinet -f sample.cwa -p
 <Output plot written to: data/sample-timeSeries-plot.png>
 ```
 
@@ -138,9 +160,9 @@ To process multiple files you can create a text file in Notepad which includes o
 Example text file *commands.txt*:
 
 ```console
-actinet file1.cwa &
-actinet file2.cwa &
-actinet file3.cwa
+actinet -f file1.cwa &
+actinet -f file2.cwa &
+actinet -f file3.cwa
 :END
 ````
 
@@ -151,9 +173,9 @@ Once this file is created, run `cmd < commands.txt` from the terminal.
 Create a file *command.sh* with:
 
 ```console
-actinet file1.cwa
-actinet file2.cwa
-actinet file3.cwa
+actinet -f file1.cwa
+actinet -f file2.cwa
+actinet -f file3.cwa
 ```
 
 Then, run `bash command.sh` from the terminal.
`````