pylantir 0.0.8__tar.gz → 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: pylantir
- Version: 0.0.8
+ Version: 0.1.0
  Summary: Python - DICOM Modality WorkList
  Author-email: Milton Camacho <miltoncamachoicc@gmail.com>
  Requires-Python: >=3.11.1
@@ -31,7 +31,7 @@ Requires-Dist: flake8-formatter_junit_xml ; extra == "test"
  Requires-Dist: flake8 ; extra == "test"
  Requires-Dist: flake8-pyproject ; extra == "test"
  Requires-Dist: pre-commit==3.3.1 ; extra == "test"
- Requires-Dist: pylint==2.17.4 ; extra == "test"
+ Requires-Dist: pylint==3.3.6 ; extra == "test"
  Requires-Dist: pylint_junit ; extra == "test"
  Requires-Dist: pytest-cov==4.0.0 ; extra == "test"
  Requires-Dist: pytest-mock<3.10.1 ; extra == "test"
@@ -82,12 +82,19 @@ pylantir start --ip 127.0.0.1 --port 4242 --AEtitle MWL_SERVER --pylantir_config

  ## Tests

- Once your modality worklist server is running you can test it running the following:
+ If you want to run the tests make sure to clone the repository and run them from there.
+
+ Git clone the repository:
+
+ ```bash
+ git clone https://github.com/miltoncamacho/pylantir
+ cd pylantir/tests
+ ```

  Query the worklist database to check that you have some entries using:

  ```bash
- pylantir query-db
+ python query-db.py
  ```

  Then, you can get a StudyUID from one of the entries to test the MPPS workflow. For example: 1.2.840.10008.3.1.2.3.4.55635351412689303463019139483773956632
@@ -95,19 +102,19 @@ Then, you can get a StudyUID from one of the entries to test the MPPS workflow.
  Take this and run a create action to mark the worklist Procedure Step Status as IN_PROGRESS

  ```bash
- pylantir test-mpps --AEtitle MWL_SERVER --mpps_action create --callingAEtitle MWL_TESTER --ip 127.0.0.1 --port 4242 --study_uid 1.2.840.10008.3.1.2.3.4.55635351412689303463019139483773956632
+ python test-mpps.py --AEtitle MWL_SERVER --mpps_action create --callingAEtitle MWL_TESTER --ip 127.0.0.1 --port 4242 --study_uid 1.2.840.10008.3.1.2.3.4.55635351412689303463019139483773956632
  ```

  You can verify that this in fact modified your database re-running:

  ```bash
- pylantir query-db
+ python query-db.py
  ```

  Finally, you can also simulate the pocedure completion efectively updating the Procedure Step Status to COMPLETED or DISCONTINUED:

  ```bash
- pylantir test-mpps --AEtitle MWL_SERVER --mpps_action set --mpps_status COMPLETED --callingAEtitle MWL_TESTER --ip 127.0.0.1 --port 4242 --study_uid 1.2.840.10008.3.1.2.3.4.55635351412689303463019139483773956632 --sop_uid 1.2.840.10008.3.1.2.3.4.187176383255263644225774937658729238426
+ python test-mpps.py --AEtitle MWL_SERVER --mpps_action set --mpps_status COMPLETED --callingAEtitle MWL_TESTER --ip 127.0.0.1 --port 4242 --study_uid 1.2.840.10008.3.1.2.3.4.55635351412689303463019139483773956632 --sop_uid 1.2.840.10008.3.1.2.3.4.187176383255263644225774937658729238426
  ```

  ## Usage
@@ -137,6 +144,8 @@ usage: pylantir [-h] [--AEtitle AETITLE] [--ip IP] [--port PORT] [--pylantir_con
  - **site**: Site ID:string
  - **protocol**: `{"site": "protocol_name", "mapping": "HIS/RIS mapping"}`
  - **redcap2wl**: Dictionary of REDCap fields to worklist fields mapping e.g., `{"redcap_field": "worklist_field"}`
+ - **db_update_interval**: How often to reload the database e
+ - **operation_interval**: What is the time range in a day in which the database will be updated e.g., `{"start_time":[hours,minutes],"end_time":[hours,minutes]}`
  - **--mpps_action {create,set}**: Action to perform for MPPS either create or set
  - **--mpps_status {COMPLETED,DISCONTINUED}**: Status to set for MPPS either COMPLETED or DISCONTINUED
  - **--callingAEtitle CALLINGAETITLE**: Calling AE Title for MPPS, it helps when the MWL is limited to only accept certain AE titles
@@ -152,6 +161,7 @@ As a default pylantir will try to read a JSON structured file with the following
  "db_path": "/path/to/worklist.db",
  "db_echo": "False",
  "db_update_interval": 60,
+ "operation_interval": {"start_time": [0,0],"end_time": [23,59]},
  "allowed_aet": [],
  "site": "792",
  "redcap2wl": {
@@ -35,12 +35,19 @@ pylantir start --ip 127.0.0.1 --port 4242 --AEtitle MWL_SERVER --pylantir_config

  ## Tests

- Once your modality worklist server is running you can test it running the following:
+ If you want to run the tests make sure to clone the repository and run them from there.
+
+ Git clone the repository:
+
+ ```bash
+ git clone https://github.com/miltoncamacho/pylantir
+ cd pylantir/tests
+ ```

  Query the worklist database to check that you have some entries using:

  ```bash
- pylantir query-db
+ python query-db.py
  ```

  Then, you can get a StudyUID from one of the entries to test the MPPS workflow. For example: 1.2.840.10008.3.1.2.3.4.55635351412689303463019139483773956632
@@ -48,19 +55,19 @@ Then, you can get a StudyUID from one of the entries to test the MPPS workflow.
  Take this and run a create action to mark the worklist Procedure Step Status as IN_PROGRESS

  ```bash
- pylantir test-mpps --AEtitle MWL_SERVER --mpps_action create --callingAEtitle MWL_TESTER --ip 127.0.0.1 --port 4242 --study_uid 1.2.840.10008.3.1.2.3.4.55635351412689303463019139483773956632
+ python test-mpps.py --AEtitle MWL_SERVER --mpps_action create --callingAEtitle MWL_TESTER --ip 127.0.0.1 --port 4242 --study_uid 1.2.840.10008.3.1.2.3.4.55635351412689303463019139483773956632
  ```

  You can verify that this in fact modified your database re-running:

  ```bash
- pylantir query-db
+ python query-db.py
  ```

  Finally, you can also simulate the pocedure completion efectively updating the Procedure Step Status to COMPLETED or DISCONTINUED:

  ```bash
- pylantir test-mpps --AEtitle MWL_SERVER --mpps_action set --mpps_status COMPLETED --callingAEtitle MWL_TESTER --ip 127.0.0.1 --port 4242 --study_uid 1.2.840.10008.3.1.2.3.4.55635351412689303463019139483773956632 --sop_uid 1.2.840.10008.3.1.2.3.4.187176383255263644225774937658729238426
+ python test-mpps.py --AEtitle MWL_SERVER --mpps_action set --mpps_status COMPLETED --callingAEtitle MWL_TESTER --ip 127.0.0.1 --port 4242 --study_uid 1.2.840.10008.3.1.2.3.4.55635351412689303463019139483773956632 --sop_uid 1.2.840.10008.3.1.2.3.4.187176383255263644225774937658729238426
  ```

  ## Usage
@@ -90,6 +97,8 @@ usage: pylantir [-h] [--AEtitle AETITLE] [--ip IP] [--port PORT] [--pylantir_con
  - **site**: Site ID:string
  - **protocol**: `{"site": "protocol_name", "mapping": "HIS/RIS mapping"}`
  - **redcap2wl**: Dictionary of REDCap fields to worklist fields mapping e.g., `{"redcap_field": "worklist_field"}`
+ - **db_update_interval**: How often to reload the database e
+ - **operation_interval**: What is the time range in a day in which the database will be updated e.g., `{"start_time":[hours,minutes],"end_time":[hours,minutes]}`
  - **--mpps_action {create,set}**: Action to perform for MPPS either create or set
  - **--mpps_status {COMPLETED,DISCONTINUED}**: Status to set for MPPS either COMPLETED or DISCONTINUED
  - **--callingAEtitle CALLINGAETITLE**: Calling AE Title for MPPS, it helps when the MWL is limited to only accept certain AE titles
@@ -105,6 +114,7 @@ As a default pylantir will try to read a JSON structured file with the following
  "db_path": "/path/to/worklist.db",
  "db_echo": "False",
  "db_update_interval": 60,
+ "operation_interval": {"start_time": [0,0],"end_time": [23,59]},
  "allowed_aet": [],
  "site": "792",
  "redcap2wl": {
@@ -4,7 +4,7 @@ build-backend = "flit_core.buildapi"

  [project]
  name = "pylantir"
- version = "0.0.8"
+ version = "0.1.0"
  authors = [
  {name = "Milton Camacho", email = "miltoncamachoicc@gmail.com"},
  ]
@@ -50,7 +50,7 @@ test = [
  "flake8",
  "flake8-pyproject",
  "pre-commit==3.3.1",
- "pylint==2.17.4",
+ "pylint==3.3.6",
  "pylint_junit",
  "pytest-cov==4.0.0",
  "pytest-mock<3.10.1",
@@ -209,6 +209,9 @@ def main() -> None:
      # Extract the database update interval (default to 60 seconds if missing)
      db_update_interval = config.get("db_update_interval", 60)

+     # Extract the operation interval (default from 00:00 to 23:59 hours if missing)
+     operation_interval = config.get("operation_interval", {"start_time": [0,0], "end_time": [23,59]})
+
      # Extract allowed AE Titles (default to empty list if missing)
      allowed_aet = config.get("allowed_aet", [])

@@ -219,7 +222,6 @@ def main() -> None:
      redcap2wl = config.get("redcap2wl", {})

      # EXtract protocol mapping
-
      protocol = config.get("protocol", {})

      # Create and update the MWL database
@@ -230,6 +232,7 @@ def main() -> None:
          protocol=protocol,
          redcap2wl=redcap2wl,
          interval=db_update_interval,
+         operation_interval=operation_interval,
      )

      # sync_redcap_to_db(
@@ -3,6 +3,16 @@
  "db_echo": "0",
  "db_update_interval": 60,
  "allowed_aet": [],
+ "operation_interval": {
+ "start_time": [
+ 0,
+ 0
+ ],
+ "end_time": [
+ 23,
+ 59
+ ]
+ },
  "mri_visit_session_mapping": {
  "t1_arm_1": "1",
  "t2_arm_1": "2",
@@ -10,7 +20,7 @@
  },
  "site": "792",
  "redcap2wl": {
- "study_id" : "study_id",
+ "study_id": "study_id",
  "family_id": "family_id",
  "youth_dob_y": "youth_dob_y",
  "t1_date": "t1_date",
@@ -22,7 +32,7 @@
  "performing_physician": "performing_physician",
  "station_name": "station_name",
  "status": "performed_procedure_step_status"
- },
+ },
  "protocol": {
  "792": "BRAIN_MRI_3T"
  }
@@ -37,7 +37,10 @@ class WorklistItem(Base):
      procedure_description = Column(String(200))
      protocol_name = Column(String(100))
      station_name = Column(String(100))
-     hisris_coding_designator = Column(String(100))
+     try:
+         hisris_coding_designator = Column(String(100))
+     except:
+         lgr.warning("Could not get hisris_coding_designator check models.py ")
      performed_procedure_step_status = Column(String, default="SCHEDULED")


@@ -8,6 +8,7 @@ from .db_setup import engine
  from .models import WorklistItem
  import time
  import threading
+ from datetime import datetime, time, date, timedelta

  lgr = logging.getLogger(__name__)

@@ -22,7 +23,7 @@ Session = sessionmaker(bind=engine)



- def fetch_redcap_entries(redcap_fields: list) -> list:
+ def fetch_redcap_entries(redcap_fields: list, interval: float) -> list:
      """Fetch REDCap entries using PyCap and return a list of filtered dicts."""
      project = Project(REDCAP_API_URL, REDCAP_API_TOKEN)

@@ -41,7 +42,9 @@ def fetch_redcap_entries(redcap_fields: list) -> list:
      lgr.info(f"Fetching REDCap data for fields: {redcap_fields}")

      # Export data
-     records = project.export_records(fields=redcap_fields, format_type="df")
+     datetime_now = datetime.now()
+     datetime_interval = datetime_now - timedelta(seconds=interval)
+     records = project.export_records(fields=redcap_fields, date_begin=datetime_interval, date_end=datetime_now, format_type="df")

      if records.empty:
          lgr.warning("No records retrieved from REDCap.")
@@ -102,11 +105,13 @@ def sync_redcap_to_db(
      site_id: str,
      protocol: dict,
      redcap2wl: dict,
+     interval: float = 60.0,
  ) -> None:
      """Sync REDCap patient data with the worklist database."""

      if not redcap2wl:
          lgr.error("No field mapping (redcap2wl) provided for syncing.")
+         return

      session = Session()

@@ -120,7 +125,7 @@ def sync_redcap_to_db(
          if i not in redcap_fields:
              redcap_fields.append(i)

-     redcap_entries = fetch_redcap_entries(redcap_fields)
+     redcap_entries = fetch_redcap_entries(redcap_fields, interval)

      for record in redcap_entries:
          study_id = record.get("study_id")
@@ -133,7 +138,8 @@
          ses_id = record.get("mri_instance")

          PatientName = f"cpip-id-{study_id}^fa-{family_id}"
-         PatientID = f"sub-{study_id}_ses-{ses_id}_fam-{family_id}_site-{site_id}"
+         PatientID = f"sub_{study_id}_ses_{ses_id}_fam_{family_id}_site_{site_id}"
+         PatientID_ = f"sub-{study_id}_ses-{ses_id}_fam-{family_id}_site-{site_id}"

          if not PatientID:
              lgr.warning("Skipping record due to missing Study ID.")
@@ -149,8 +155,18 @@ def sync_redcap_to_db(
              .first()
          )

+         existing_entry_ = (
+             session.query(WorklistItem)
+             .filter_by(patient_id=PatientID_)
+             .first()
+         )
          if existing_entry:
              logging.debug(f"Updating existing worklist entry for PatientID {PatientID}")
+         elif existing_entry_:
+             logging.debug(f"Updating existing worklist entry for PatientID {PatientID_}")
+             existing_entry = existing_entry_
+
+         if existing_entry:
              existing_entry.patient_name = PatientName
              existing_entry.patient_id = PatientID
              existing_entry.patient_birth_date = record.get("youth_dob_y", "19000101")
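
The rewritten PatientID uses underscores between each tag and its value, while earlier releases used hyphens; the second lookup above lets rows created under the old format still be found and updated rather than duplicated. For illustration only, with hypothetical sample values:

```python
# Hypothetical record values, not taken from the package or any real study.
study_id, ses_id, family_id, site_id = "1234", "1", "56", "792"

new_id = f"sub_{study_id}_ses_{ses_id}_fam_{family_id}_site_{site_id}"
old_id = f"sub-{study_id}_ses-{ses_id}_fam-{family_id}_site-{site_id}"

print(new_id)  # sub_1234_ses_1_fam_56_site_792
print(old_id)  # sub-1234_ses-1_fam-56_site-792
```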
@@ -205,38 +221,102 @@ def sync_redcap_to_db_repeatedly(
      site_id=None,
      protocol=None,
      redcap2wl=None,
-     interval=60
+     interval=60,
+     operation_interval={"start_time": [00,00], "end_time": [23,59]},
  ):
      """
-     Keep syncing with REDCap in a loop every `interval` seconds.
+     Keep syncing with REDCap in a loop every `interval` seconds,
+     but only between operation_interval[start_time] and operation_interval[end_time].
      Exit cleanly when STOP_EVENT is set.
      """
+     if operation_interval is None:
+         operation_interval = {"start_time": [0, 0], "end_time": [23, 59]}
+
+     start_h, start_m = operation_interval.get("start_time", [0, 0])
+     end_h, end_m = operation_interval.get("end_time", [23, 59])
+     start_time = time(start_h, start_m)
+     end_time = time(end_h, end_m)
+
+     # last_sync_date = datetime.now().date()
+     last_sync_date = datetime.now().date() - timedelta(days=1)
+     interval_sync = interval + 600 # add 10 minutes to the interval to overlap with the previous sync and avoid missing data
+
      while not STOP_EVENT.is_set():
-         try:
-             sync_redcap_to_db(
-                 site_id=site_id,
-                 protocol=protocol,
-                 redcap2wl=redcap2wl,
+         # === 1) BASELINE: set defaults for flags and wait-time each iteration ===
+         is_first_run = False
+         extended_interval = interval
+
+         # === 2) FIGURE OUT "NOW" in hours/minutes (zero out seconds) ===
+         now_dt = datetime.now().replace(second=0, microsecond=0)
+         now_time = now_dt.time()
+         today_date = now_dt.date()
+
+         # === 3) ONLY SYNC IF WE'RE WITHIN [start_time, end_time] ===
+         if start_time <= now_time <= end_time:
+             # Check if we haven't synced today yet
+             is_first_run = (last_sync_date != today_date)
+
+             # If it really *is* the first sync of this new day (and it's not the very first run ever)
+             if is_first_run and (last_sync_date is not None):
+                 logging.info(f"First sync of the day for site {site_id} at {now_time}.")
+                 # Calculate how many seconds from "end_time of yesterday" until "start_time of today"
+                 yesterday = last_sync_date
+                 dt_end_yesterday = datetime.combine(yesterday, end_time)
+                 dt_start_today = datetime.combine(today_date, start_time)
+                 delta = dt_start_today - dt_end_yesterday
+                 # guaranteed to be positive if yesterday < today
+                 extended_interval = delta.total_seconds()
+                 logging.info(f"Using extended interval: {extended_interval}, {interval} seconds until next sync.")
+             else:
+                 # Either not first run, or last_sync_date is None (this is first-ever run)
+                 logging.info("Using default interval {interval} seconds.")
+
+             # --- CALL THE SYNC FUNCTION INSIDE A TRY/EXCEPT ---
+             logging.debug(f"Syncing REDCap to DB for site {site_id} at {now_time}.")
+             logging.debug(f"First run {is_first_run}")
+             try:
+                 logging.debug(f"last_sync_date was: {last_sync_date}")
+                 if is_first_run and (last_sync_date is not None):
+                     sync_redcap_to_db(
+                         site_id=site_id,
+                         protocol=protocol,
+                         redcap2wl=redcap2wl,
+                         interval=extended_interval,
+                     )
+                 else:
+                     sync_redcap_to_db(
+                         site_id=site_id,
+                         protocol=protocol,
+                         redcap2wl=redcap2wl,
+                         interval=interval_sync,
+                     )
+                 last_sync_date = today_date
+                 logging.debug(f"REDCap sync completed at {now_time}. Next sync atempt in {interval} seconds.")
+             except Exception as exc:
+                 logging.error(f"Error in REDCap sync: {exc}")
+         else:
+             # We're outside of operation hours. Just log once and sleep a bit.
+             logging.debug(
+                 f"Current time {now_time} is outside operation window "
+                 f"({start_time}–{end_time}). Sleeping for {interval} seconds."
              )
-         except Exception as exc:
-             logging.error(f"Error in REDCap sync: {exc}")

-         # Wait up to `interval` seconds, or break early if STOP_EVENT is set
+         # === 4) WAIT before the next iteration. We already set extended_interval above. ===
+         logging.debug(f"Sleeping for {interval} seconds before next check...")
          STOP_EVENT.wait(interval)

      logging.info("Exiting sync_redcap_to_db_repeatedly because STOP_EVENT was set.")


  if __name__ == "__main__":
-     # This block is just a demo usage. In practice, you might set STOP_EVENT
-     # from a signal handler or from another part of your code.
      try:
          sync_redcap_to_db_repeatedly(
              site_id=None,
              protocol=None,
              redcap2wl=None,
-             interval=60
+             interval=60,
+             operation_interval={"start_time": [0, 0], "end_time": [23, 59]},
          )
      except KeyboardInterrupt:
          logging.info("KeyboardInterrupt received. Stopping background sync...")
-         STOP_EVENT.set()
+         STOP_EVENT.set()
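
On the first sync of a new day, the export window is widened to cover the time since yesterday's `end_time`, so records modified while the server was outside its operation window are not missed. A self-contained illustration of that arithmetic, using hypothetical dates and a hypothetical 07:00 to 22:00 window:

```python
from datetime import datetime, date, time

# Hypothetical operation window of 07:00-22:00, with the first sync of the day at 07:00.
end_of_yesterday = datetime.combine(date(2024, 5, 1), time(22, 0))
start_of_today = datetime.combine(date(2024, 5, 2), time(7, 0))

# Seconds the first sync of the day would reach back: the whole overnight gap.
extended_interval = (start_of_today - end_of_yesterday).total_seconds()
print(extended_interval)  # 32400.0 seconds, i.e. 9 hours
```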
@@ -1,2 +0,0 @@
- DB_ECHO='0'
- DB_PATH='/Users/milton/Desktop/worklist.db'