medicafe 0.240419.2__py3-none-any.whl → 0.240517.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


@@ -1,18 +1,24 @@
1
- import csv
2
- import subprocess
3
1
  import os
4
2
  import re
5
- from datetime import datetime
6
3
  from collections import OrderedDict # so that the field_mapping stays in order.
7
4
  import re
8
5
  import sys
6
+ import argparse
7
+ import MediBot_Crosswalk_Library
9
8
 
10
9
  # Add parent directory of the project to the Python path
11
10
  project_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
12
11
  sys.path.append(project_dir)
13
12
 
14
- from MediLink import MediLink_ConfigLoader
15
- from MediLink import MediLink_DataMgmt
13
+ try:
14
+ from MediLink import MediLink_ConfigLoader
15
+ except ImportError:
16
+ import MediLink_ConfigLoader
17
+
18
+ try:
19
+ import MediBot_Preprocessor_lib
20
+ except ImportError:
21
+ from MediBot import MediBot_Preprocessor_lib
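The guarded imports added above let the module resolve its dependencies both when installed as part of the package and when run from a flat checkout. The same pattern, generalized as a sketch (the helper name and the fallback to the stdlib `json` module are illustrative, not part of medicafe):

```python
import importlib

def import_first(*candidates):
    # Try each candidate module name in order and return the first one
    # that imports cleanly; mirrors the try/except ImportError guards above.
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of {} could be imported".format(candidates))

# Falls back to the stdlib 'json' module when the first name is missing.
mod = import_first("nonexistent_module_xyz", "json")
```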
16
22
 
17
23
  """
18
24
  Preprocessing Enhancements
@@ -26,7 +32,7 @@ Data Integrity and Validation
26
32
  - [ ] Conduct a thorough CSV integrity check before processing to flag potential issues upfront.
27
33
  - [ ] Implement a mechanism to confirm the accuracy of entered data, potentially through a verification step or summary report.
28
34
  - [ ] Explore the possibility of integrating direct database queries for existing patient checks to streamline the process.
29
- - [ ] Automate the replacement of spaces with underscores ('_') in last names for Medicare entries, ensuring data consistency.
35
+ - [ ] Automate the replacement of spaces with underscores ('_') in last names for Medicare entries.
30
36
  - [ ] Enhance CSV integrity checks to identify and report potential issues with data format, especially concerning insurance policy numbers and special character handling.
31
37
 
32
38
  Known Issues and Bugs
@@ -34,185 +40,88 @@ Known Issues and Bugs
34
40
  - [ ] Investigate the issue with Excel modifying long policy numbers in the CSV and provide guidance or a workaround.
35
41
 
36
42
  Future Work
43
+ - [X] Check for PatientID number in La Forma Z to link back to Carol's table for mapping Medisoft insurance name to payerID and payer name and address.
44
+ - [X] Check for PatientID to Medisoft custom insurance name mapping in MAPAT.
45
+ - [X] Middle Names should all be single letters. Make sure it gets truncated before submitting.
37
46
  - [ ] Consolidate data from multiple sources (Provider_Notes.csv, Surgery_Schedule.csv, and Carols_CSV.csv) into a single table with Patient ID as the key, ensuring all data elements are aligned and duplicate entries are minimized.
38
- - [ ] Implement logic to verify and match Patient IDs across different files to ensure data integrity before consolidation.
47
+ - [ ] Implement logic to verify and match Patient IDs across different files to ensure data integrity before consolidation. (Catching errors between source data)
39
48
  - [ ] Optimize the preprocessing of surgery dates and diagnosis codes for use in patient billing and scheduling systems.
40
- - [ ] This needs to be able to take in the Surgery Schedule doc and parse out a Patient ID : Diagnosis Code table
41
- - [ ] The Minutes & Cacncellation data with logic to consolidate into one table in memory.
42
-
43
-
44
- Future Work: crosswalk_update() automates the process of updating the crosswalk.json file with new Medisoft insurance information.
45
-
46
- Development Roadmap:
47
- 1. Problem Statement:
48
- - The need to update the crosswalk.json file arises whenever a new Medisoft insurance is discovered. Automation of this process is required for accuracy and efficiency.
49
-
50
- 2. Identifying New Insurance:
51
- - New Medisoft insurances are identified based on the payer ID number.
52
- - The existence of the payer ID number is checked in the crosswalk.json under existing endpoints.
53
-
54
- 3. Adding New Insurance:
55
- - If the payer ID number does not exist in any endpoint, the tool prompts the user, assisted by endpoint APIs, to add the payer ID to a specific endpoint.
56
- - The corresponding name from Carol's spreadsheet is used as the value for the new payer ID.
57
-
58
- 4. Mapping to Main Insurance:
59
- - The tool presents the user with a list of the top 5-7 insurances, scored higher on a fuzzy search or above a certain score.
60
- - The user selects the appropriate insurance based on the identified Medisoft insurance, establishing the medisoft_insurance_to_payer_id relationship.
61
-
62
- 5. Confirming Mapping:
63
- - The tool implicitly establishes the insurance_to_endpoint_mapping based on the selected MediSoft name and endpoint.
64
- - This step is confirmed or re-evaluated to ensure accuracy.
65
-
66
- 6. User Interaction:
67
- - Unrecognized payer IDs are presented to the user.
68
- - Users can assign these payer IDs to MediSoft custom names individually.
69
- - Grouping of payer IDs may be facilitated, especially for insurances like CIGNA with multiple addresses but few payer IDs.
70
-
71
- 7. Handling Unavailable Payer IDs:
72
- - An extra endpoint named "Fax/Mail or Other" is created to handle cases where the payer ID is unavailable.
73
- - The tool retains payer IDs not existing in any endpoint, allowing users to assign them to the "Fax/Mail or Other" key in the crosswalk.
74
-
75
- 8. Implementation Considerations:
76
- - The tool should handle various scenarios, including checking for free payer IDs and determining the appropriate endpoint for assignment.
77
- - Integration of API checks to verify payer ID availability and associated information is recommended.
78
- - Validation mechanisms should be implemented to prevent incorrect mappings and ensure data integrity.
79
-
80
- NOTE: this needs to also pull from the CSV the listed address of the insurance.
81
- NOTE: La Forma Z can have the PatientID number which can link back to Carol's table which can then map the Medisoft insurance name to the payerID
82
- and payer name and address when the insurance is already selected in Medisoft so the program can learn retroactively and would know the Medisoft # from
83
- the sequencing rather than trying to feed it from the beginning. so that'll be out of ["fixedWidthSlices"]["personal_slices"]["PATID"].
84
- NOTE: Also check MAPAT because maybe the PatientID to Medisoft custom insurance name might exist there enmasse + the PatientID to PayerID link from Carol's CSV
85
- gives us the Medisoft custom insurance name to Payer ID. Then, the endpoint mapping is the clearinghouse PayerID list (API?). MAPAT has the PatientID to Medisoft
86
- insurance reference number which is the MAINS offset by 1 for the header. MAPAT has columns [159,162] for insurance and [195,200] for patient ID.
49
+ - [ ] Read Surgery Schedule doc and parse out a Patient ID : Diagnosis Code table.
50
+ - [ ] Consolidate the Minutes & Cancellation data into one table in memory.
51
+ - [ ] Dynamically list the endpoint for a new Payer ID via API or user interaction to update the crosswalk.json efficiently.
52
+ - [ ] Pull listed addresses of insurance from the CSV. (Not really necessary)
53
+ - [ ] Retroactively learn Medisoft insurance name and payerID from the provided data sources.
54
+
55
+ Development Roadmap for crosswalk_update():
56
+ - [X] Automation required for updating the crosswalk.json when new Medisoft insurance is discovered.
57
+ - [X] New Medisoft insurances are identified based on the payer ID number.
58
+ - [X] Check the existence of the payer ID in crosswalk.json under existing endpoints.
59
+ - [X] Facilitate grouping of IDs for insurances like CIGNA with multiple addresses but few payer IDs.
60
+ - [X] Retroactive learning based on selected insurances in Medisoft.
61
+ - [ ] Prompt user via endpoint APIs to add new payer ID to an endpoint if it does not exist.
62
+ - [ ] Retain payer IDs without Insurance ID for future assignments.
63
+ - [ ] Check for free payer IDs and determine the appropriate endpoint for assignment.
64
+ - [ ] Present unrecognized payer IDs with Carol's Insurance Name to users for assignment to Insurance ID. (Try API Call)
65
+ - [ ] Integrate API checks to verify payer ID availability and related information.
66
+ - [ ] Implement "Fax/Mail or Other" endpoint for unavailable payer IDs.
67
+ - [ ] Present user with a list of top insurances for selection based on fuzzy search scores.
68
+ - [ ] Establish payer ID to insurance ID relationship based on user selection.
69
+ - [ ] Implicitly establish payer ID to endpoint mapping based on user selection.
70
+ - [ ] Implement validation mechanisms to prevent incorrect mappings and ensure data integrity.
71
+ - [ ] Consider extracting insurance addresses (if necessary).
72
+ - [ ] Better handle the case where a payer_id doesn't exist (when Carol's CSV doesn't include the Payer ID).
73
+ Maybe ask the user what the payer ID is for that patient? I don't know.
74
+ - [ ] TODO (MED) Crosswalk (both initializing and updating) must run AFTER the Preprocessor for Carol's CSV because
75
+ all that data lives in memory and receives corrections or replacements before use, so the
76
+ post-correction data must be what builds and updates the crosswalk.
87
77
  """
88
-
89
78
  # Load configuration
90
79
  # Should this also take args? Path for ./MediLink needed to be added for this to resolve
91
- config, _ = MediLink_ConfigLoader.load_configuration()
92
-
93
- class InitializationError(Exception):
94
- def __init__(self, message):
95
- self.message = message
96
- super().__init__(self.message)
97
-
98
- def initialize(config):
99
- global AHK_EXECUTABLE, CSV_FILE_PATH, field_mapping, page_end_markers
100
-
101
- try:
102
- AHK_EXECUTABLE = config.get('AHK_EXECUTABLE', "")
103
- except AttributeError:
104
- raise InitializationError("Error: 'AHK_EXECUTABLE' not found in config.")
105
-
106
- try:
107
- CSV_FILE_PATH = config.get('CSV_FILE_PATH', "")
108
- except AttributeError:
109
- raise InitializationError("Error: 'CSV_FILE_PATH' not found in config.")
110
-
111
- try:
112
- field_mapping = OrderedDict(config.get('field_mapping', {}))
113
- except AttributeError:
114
- raise InitializationError("Error: 'field_mapping' not found in config.")
115
-
116
- try:
117
- page_end_markers = config.get('page_end_markers', [])
118
- except AttributeError:
119
- raise InitializationError("Error: 'page_end_markers' not found in config.")
120
-
121
-
122
- def open_csv_for_editing(csv_file_path):
123
- try:
124
- # Open the CSV file in the default program
125
- subprocess.run(['open' if os.name == 'posix' else 'start', csv_file_path], check=True, shell=True)
126
- print("After saving the revised CSV, please re-run MediBot.")
127
- except subprocess.CalledProcessError as e:
128
- print("Failed to open CSV file:", e)
129
-
130
- # Function to load and process CSV data
131
- def load_csv_data(csv_file_path):
132
- try:
133
- # Check if the file exists
134
- if not os.path.exists(csv_file_path):
135
- raise FileNotFoundError("***Error: CSV file '{}' not found.".format(csv_file_path))
136
-
137
- with open(csv_file_path, 'r') as csvfile:
138
- reader = csv.DictReader(csvfile)
139
- return [row for row in reader] # Return a list of dictionaries
140
- except FileNotFoundError as e:
141
- print(e) # Print the informative error message
142
- print("Hint: Check if CSV file is located in the expected directory or specify a different path in config file.")
143
- print("Please correct the issue and re-run MediBot.")
144
- sys.exit(1) # Halt the script
145
- except IOError as e:
146
- print("Error reading CSV file: {}. Please check the file path and permissions.".format(e))
147
- sys.exit(1) # Halt the script in case of other IO errors
80
+ config, crosswalk = MediLink_ConfigLoader.load_configuration()
148
81
 
149
82
  # CSV Preprocessor built for Carol
150
- def preprocess_csv_data(csv_data):
83
+ def preprocess_csv_data(csv_data, crosswalk):
151
84
  try:
152
- # Filter out rows without a Patient ID
153
- csv_data[:] = [row for row in csv_data if row.get('Patient ID', '').strip()]
85
+ # Add the "Ins1 Insurance ID" column to the CSV data.
86
+ # This initializes the column with empty values for each row.
87
+ MediLink_ConfigLoader.log("CSV Pre-processor: Adding 'Ins1 Insurance ID' column to the CSV data...", level="INFO")
88
+ MediBot_Preprocessor_lib.add_insurance_id_column(csv_data)
154
89
 
155
- # Remove Patients (rows) that are Primary Insurance: 'AETNA', 'AETNA MEDICARE', or 'HUMANA MED HMO'.
156
- csv_data[:] = [row for row in csv_data if row.get('Primary Insurance', '').strip() not in ['AETNA', 'AETNA MEDICARE', 'HUMANA MED HMO']]
157
-
158
- # Convert 'Surgery Date' to datetime objects for sorting
159
- for row in csv_data:
160
- try:
161
- row['Surgery Date'] = datetime.strptime(row.get('Surgery Date', ''), '%m/%d/%Y')
162
- except ValueError:
163
- # Handle or log the error if the date is invalid
164
- row['Surgery Date'] = datetime.min # Assign a minimum datetime value for sorting purposes
165
-
166
- # Initially sort the patients first by 'Surgery Date' and then by 'Patient Last' alphabetically
167
- csv_data.sort(key=lambda x: (x['Surgery Date'], x.get('Patient Last', '').strip()))
90
+ # Filter out rows without a Patient ID and rows where the Primary Insurance
91
+ # is 'AETNA', 'AETNA MEDICARE', or 'HUMANA MED HMO'.
92
+ MediLink_ConfigLoader.log("CSV Pre-processor: Filtering out missing Patient IDs and 'AETNA', 'AETNA MEDICARE', or 'HUMANA MED HMO'...", level="INFO")
93
+ MediBot_Preprocessor_lib.filter_rows(csv_data)
168
94
 
169
- # Deduplicate patient records based on Patient ID, keeping the entry with the earliest surgery date
170
- unique_patients = {}
171
- for row in csv_data:
172
- patient_id = row.get('Patient ID')
173
- if patient_id not in unique_patients or row['Surgery Date'] < unique_patients[patient_id]['Surgery Date']:
174
- unique_patients[patient_id] = row
95
+ # Convert 'Surgery Date' from string format to datetime objects for sorting purposes.
96
+ # Sort the patients by 'Surgery Date' and then by 'Patient Last' name alphabetically.
97
+ # Deduplicate patient records based on Patient ID, keeping the entry with the earliest surgery date.
98
+ # Update the CSV data to include only unique patient records.
99
+ # Re-sort the CSV data after deduplication to ensure the correct order.
100
+ MediLink_ConfigLoader.log("CSV Pre-processor: Sorting and de-duplicating patient records...", level="INFO")
101
+ MediBot_Preprocessor_lib.convert_surgery_date(csv_data)
102
+ MediBot_Preprocessor_lib.sort_and_deduplicate(csv_data)
175
103
 
176
- # Update csv_data to only include unique patient records
177
- csv_data[:] = list(unique_patients.values())
178
-
179
- # Re-sort the csv_data after deduplication to ensure correct order
180
- csv_data.sort(key=lambda x: (x['Surgery Date'], x.get('Patient Last', '').strip()))
104
+ # Convert 'Surgery Date' back to string format if needed for further processing.
105
+ # Combine 'Patient First', 'Patient Middle', and 'Patient Last' into a single 'Patient Name' field.
106
+ # Combine 'Patient Address1' and 'Patient Address2' into a single 'Patient Street' field.
107
+ MediLink_ConfigLoader.log("CSV Pre-processor: Constructing Patient Name and Address for Medisoft...", level="INFO")
108
+ MediBot_Preprocessor_lib.combine_fields(csv_data)
181
109
 
182
- # Maybe make a dataformat_library function for this? csv_data = format_preprocessor(csv_data)?
183
- for row in csv_data:
184
- # Convert 'Surgery Date' back to string format if needed for further processing (cleanup)
185
- row['Surgery Date'] = row['Surgery Date'].strftime('%m/%d/%Y')
186
-
187
- # Combine name fields
188
- first_name = row.get('Patient First', '').strip()
189
- middle_name = row.get('Patient Middle', '').strip()
190
- last_name = row.get('Patient Last', '').strip()
191
- row['Patient Name'] = "{}, {} {}".format(last_name, first_name, middle_name).strip()
192
-
193
- # Combine address fields
194
- address1 = row.get('Patient Address1', '').strip()
195
- address2 = row.get('Patient Address2', '').strip()
196
- row['Patient Street'] = "{} {}".format(address1, address2).strip()
197
-
198
- # Probably make a data_format function for this:
199
- # Define the replacements as a dictionary
200
- replacements = {
201
- '777777777': '', # Replace '777777777' with an empty string
202
- 'RAILROAD MEDICARE': 'RAILROAD', # Replace 'RAILROAD MEDICARE' with 'RAILROAD'
203
- 'AARP MEDICARE COMPLETE': 'AARP COMPLETE' # Replace 'AARP MEDICARE COMPLETE' with 'AARP COMPLETE'
204
- }
205
-
206
- # Iterate over each key-value pair in the replacements dictionary
207
- for old_value, new_value in replacements.items():
208
- # Replace the old value with the new value if it exists in the row
209
- if row.get('Patient SSN', '') == old_value:
210
- row['Patient SSN'] = new_value
211
- elif row.get('Primary Insurance', '') == old_value:
212
- row['Primary Insurance'] = new_value
213
-
110
+ # Retrieve replacement values from the crosswalk.
111
+ # Iterate over each key-value pair in the replacements dictionary and replace the old value
112
+ # with the new value in the corresponding fields of each row.
113
+ MediLink_ConfigLoader.log("CSV Pre-processor: Applying mandatory replacements per Crosswalk...", level="INFO")
114
+ MediBot_Preprocessor_lib.apply_replacements(csv_data, crosswalk)
115
+
116
+ # Update the "Ins1 Insurance ID" column based on the crosswalk and the "Ins1 Payer ID" column for each row.
117
+ # If the Payer ID is not found in the crosswalk, create a placeholder entry in the crosswalk and mark the row for review.
118
+ MediLink_ConfigLoader.log("CSV Pre-processor: Populating 'Ins1 Insurance ID' based on Crosswalk...", level="INFO")
119
+ MediBot_Preprocessor_lib.update_insurance_ids(csv_data, crosswalk)
120
+
214
121
  except Exception as e:
215
- print("An error occurred while pre-processing CSV data. Please repair the CSV directly and try again:", e)
122
+ message = "An error occurred while pre-processing CSV data. Please repair the CSV directly and try again: {}".format(e)
123
+ MediLink_ConfigLoader.log(message, level="ERROR")
124
+ print(message)
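The sort/de-duplication step that moved into `MediBot_Preprocessor_lib` corresponds to the removed inline logic above; a standalone sketch of that logic (the library's actual function signatures may differ):

```python
from datetime import datetime

def sort_and_deduplicate(rows):
    # Parse 'Surgery Date' into datetime objects so rows sort chronologically.
    for row in rows:
        try:
            row['Surgery Date'] = datetime.strptime(row.get('Surgery Date', ''), '%m/%d/%Y')
        except ValueError:
            row['Surgery Date'] = datetime.min  # unparseable dates sort first

    # Keep one row per Patient ID: the one with the earliest surgery date.
    earliest = {}
    for row in rows:
        pid = row.get('Patient ID')
        if pid not in earliest or row['Surgery Date'] < earliest[pid]['Surgery Date']:
            earliest[pid] = row

    # Re-sort in place by surgery date, then by patient last name.
    rows[:] = sorted(earliest.values(),
                     key=lambda r: (r['Surgery Date'], r.get('Patient Last', '').strip()))
```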
216
125
 
217
126
  def check_existing_patients(selected_patient_ids, MAPAT_MED_PATH):
218
127
  existing_patients = []
@@ -245,6 +154,9 @@ def intake_scan(csv_headers, field_mapping):
245
154
  missing_fields_warnings = []
246
155
  required_fields = config["required_fields"]
247
156
 
157
+ # MediLink_ConfigLoader.log("Intake Scan - Field Mapping: {}".format(field_mapping))
158
+ # MediLink_ConfigLoader.log("Intake Scan - CSV Headers: {}".format(csv_headers))
159
+
248
160
  # Iterate over the Medisoft fields defined in field_mapping
249
161
  for medisoft_field in field_mapping.keys():
250
162
  for pattern in field_mapping[medisoft_field]:
@@ -252,32 +164,84 @@ def intake_scan(csv_headers, field_mapping):
252
164
  if matched_headers:
253
165
  # Assuming the first matched header is the desired one
254
166
  identified_fields[matched_headers[0]] = medisoft_field
167
+ # MediLink_ConfigLoader.log("Found Header: {}".format(identified_fields[matched_headers[0]]))
255
168
  break
256
169
  else:
257
170
  # Check if the missing field is a required field before appending the warning
258
171
  if medisoft_field in required_fields:
259
172
  missing_fields_warnings.append("WARNING: No matching CSV header found for Medisoft field '{0}'".format(medisoft_field))
260
173
 
261
- #-----------------------
262
- # CSV Integrity Check
263
- #-----------------------
264
-
265
- # This section needs to be revamped further so that it can interpret the information from here and decide
266
- # if it's significant or not.
267
- # e.g. If the 'Street' value:key is 'Address', then any warnings about City, State, Zip can be ignored.
268
- # Insurance Policy Numbers should be all alphanumeric with no other characters.
269
- # Make sure that the name field has at least one name under it (basically check for a blank or
270
- # partially blank csv with just a header)
271
-
174
+ # CSV Integrity Checks
175
+ # Check for blank or partially blank CSV
176
+ if len(csv_headers) == 0 or all(header == "" for header in csv_headers):
177
+ missing_fields_warnings.append("WARNING: The CSV appears to be blank or contains only headers without data.")
178
+
272
179
  # Display the identified fields and missing fields warnings
273
- #print("The following Medisoft fields have been identified in the CSV:\n")
180
+ #MediLink_ConfigLoader.log("The following Medisoft fields have been identified in the CSV:")
274
181
  #for header, medisoft_field in identified_fields.items():
275
- # print("{0} (CSV header: {1})".format(medisoft_field, header))
182
+ # MediLink_ConfigLoader.log("{} (CSV header: {})".format(medisoft_field, header))
183
+
184
+ # This section interprets the information from identified_fields and decides if there are significant issues.
185
+ # e.g. If the 'Street' value:key is 'Address', then any warnings about City, State, Zip can be ignored.
186
+ for header, field in identified_fields.items():
187
+ # Insurance Policy Numbers should be all alphanumeric with no other characters.
188
+ if 'Insurance Policy Number' in field:
189
+ policy_number = identified_fields.get(header)
190
+ if not bool(re.match("^[a-zA-Z0-9]*$", policy_number)):
191
+ missing_fields_warnings.append("WARNING: Insurance Policy Number '{}' contains invalid characters.".format(policy_number))
192
+ # Additional checks can be added as needed for other fields
193
+
194
+ if missing_fields_warnings:
195
+ MediLink_ConfigLoader.log("\nSome required fields could not be matched:")
196
+ for warning in missing_fields_warnings:
197
+ MediLink_ConfigLoader.log(warning)
198
+
199
+ return identified_fields
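The alphanumeric rule used in the integrity check above, applied to an actual policy value rather than a header mapping, might look like the following (the helper name is illustrative; note `+` instead of `*` so an empty string is also rejected):

```python
import re

def is_valid_policy_number(value):
    # Strictly alphanumeric and non-empty; the '+' quantifier (rather than
    # the '*' used above) additionally rejects empty strings.
    return bool(re.match(r"^[a-zA-Z0-9]+$", value))

print(is_valid_policy_number("AB12345"))   # alphanumeric value
print(is_valid_policy_number("AB-12345"))  # hyphen should be rejected
```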
200
+
201
+ def main():
202
+ parser = argparse.ArgumentParser(description='Run MediLink Data Management Tasks')
203
+ parser.add_argument('--update-crosswalk', action='store_true',
204
+ help='Run the crosswalk update independently')
205
+ parser.add_argument('--init-crosswalk', action='store_true',
206
+ help='Initialize the crosswalk using historical data from MAPAT and Carols CSV')
207
+ parser.add_argument('--load-csv', action='store_true',
208
+ help='Load and process CSV data')
209
+ parser.add_argument('--preprocess-csv', action='store_true',
210
+ help='Preprocess CSV data based on specific rules')
211
+ parser.add_argument('--open-csv', action='store_true',
212
+ help='Open CSV for manual editing')
213
+
214
+ args = parser.parse_args()
215
+
216
+ config, crosswalk = MediLink_ConfigLoader.load_configuration()
217
+
218
+ # If no arguments provided, print usage instructions
219
+ if not any(vars(args).values()):
220
+ parser.print_help()
221
+ return
222
+
223
+ if args.update_crosswalk:
224
+ print("Updating the crosswalk...")
225
+ MediBot_Crosswalk_Library.crosswalk_update(config, crosswalk)
226
+
227
+ if args.init_crosswalk:
228
+ MediBot_Crosswalk_Library.initialize_crosswalk_from_mapat()
229
+
230
+ if args.load_csv:
231
+ print("Loading CSV data...")
232
+ csv_data = MediBot_Preprocessor_lib.load_csv_data(config['CSV_FILE_PATH'])
233
+ print("Loaded {} records from the CSV.".format(len(csv_data)))
234
+
235
+ if args.preprocess_csv:
236
+ if 'csv_data' in locals():
237
+ print("Preprocessing CSV data...")
238
+ preprocess_csv_data(csv_data, crosswalk)
239
+ else:
240
+ print("Error: CSV data needs to be loaded before preprocessing. Use --load-csv.")
276
241
 
277
- #if missing_fields_warnings:
278
- # print("\nSome required fields could not be matched:")
279
- # for warning in missing_fields_warnings:
280
- # print(warning)
242
+ if args.open_csv:
243
+ print("Opening CSV for editing...")
244
+ MediBot_Preprocessor_lib.open_csv_for_editing(config['CSV_FILE_PATH'])
281
245
 
282
- #print("Debug - Identified fields mapping (intake scan):", identified_fields)
283
- return identified_fields
246
+ if __name__ == '__main__':
247
+ main()
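The new `main()` exposes five `store_true` flags, and preprocessing requires loaded data, so `--load-csv` and `--preprocess-csv` are meant to travel together. A minimal stand-alone reproduction of just the flag parsing (handlers stubbed out; help strings trimmed):

```python
import argparse

# Rebuild the flag definitions from main(); argparse converts each
# '--flag-name' into a 'flag_name' attribute on the parsed namespace.
parser = argparse.ArgumentParser(description='Run MediLink Data Management Tasks')
for flag in ('--update-crosswalk', '--init-crosswalk',
             '--load-csv', '--preprocess-csv', '--open-csv'):
    parser.add_argument(flag, action='store_true')

# Preprocessing depends on loaded CSV data, so pass both flags together.
args = parser.parse_args(['--load-csv', '--preprocess-csv'])
```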