brynq-sdk-meta4 1.0.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,10 @@
Metadata-Version: 1.0
Name: brynq_sdk_meta4
Version: 1.0.0
Summary: Meta4 wrapper from BrynQ
Home-page: UNKNOWN
Author: BrynQ
Author-email: support@brynq.com
License: BrynQ License
Description: UNKNOWN
Platform: UNKNOWN
@@ -0,0 +1,155 @@
## BrynQ Meta4 SDK

A lightweight toolkit to validate data and generate Meta4-ready CSV files. It validates pandas DataFrames against Pydantic schemas and writes `.csv` files into the `outputs/` directory; the files can then be uploaded via SFTP.

This SDK consists of two parts:
- CSV generation: validate and accumulate data in memory, then export to CSV files under `outputs/`.
- SFTP upload: send the generated CSV files to the target server directories.

### How it works

1. Instantiate the Meta4 client.
2. Use the entity managers (Employees, CostCenters, Jobs) to call `create`/`update`/`delete` with DataFrames. Each call validates the rows against the entity schema and appends the valid rows to that entity's in-memory `batch_df`. No file is opened or closed per call; data is only accumulated.
3. When ready, call the entity's `export()` to write all accumulated rows from its `batch_df` into a CSV under `outputs/`.
4. Call `meta4.upload()` (on the Meta4 client) to send the generated CSV files to the server over SFTP.

- Employee movement codes: `1=Create`, `2=Delete`, `3=Update`
- Cost center movement codes: `-36=Creation/Modification`, `-37=Deletion`
- Job movement codes: `-28=Creation`, `-29=Modification/Deletion`
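
For reference, the movement codes above can be captured as plain mappings (hypothetical constant names; the SDK assigns these values internally in `create`/`update`/`delete`, so you normally never set them yourself):

```python
# Hypothetical constants mirroring the movement codes documented above.
EMPLOYEE_MOVEMENTS = {"create": "1", "delete": "2", "update": "3"}
COST_CENTER_MOVEMENTS = {"create": "-36", "update": "-36", "delete": "-37"}
JOB_MOVEMENTS = {"create": "-28", "update": "-29", "delete": "-29"}
```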

Notes:
- Date fields expect the `DD/MM/YYYY` format. The schemas auto-parse date strings.
- DataFrame column names must match the schema field names (e.g., `job_id`, `cost_center_id`, `employee_name`). The schemas write output under their aliases, but input can use the plain field names.
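
Conceptually, the date handling behaves like the sketch below (an illustration of the `DD/MM/YYYY` convention only, not the SDK's actual validator):

```python
from datetime import datetime

def parse_ddmmyyyy(value):
    """Parse a DD/MM/YYYY string into a datetime; pass non-strings through."""
    if isinstance(value, str):
        return datetime.strptime(value, "%d/%m/%Y")
    return value

d = parse_ddmmyyyy("06/03/2025")  # datetime(2025, 3, 6)
```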

### Design: batch_df per entity

Each entity manager (`employees`, `cost_centers`, `jobs`) maintains its own `batch_df` in memory. Calls to `create`, `update`, and `delete` only validate the input data and append the valid rows to the corresponding `batch_df`. This avoids repeated reads and writes and keeps I/O minimal. The actual file write happens only when you call the entity's `export()` method, which flushes the accumulated rows into a CSV under `outputs/`.
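
The accumulate-then-export pattern can be illustrated in isolation (a simplified stand-in, not the SDK's actual classes):

```python
import pandas as pd

class BatchExporter:
    """Simplified stand-in for an entity manager: accumulate rows, write once."""

    def __init__(self, columns):
        self.batch_df = pd.DataFrame(columns=columns)

    def add(self, df):
        # In the real SDK this step also validates rows against a Pydantic schema.
        self.batch_df = pd.concat([self.batch_df, df], ignore_index=True)

    def export(self, path):
        # A single write for the whole batch: I/O happens once, not per call.
        self.batch_df.to_csv(path, index=False, sep=";")
        return path

exporter = BatchExporter(columns=["job_id", "job_name"])
exporter.add(pd.DataFrame([{"job_id": "100", "job_name": "Puesto 1"}]))
exporter.add(pd.DataFrame([{"job_id": "101", "job_name": "Puesto 2"}]))
```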

### Installation

```bash
pip install -e .
```

Or activate your virtual environment and run the same command from the repository root.

### Quick start

```python
import pandas as pd
from brynq_sdk_meta4 import Meta4

meta4 = Meta4()  # writes to outputs/

# 1) Cost Centers
df_cc = pd.DataFrame([
    {"cost_center_id": "C200", "cost_center_name": "Centro de coste"},
])
meta4.cost_centers.create(df_cc)
meta4.cost_centers.export()  # outputs/cost_center_import.csv

# 2) Jobs
df_jobs = pd.DataFrame([
    {"job_id": "100", "job_name": "Puesto 1", "start_date": "01/01/2025", "end_date": "31/12/2025", "cno_subcode": "1120"},
])
meta4.jobs.create(df_jobs)
meta4.jobs.export()  # outputs/job_import.csv

# 3) Employees
df_emp = pd.DataFrame([
    {"effective_date": "06/03/2025", "person_id": "M88019", "employee_name": "Carlos", "document_type": "1", "document_number": "16719242J"}
])
meta4.employees.create(df_emp)
meta4.employees.export()  # outputs/employee_import.csv
```

### CRUD workflow examples

```python
# Jobs UPDATE/DELETE
df_update = pd.DataFrame([
    {"job_id": "100", "job_name": "Puesto 1 actualizado"}
])
meta4.jobs.update(df_update)

df_delete = pd.DataFrame([
    {"job_id": "100"}
])
meta4.jobs.delete(df_delete)

# Export (writes all accumulated valid rows)
meta4.jobs.export()
```

```python
# Cost Centers UPDATE/DELETE
df_cc_update = pd.DataFrame([
    {"cost_center_id": "C200", "cost_center_name": "Centro actualizado"}
])
meta4.cost_centers.update(df_cc_update)

df_cc_delete = pd.DataFrame([
    {"cost_center_id": "C200"}
])
meta4.cost_centers.delete(df_cc_delete)
meta4.cost_centers.export()
```

```python
# Employees UPDATE/DELETE
df_emp_update = pd.DataFrame([
    {"effective_date": "06/03/2025", "person_id": "M88077", "ss_number": "2000"}
])
meta4.employees.update(df_emp_update)

df_emp_delete = pd.DataFrame([
    {"effective_date": "06/03/2025", "termination_date": "08/03/2025", "termination_reason": "012", "unemployment_cause": "74", "person_id": "M88087"}
])
meta4.employees.delete(df_emp_delete)
meta4.employees.export()
```

### Outputs

- `outputs/employee_import.csv`
- `outputs/cost_center_import.csv`
- `outputs/job_import.csv`

### Running the tests

See `brynq_sdk_meta4/test_meta.py` for end-to-end examples (create/update/delete/export). Run it directly:

```bash
python brynq_sdk_meta4/test_meta.py
```

### Upload via SFTP

The client can upload the generated CSVs via SFTP. By default, `employee_import.csv` goes to `ENTRADA/EMPLEADOS`, and all other files go to `SALIDA`.

```python
from brynq_sdk_meta4 import Meta4
meta4 = Meta4()

# After you have exported the CSVs
meta4.upload()  # uploads to '/' plus the expected folders

# Custom remote base path
meta4.upload(upload_path="/custom/path")
```

How it works:

- **Scan output directory**: Looks for all files ending with `.csv` under the local `outputs/` folder (or the custom `output_path` if you set it on the `Meta4` client).
- **Per-file routing**:
  - `employee_import.csv` is uploaded to `ENTRADA/EMPLEADOS` on the remote side.
  - All other CSVs are uploaded to `SALIDA`.
- **Remote path base**: The optional `upload_path` argument is prepended to the remote folders (e.g., `/custom/path/ENTRADA/EMPLEADOS/employee_import.csv`).
- **Upload behavior**: Each file is attempted independently. Failures are printed and skipped so the remaining files can continue.
- **Return value**: A string summarizing the successfully uploaded remote file paths.
- **Errors**: If no files are successfully uploaded, an exception is raised.
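
The per-file routing above boils down to a small decision. A sketch of the documented rule (not the SDK's internal code; it assumes POSIX-style remote paths):

```python
import posixpath

def route_csv(filename, upload_path="/"):
    """Return the remote path for a generated CSV, mirroring the routing rules."""
    # employee_import.csv goes to ENTRADA/EMPLEADOS; everything else to SALIDA.
    folder = "ENTRADA/EMPLEADOS" if filename == "employee_import.csv" else "SALIDA"
    return posixpath.join(upload_path, folder, filename)
```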

### Notes & tips

- Replace `NaN` values before validation if needed: `df.replace({pd.NA: None, pd.NaT: None})` (see the examples).
- `export()` writes only the accumulated valid records from the internal `batch_df`. Use `get_batch_df()` to inspect the batch and `clear_batch_df()` to reset it when starting a new batch.
@@ -0,0 +1 @@
from .meta4 import Meta4
@@ -0,0 +1,74 @@
import pandas as pd
from .schemas.cost_center import CostCenterSchema


class CostCenters:
    """
    Handles all cost center related operations in Meta4.
    """
    def __init__(self, meta4):
        self.meta4 = meta4
        # Initialize batch_df with CostCenterSchema columns
        schema_fields = CostCenterSchema.model_fields
        column_names = [field.alias or name for name, field in schema_fields.items()]
        self.batch_df = pd.DataFrame(columns=column_names)

    def create(self, df: pd.DataFrame) -> dict:
        """
        Validate cost center data and append it to the in-memory batch (creation, movement type -36).

        Args:
            df (pd.DataFrame): DataFrame containing cost center data to validate

        Returns:
            dict: Success status and message

        Raises:
            Exception: If validation fails
        """
        try:
            # Set movement_type to creation (-36) for all records
            df['movement_type'] = '-36'
            valid_df = self.meta4.validate(df=df, schema=CostCenterSchema)
            self.batch_df = pd.concat([self.batch_df, valid_df], ignore_index=True)
            return {"success": True, "message": "Cost centers created successfully"}
        except Exception as e:
            raise Exception(f"Failed to create cost centers: {e}")

    def update(self, df: pd.DataFrame) -> dict:
        """
        Validate cost center data and append it to the in-memory batch (modification, movement type -36).
        """
        try:
            # Set movement_type to modification (-36) for all records
            df['movement_type'] = '-36'
            valid_df = self.meta4.validate(df=df, schema=CostCenterSchema)
            self.batch_df = pd.concat([self.batch_df, valid_df], ignore_index=True)
            return {"success": True, "message": "Cost centers updated successfully"}
        except Exception as e:
            raise Exception(f"Failed to update cost centers: {e}")

    def delete(self, df: pd.DataFrame) -> dict:
        """
        Validate cost center data and append it to the in-memory batch (deletion, movement type -37).
        """
        try:
            # Set movement_type to deletion (-37) for all records
            df['movement_type'] = '-37'
            valid_df = self.meta4.validate(df=df, schema=CostCenterSchema)
            self.batch_df = pd.concat([self.batch_df, valid_df], ignore_index=True)
            return {"success": True, "message": "Cost centers deleted successfully"}
        except Exception as e:
            raise Exception(f"Failed to delete cost centers: {e}")

    def export(self) -> dict:
        """
        Export the accumulated cost center batch to CSV.
        """
        return self.meta4.export(df=self.batch_df, filename="cost_center_import.csv")

    def get_batch_df(self) -> pd.DataFrame:
        """
        Get the current batch DataFrame containing all validated cost center records.
        """
        return self.batch_df

    def clear_batch_df(self):
        """
        Clear the batch DataFrame, keeping the schema columns.
        """
        schema_fields = CostCenterSchema.model_fields
        column_names = [field.alias or name for name, field in schema_fields.items()]
        self.batch_df = pd.DataFrame(columns=column_names)
@@ -0,0 +1,121 @@
import pandas as pd
from .schemas.employee import EmployeeSchema


class Employees:
    """
    Handles all employee related operations in Meta4.
    """
    def __init__(self, meta4):
        self.meta4 = meta4
        # Initialize batch_df with EmployeeSchema columns
        schema_fields = EmployeeSchema.model_fields
        column_names = [field.alias or name for name, field in schema_fields.items()]
        self.batch_df = pd.DataFrame(columns=column_names)

    def create(self, df: pd.DataFrame) -> dict:
        """
        Create new employees (ALTA, movement type 1).

        Args:
            df (pd.DataFrame): DataFrame containing employee data for creation

        Returns:
            dict: Success status and message

        Raises:
            Exception: If validation fails
        """
        try:
            # Set movement_type to ALTA for all records
            df['movement_type'] = '1'
            validated_df = self.meta4.validate(df=df, schema=EmployeeSchema)

            # Append validated DataFrame to batch_df
            self.batch_df = pd.concat([self.batch_df, validated_df], ignore_index=True)

            return {"success": True, "message": "Employees created successfully"}
        except Exception as e:
            raise Exception(f"Failed to create employees: {e}")

    def update(self, df: pd.DataFrame) -> dict:
        """
        Update existing employees (MODIFICACION, movement type 3).

        Args:
            df (pd.DataFrame): DataFrame containing employee data for update

        Returns:
            dict: Success status and message

        Raises:
            Exception: If validation fails
        """
        try:
            # Set movement_type to MODIFICACION for all records
            df['movement_type'] = '3'
            validated_df = self.meta4.validate(df=df, schema=EmployeeSchema)

            # Append validated DataFrame to batch_df
            self.batch_df = pd.concat([self.batch_df, validated_df], ignore_index=True)

            return {"success": True, "message": "Employees updated successfully"}
        except Exception as e:
            raise Exception(f"Failed to update employees: {e}")

    def delete(self, df: pd.DataFrame) -> dict:
        """
        Delete/terminate employees (BAJA, movement type 2).

        Args:
            df (pd.DataFrame): DataFrame containing employee data for termination

        Returns:
            dict: Success status and message

        Raises:
            Exception: If validation fails
        """
        try:
            # Set movement_type to BAJA for all records
            df['movement_type'] = '2'
            validated_df = self.meta4.validate(df=df, schema=EmployeeSchema)

            # Append validated DataFrame to batch_df
            self.batch_df = pd.concat([self.batch_df, validated_df], ignore_index=True)

            return {"success": True, "message": "Employees deleted successfully"}
        except Exception as e:
            raise Exception(f"Failed to delete employees: {e}")

    def export(self) -> dict:
        """
        Export the accumulated employee batch to CSV.

        Returns:
            dict: Dictionary containing the path to the generated CSV file

        Raises:
            Exception: If the export fails
        """
        return self.meta4.export(df=self.batch_df, filename="employee_import.csv")

    def get_batch_df(self) -> pd.DataFrame:
        """
        Get the current batch DataFrame containing all validated employee records.

        Returns:
            pd.DataFrame: The batch DataFrame with all validated records
        """
        return self.batch_df

    def clear_batch_df(self):
        """
        Clear the batch DataFrame, keeping the schema columns.
        """
        schema_fields = EmployeeSchema.model_fields
        column_names = [field.alias or name for name, field in schema_fields.items()]
        self.batch_df = pd.DataFrame(columns=column_names)
@@ -0,0 +1,120 @@
import pandas as pd
from .schemas.job import JobSchema


class Jobs:
    """
    Handles all job related operations in Meta4.
    """
    def __init__(self, meta4):
        self.meta4 = meta4
        # Initialize batch_df with JobSchema columns
        schema_fields = JobSchema.model_fields
        column_names = [field.alias or name for name, field in schema_fields.items()]
        self.batch_df = pd.DataFrame(columns=column_names)

    def create(self, df: pd.DataFrame) -> dict:
        """
        Create new jobs (creation, movement type -28).

        Args:
            df (pd.DataFrame): DataFrame containing job data for creation

        Returns:
            dict: Success status and message

        Raises:
            Exception: If validation fails
        """
        try:
            # Set movement_type to creation (-28) for all records
            df['movement_type'] = '-28'
            validated_df = self.meta4.validate(df=df, schema=JobSchema)

            # Append validated DataFrame to batch_df
            self.batch_df = pd.concat([self.batch_df, validated_df], ignore_index=True)

            return {"success": True, "message": "Jobs created successfully"}
        except Exception as e:
            raise Exception(f"Failed to create jobs: {e}")

    def update(self, df: pd.DataFrame) -> dict:
        """
        Update existing jobs (modification, movement type -29).

        Args:
            df (pd.DataFrame): DataFrame containing job data for update

        Returns:
            dict: Success status and message

        Raises:
            Exception: If validation fails
        """
        try:
            # Set movement_type to modification (-29) for all records
            df['movement_type'] = '-29'
            validated_df = self.meta4.validate(df=df, schema=JobSchema)

            # Append validated DataFrame to batch_df
            self.batch_df = pd.concat([self.batch_df, validated_df], ignore_index=True)

            return {"success": True, "message": "Jobs updated successfully"}
        except Exception as e:
            raise Exception(f"Failed to update jobs: {e}")

    def delete(self, df: pd.DataFrame) -> dict:
        """
        Delete jobs (deletion, movement type -29).

        Args:
            df (pd.DataFrame): DataFrame containing job data for deletion

        Returns:
            dict: Success status and message

        Raises:
            Exception: If validation fails
        """
        try:
            # Set movement_type to deletion (-29) for all records
            df['movement_type'] = '-29'
            validated_df = self.meta4.validate(df=df, schema=JobSchema)

            # Append validated DataFrame to batch_df
            self.batch_df = pd.concat([self.batch_df, validated_df], ignore_index=True)

            return {"success": True, "message": "Jobs deleted successfully"}
        except Exception as e:
            raise Exception(f"Failed to delete jobs: {e}")

    def export(self) -> dict:
        """
        Export the accumulated job batch to CSV.

        Returns:
            dict: Dictionary containing the path to the generated CSV file

        Raises:
            Exception: If the export fails
        """
        return self.meta4.export(df=self.batch_df, filename="job_import.csv")

    def get_batch_df(self) -> pd.DataFrame:
        """
        Get the current batch DataFrame containing all validated job records.

        Returns:
            pd.DataFrame: The batch DataFrame with all validated records
        """
        return self.batch_df

    def clear_batch_df(self):
        """
        Clear the batch DataFrame, keeping the schema columns.
        """
        schema_fields = JobSchema.model_fields
        column_names = [field.alias or name for name, field in schema_fields.items()]
        self.batch_df = pd.DataFrame(columns=column_names)
@@ -0,0 +1,165 @@
import os
import csv
from typing import Optional, Literal
import pandas as pd
from brynq_sdk_ftp import SFTP
from brynq_sdk_brynq import BrynQ
import pydantic
from .employees import Employees
from .cost_centers import CostCenters
from .jobs import Jobs


class Meta4(BrynQ):
    """
    Meta4 HR system client for BrynQ integrations.
    Focuses on schema validation and CSV export functionality.
    """

    def __init__(self, system_type: Optional[Literal['source', 'target']] = None, output_path: str = "outputs", debug: bool = False):
        """
        Initialize the Meta4 client.
        """
        super().__init__()
        self.debug = debug
        self.output_path = output_path

        # SFTP client as a composition attribute
        self.sftp = SFTP()
        credentials = self.interfaces.credentials.get(system="meta-4", system_type=system_type)
        credentials = credentials.get('data', credentials)
        self.sftp._set_credentials(credentials)

        # Initialize entity classes
        self.employees = Employees(self)
        self.cost_centers = CostCenters(self)
        self.jobs = Jobs(self)

    def validate(self, df: pd.DataFrame, schema: type[pydantic.BaseModel]) -> pd.DataFrame:
        """
        Validate data against the given schema and return a DataFrame of the valid rows.
        """
        try:
            data_list = df.to_dict('records')

            valid_data = []
            invalid_data = []
            for data_item in data_list:
                try:
                    validated_item = schema(**data_item)
                    valid_data.append(validated_item.model_dump(by_alias=True, mode="json"))
                except Exception as validation_error:
                    invalid_data.append({
                        'data': data_item,
                        'error': str(validation_error)
                    })

            # Report how many rows failed validation
            if invalid_data:
                print(f"{len(invalid_data)} rows of {schema.__name__} data failed validation")

            return pd.DataFrame(valid_data)
        except Exception as e:
            raise Exception(f"Failed to validate data: {e}")

    def export(
        self,
        df: pd.DataFrame,
        filename: str
    ) -> dict:
        """
        Export a batch DataFrame to a CSV file under the output directory.

        Args:
            df: DataFrame containing validated data to export
            filename: Name of the CSV file to write

        Returns:
            dict: Dictionary containing the path of the written file

        Raises:
            Exception: If file writing fails
        """
        try:
            os.makedirs(self.output_path, exist_ok=True)

            # Export to CSV
            df.to_csv(
                f"{self.output_path}/{filename}",
                index=False,
                encoding="utf-8-sig",
                sep=";",
                quotechar='"',
                quoting=csv.QUOTE_MINIMAL
            )
            return {
                'filepath': f"{self.output_path}/{filename}",
            }
        except Exception as e:
            raise Exception(f"Failed to export data for {filename}: {e}")

    def upload(self, upload_path: str = "/") -> str:
        """
        Upload all CSV files from the output directory to the specified remote path.

        This method scans the output directory for CSV files and uploads each one
        to the remote server via SFTP. Files are routed to different directories
        based on their filename:
        - employee_import.csv -> ENTRADA/EMPLEADOS folder
        - Other files -> SALIDA folder

        Args:
            upload_path (str): Remote base path under which the CSV files will be uploaded.
                Must be a valid directory path on the remote server.

        Returns:
            str: Summary of the successfully uploaded remote file paths.

        Raises:
            Exception: If no files were uploaded successfully, or for any other upload-related error.
        """
        try:
            # Get list of CSV files in the output directory
            csv_files = [f for f in os.listdir(self.output_path) if f.endswith('.csv')]
            uploaded_files = []

            # Upload each CSV file
            for csv_file in csv_files:
                try:
                    local_filepath = os.path.join(self.output_path, csv_file)

                    # Determine upload directory based on filename
                    if csv_file == "employee_import.csv":
                        upload_dir = "ENTRADA/EMPLEADOS"
                    else:
                        upload_dir = "SALIDA"

                    remote_filepath = os.path.join(upload_path, upload_dir, csv_file).replace('\\', '/')

                    # Upload file using the SFTP attribute
                    self.sftp.upload_file(
                        local_filepath=local_filepath,
                        remote_filepath=remote_filepath
                    )

                    uploaded_files.append(remote_filepath)
                except Exception as file_error:
                    print(f"Failed to upload {csv_file}: {file_error}")
                    # Continue with other files even if one fails
                    continue

            if not uploaded_files:
                raise Exception("No files were successfully uploaded")

            return f"Successfully uploaded files: {', '.join(uploaded_files)}"
        except Exception as e:
            raise Exception(f"Upload failed with unexpected error: {e}")
@@ -0,0 +1,9 @@
from .employee import EmployeeSchema
from .cost_center import CostCenterSchema
from .job import JobSchema

__all__ = [
    'EmployeeSchema',
    'CostCenterSchema',
    'JobSchema'
]