anaplan-orm 0.1.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- anaplan_orm-0.1.0/LICENSE +21 -0
- anaplan_orm-0.1.0/PKG-INFO +441 -0
- anaplan_orm-0.1.0/README.md +418 -0
- anaplan_orm-0.1.0/pyproject.toml +61 -0
- anaplan_orm-0.1.0/src/anaplan_orm/__init__.py +0 -0
- anaplan_orm-0.1.0/src/anaplan_orm/authenticator.py +149 -0
- anaplan_orm-0.1.0/src/anaplan_orm/client.py +620 -0
- anaplan_orm-0.1.0/src/anaplan_orm/exceptions.py +10 -0
- anaplan_orm-0.1.0/src/anaplan_orm/logger.py +29 -0
- anaplan_orm-0.1.0/src/anaplan_orm/models.py +56 -0
- anaplan_orm-0.1.0/src/anaplan_orm/parsers.py +260 -0
- anaplan_orm-0.1.0/src/anaplan_orm/utils.py +76 -0
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Valerio

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,441 @@
Metadata-Version: 2.3
Name: anaplan-orm
Version: 0.1.0
Summary: An Object-Relational Mapper (ORM) for the Anaplan API using Pydantic.
License: MIT
Keywords: anaplan,orm,pydantic,data-engineering,api
Author: Valerio DAlessio
Author-email: valdal14@gmail.com
Requires-Python: >=3.10
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Dist: cryptography (>=46.0.5,<47.0.0)
Requires-Dist: httpx (>=0.28.1,<0.29.0)
Requires-Dist: jmespath (>=1.1.0,<2.0.0)
Requires-Dist: lxml (>=6.0.2,<7.0.0)
Requires-Dist: pydantic (>=2.12.5,<3.0.0)
Project-URL: Repository, https://github.com/valdal14/anaplan-orm
Description-Content-Type: text/markdown

# anaplan-orm




A lightweight Python 3 library that abstracts the Anaplan API into an Object-Relational Mapper (ORM).

## Current Status
🚀 **Active Beta** 🚀
Core data transformation, the parsing engine, and the Anaplan chunked API client are complete.

## 🌟 Features

* **Pydantic Data Ingestion:** Validates and maps Python objects to Anaplan models effortlessly.
* **Enterprise Security:** Supports standard Basic Authentication and Anaplan's proprietary RSA-SHA512 certificate-based authentication.
* **Resilient Networking:** Built-in exponential backoff, automated retries against transient network failures, and mid-flight authentication token refreshing for massive, long-running pipelines.
* **Massive Payloads:** Automatically handles chunked file uploads for multi-megabyte/gigabyte datasets without memory crashes.
* **Smart Polling:** Asynchronous process execution with configurable, patient polling for long-running database transactions.
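
As a rough illustration of the exponential-backoff idea mentioned above, here is a minimal stdlib sketch (a hypothetical helper, not the library's internal implementation):

```python
import time


def with_retries(request, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry a callable, doubling the delay between failed attempts."""
    for attempt in range(max_attempts):
        try:
            return request()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, 8s, ...
```

Injecting `sleep` keeps the helper testable; production code would simply use the default `time.sleep`.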
---

## 🔐 Authentication

`anaplan-orm` uses a decoupled authentication strategy, allowing you to easily swap between development and production security standards.

### 1. Basic Authentication
Ideal for development and sandbox testing.

```python
from anaplan_orm.client import AnaplanClient
from anaplan_orm.authenticator import BasicAuthenticator

auth = BasicAuthenticator("your_email@company.com", "your_password")
client = AnaplanClient(authenticator=auth)
```

### 2. Certificate-Based Authentication (Enterprise Standard)

For production environments, Anaplan requires a custom RSA-SHA512 signature. The `CertificateAuthenticator` handles this cryptographic handshake automatically.

Note: the library expects a `.pem` file containing both your private key and public certificate. If your enterprise issues a `.p12` keystore, you can extract it using your terminal:

```bash
openssl pkcs12 -in keystore.p12 -out certificate.pem
```

```python
from anaplan_orm.client import AnaplanClient
from anaplan_orm.authenticator import CertificateAuthenticator

# 1. Initialize the Certificate Authenticator
auth = CertificateAuthenticator(
    cert_path="path/to/your/certificate.pem",
    # Omit if your private key is unencrypted
    cert_password="your_secure_password",
    # Set to False if you need to bypass a corporate proxy
    verify_ssl=True
)

# 2. Inject it into the Anaplan Client
client = AnaplanClient(authenticator=auth)

# 3. Execute a request
status = client.ping()
```
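
For context, the handshake boils down to signing a batch of random bytes with the certificate's private key and sending both the bytes and the signature alongside the base64-encoded certificate. The sketch below illustrates only the signing step, using a throwaway key instead of a real certificate; the field names and the 100-byte nonce reflect Anaplan's documented scheme as we understand it, so treat the details as assumptions and let `CertificateAuthenticator` do this for you in practice.

```python
import base64
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Throwaway key standing in for the certificate's real private key
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

nonce = os.urandom(100)  # random bytes to be signed
signature = private_key.sign(nonce, padding.PKCS1v15(), hashes.SHA512())

# Shape of the authentication request body (assumed field names)
body = {
    "encodedData": base64.b64encode(nonce).decode(),
    "encodedSignedData": base64.b64encode(signature).decode(),
}
```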
---

## Quick Start: XML Parsing & Data Upload
`anaplan-orm` is designed to take raw XML strings (e.g., from MuleSoft or data pipeline payloads), validate them into Python objects, and stream them directly into Anaplan.

### 1. Define Your Model
Map your Anaplan target columns to Python using Pydantic fields. The `alias` parameter bridges the gap between external uppercase XML tags and internal Python `snake_case` variables.

```python
from pydantic import Field
from anaplan_orm.models import AnaplanModel

class Developer(AnaplanModel):
    dev_id: int = Field(alias="DEV_ID")
    dev_name: str = Field(alias="DEV_NAME")
    dev_age: int = Field(alias="DEV_AGE")
    dev_location: str = Field(alias="DEV_LOCATION")
```

### 2. Parse, Serialize, and Upload
Use the `XMLStringParser` to ingest your XML string payload, then use the `AnaplanClient` to stream the chunked data to Anaplan.

```python
from anaplan_orm.parsers import XMLStringParser
from anaplan_orm.client import AnaplanClient
from anaplan_orm.authenticator import BasicAuthenticator

# 1. Your incoming XML string payload
xml_string = """
<AnaplanExport>
    <Row>
        <DEV_ID>1001</DEV_ID>
        <DEV_NAME>Ada Lovelace</DEV_NAME>
        <DEV_AGE>36</DEV_AGE>
        <DEV_LOCATION>London</DEV_LOCATION>
    </Row>
</AnaplanExport>
"""

def run_pipeline():
    # 2. Parse and validate the data using the ORM
    parser = XMLStringParser()
    developers = Developer.from_payload(payload=xml_string, parser=parser)

    # 3. Serialize to Anaplan-ready CSV (using a pipe separator)
    csv_data = Developer.to_csv(developers, separator="|")

    # 4. Authenticate with Anaplan
    auth = BasicAuthenticator(
        email="ANAPLAN_EMAIL",
        pwd="ANAPLAN_PASSWORD"
    )

    client = AnaplanClient(authenticator=auth)

    # 5. Stream the file chunks safely
    client.upload_file_chunked(
        workspace_id="YOUR_WORKSPACE_ID",
        model_id="YOUR_MODEL_ID",
        file_id="YOUR_FILE_ID",
        csv_data=csv_data,
        chunk_size_mb=10
    )

    # 6. Execute the Import Process
    task_id = client.execute_process(
        workspace_id="YOUR_WORKSPACE_ID",
        model_id="YOUR_MODEL_ID",
        process_id="YOUR_PROCESS_ID"
    )

    # 7. Actively poll the database for success/failure
    status = client.wait_for_process_completion(
        workspace_id="YOUR_WORKSPACE_ID",
        model_id="YOUR_MODEL_ID",
        process_id="YOUR_PROCESS_ID",
        task_id=task_id
    )

if __name__ == "__main__":
    run_pipeline()
```
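
To give a feel for what the chunked upload involves, here is a hypothetical sketch of the slicing step: encode the CSV once, then cut the bytes into fixed-size chunks that can be sent sequentially. (`iter_chunks` is illustrative, not the library's actual helper.)

```python
def iter_chunks(data: str, chunk_size_mb: int = 10):
    """Yield successive fixed-size byte chunks of an encoded CSV payload."""
    raw = data.encode("utf-8")
    size = chunk_size_mb * 1024 * 1024
    for start in range(0, len(raw), size):
        yield raw[start:start + size]
```

Each yielded chunk would map to one PUT request; the final chunk is simply whatever remains, so no padding is needed.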
---

### Advanced: Deeply Nested XML Extraction
If your XML payload is deeply nested or relies heavily on attributes (common with SOAP APIs), you can use Pydantic's `json_schema_extra` to define native XPath 1.0 mappings. The parser will automatically evaluate the XPath, extract both text nodes and attributes, and map them to your Anaplan aliases.

```python
from pydantic import Field
from anaplan_orm.models import AnaplanModel

class NestedXMLDeveloper(AnaplanModel):
    # Use '@' to extract attributes
    # Use '/' to navigate nested text nodes
    dev_id: int = Field(
        alias="DEV_ID",
        json_schema_extra={"path": "./EmployeeDetails/@empId"}
    )
    dev_name: str = Field(
        alias="DEV_NAME",
        json_schema_extra={"path": "./EmployeeDetails/Profile/FullName"}
    )
```

To extract the repeating rows from the document, simply pass the base XPath expression to the parser using the `data_key` argument:

```python
from anaplan_orm.parsers import XMLStringParser

# The parser will find every <Employee> node and apply your XPath mappings to it
developers = NestedXMLDeveloper.from_payload(
    payload=raw_xml_string,
    parser=XMLStringParser(),
    data_key=".//Employee"
)
```

Below is the sample XML document used in the example above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<EnterpriseExport status="success" timestamp="2026-03-16T08:00:00Z">
    <EmployeeRecords>
        <Employee status="active">
            <EmployeeDetails empId="1001">
                <Profile>
                    <FullName>Ada Lovelace</FullName>
                    <Age>36</Age>
                </Profile>
            </EmployeeDetails>
            <Office>
                <City>London</City>
                <Region>EMEA</Region>
            </Office>
        </Employee>
        <Employee status="active">
            <EmployeeDetails empId="1002">
                <Profile>
                    <FullName>Grace Hopper</FullName>
                    <Age>85</Age>
                </Profile>
            </EmployeeDetails>
            <Office>
                <City>New York</City>
                <Region>NAMER</Region>
            </Office>
        </Employee>
    </EmployeeRecords>
</EnterpriseExport>
```
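
To see concretely what those mappings resolve to, here is a stdlib-only sketch of the same extraction using `xml.etree.ElementTree`. The library itself uses lxml with full XPath 1.0 support; this just illustrates the `data_key` iteration plus the attribute and text-node lookups.

```python
import xml.etree.ElementTree as ET

sample = """<EnterpriseExport>
  <EmployeeRecords>
    <Employee status="active">
      <EmployeeDetails empId="1001">
        <Profile><FullName>Ada Lovelace</FullName></Profile>
      </EmployeeDetails>
    </Employee>
  </EmployeeRecords>
</EnterpriseExport>"""

root = ET.fromstring(sample)
rows = []
for emp in root.findall(".//Employee"):        # equivalent of data_key
    details = emp.find("./EmployeeDetails")
    rows.append({
        "DEV_ID": int(details.get("empId")),   # attribute lookup, i.e. @empId
        "DEV_NAME": details.findtext("./Profile/FullName"),  # nested text node
    })
# rows == [{"DEV_ID": 1001, "DEV_NAME": "Ada Lovelace"}]
```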
---

## Quick Start: JSON Parsing (REST APIs & Files)
For modern web integrations or local file processing, `anaplan-orm` provides a native `JSONParser`. It gracefully handles both flat JSON arrays and nested API responses by allowing you to pass targeted extraction keys directly through your Pydantic model.

### 1. Define Your Model
```python
from pydantic import Field
from anaplan_orm.models import AnaplanModel

class Employee(AnaplanModel):
    emp_id: int = Field(alias="id")
    email: str = Field(alias="emailAddress")
    department: str = Field(alias="dept")
```

### 2. Parse and Upload
If your JSON data is nested inside a metadata wrapper, e.g.:

```json
{"status": "success", "data": [...] }
```

simply pass the `data_key` argument to the `from_payload` method. The ORM will safely drill down, extract the records, and inflate your models.

```python
import httpx
from anaplan_orm.parsers import JSONParser

# 1. Fetch JSON from an external REST API (or read a local .json file)
api_response = httpx.get("https://api.mycompany.com/v1/employees").text

# 2. Parse the JSON string (drilling into the "data" array)
parser = JSONParser()
employees = Employee.from_payload(
    payload=api_response,
    parser=parser,
    # The ORM passes this key directly to the parser.
    data_key="data"
)

# 3. Convert to Anaplan CSV and upload (client created as in the Authentication section)
csv_data = Employee.to_csv(employees)
client.upload_file_chunked(WORKSPACE_ID, MODEL_ID, FILE_ID, csv_data)
```

---

### Advanced: Deeply Nested JSON Extraction
If your API returns a deeply nested JSON response, you do not need to write custom flattening loops. Simply use Pydantic's `json_schema_extra` to define a [JMESPath](https://jmespath.org/) mapping. The ORM will automatically traverse the JSON tree, extract the value, and assign it to the correct Anaplan column (`alias`).

```python
from pydantic import Field
from anaplan_orm.models import AnaplanModel

class NestedEmployee(AnaplanModel):
    # 'alias' is the Anaplan CSV column's name.
    # 'path' is where to find the data in the JSON.
    emp_id: int = Field(
        alias="DEV_ID",
        json_schema_extra={"path": "employeeDetails.empId"}
    )
    city: str = Field(
        alias="LOCATION",
        json_schema_extra={"path": "office.address.city"}
    )
```
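
For the simple dotted paths above, a JMESPath expression behaves like a chain of dictionary lookups. Here is a stdlib-only sketch of that traversal; the real evaluation uses the `jmespath` library, which also supports filters, slices, and projections far beyond this:

```python
from functools import reduce


def resolve_path(document: dict, path: str):
    """Walk a dotted path (e.g. 'office.address.city') through nested dicts."""
    return reduce(lambda node, key: node[key], path.split("."), document)


record = {
    "employeeDetails": {"empId": 1001},
    "office": {"address": {"city": "London"}},
}

resolve_path(record, "employeeDetails.empId")   # -> 1001
resolve_path(record, "office.address.city")     # -> "London"
```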

---

## Quick Start: SQL Databases (Relational Data to Anaplan)
If your source data lives in a relational database (Snowflake, PostgreSQL, SQL Server), `anaplan-orm` provides an `SQLCursorParser`. This allows you to stream live database queries directly into Pydantic models without ever saving a CSV to disk.

### 1. Execute your query and pass the cursor
The parser accepts any standard DB-API 2.0 cursor object, dynamically extracts the column headers, and maps them to your model `aliases`.

```python
import psycopg2  # Or sqlite3, snowflake.connector, etc.
from anaplan_orm.parsers import SQLCursorParser

# 1. Connect to your database and execute a query
conn = psycopg2.connect("dbname=enterprise user=admin password=secret")
cursor = conn.cursor()
cursor.execute("SELECT emp_id AS id, email_address, department FROM employees WHERE active = true")

# 2. Pass the active cursor directly into the ORM
employees = Employee.from_payload(
    payload=cursor,
    parser=SQLCursorParser()
)

# 3. Convert to Anaplan CSV and Upload
csv_data = Employee.to_csv(employees)
client.upload_file_chunked(WORKSPACE_ID, MODEL_ID, FILE_ID, csv_data)

conn.close()
```
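
The DB-API 2.0 detail that makes this possible is `cursor.description`, which exposes the column names of the last executed query, so each row can be zipped into a dictionary before model validation. A self-contained illustration with stdlib `sqlite3` (not the parser's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE employees (id INTEGER, email_address TEXT, department TEXT)")
cursor.execute("INSERT INTO employees VALUES (7, 'ada@corp.com', 'R&D')")
conn.commit()

cursor.execute("SELECT id, email_address, department FROM employees")

# DB-API 2.0: description[i][0] is the i-th column's name
headers = [col[0] for col in cursor.description]
rows = [dict(zip(headers, row)) for row in cursor.fetchall()]
# rows == [{"id": 7, "email_address": "ada@corp.com", "department": "R&D"}]
conn.close()
```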

---

## ⬇️ Extracting Data (Outbound Pipeline)

`anaplan-orm` supports two distinct architectural patterns for extracting data, depending on your pipeline's requirements.

### Option A: Pure Extract & Load (ELT)
If your goal is simply to move data from Anaplan to a data lake (like AWS S3) for later ingestion, you can stream the raw CSV text directly.

```python
# 1. Trigger the export and wait for completion
task_id = client.execute_export(WORKSPACE_ID, MODEL_ID, EXPORT_ID)
client.wait_for_export_completion(WORKSPACE_ID, MODEL_ID, EXPORT_ID, task_id)

# 2. Download the raw CSV string in chunks
raw_csv_string = client.download_file_chunked(WORKSPACE_ID, MODEL_ID, EXPORT_ID)

# 3. Write directly to a file (or upload to AWS S3 using boto3)
with open("anaplan_export.csv", "w", encoding="utf-8") as f:
    f.write(raw_csv_string)
```

### Option B: In-Flight Processing (The True ORM)

If you need to validate data types, perform cross-column mathematical transformations, or mask sensitive PII before routing the data to another microservice, you can seamlessly inflate the CSV into strongly-typed Pydantic models.

```python
from pydantic import BaseModel, Field, ValidationError
from anaplan_orm.parsers import CSVStringParser

# Define your data contract
class FinancialRow(BaseModel):
    cost_center: str = Field(alias="Cost Center")
    outlook_eur: float = Field(alias="Outlook in Local Currency")

# 1. Parse the raw CSV string into a list of dictionaries
parsed_rows = CSVStringParser.parse(raw_csv_string)

# 2. Inflate and validate the Pydantic models
valid_models = []
for row in parsed_rows:
    try:
        valid_models.append(FinancialRow(**row))
    except ValidationError as e:
        print(f"Quarantined invalid row: {e}")

# You now have a list of type-safe Python objects ready for transformation
for model in valid_models:
    model.outlook_eur = model.outlook_eur * 1.05
```
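
Conceptually, the parse step behaves like stdlib `csv.DictReader` over an in-memory string: the header row becomes the dictionary keys for every subsequent row. A minimal sketch of that idea, assuming a pipe-separated export (`CSVStringParser` is the library's own implementation; the sample string here is hypothetical):

```python
import csv
import io

raw_csv_string = "Cost Center|Outlook in Local Currency\nCC-100|2500.75\n"

reader = csv.DictReader(io.StringIO(raw_csv_string), delimiter="|")
parsed_rows = [dict(row) for row in reader]
# parsed_rows == [{"Cost Center": "CC-100", "Outlook in Local Currency": "2500.75"}]
```

Note that every value arrives as a string; the Pydantic model above is what coerces `outlook_eur` to `float` during validation.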

## 🤝 Contributing to anaplan-orm

We welcome contributions! To maintain enterprise-grade code quality, this project uses strict formatting, linting, and testing pipelines.

### Prerequisites
* **Python 3.10+**
* **Poetry** (dependency management)

### 1. Local Setup
Clone the repository and install all dependencies (including the `dev` group tools like Pytest and Ruff):

```bash
git clone https://github.com/valdal14/anaplan-orm.git
cd anaplan-orm
poetry install
```

### 2. Formatting & Linting (Ruff)
This project enforces strict PEP 8 compliance using **Ruff**. Before submitting any code, you must format and lint your changes. If you skip these commands, the GitHub Actions CI pipeline will fail your Pull Request.

Run the formatter to automatically fix spacing, quotes, and line breaks:

```bash
poetry run python3 -m ruff format .
```

Run the linter to catch unused imports, bad variables, and logical style issues:

```bash
poetry run python3 -m ruff check --fix .
```

(Tip: I highly recommend installing the Ruff extension in your IDE and setting it to "Format on Save".)

### 3. Running Tests (Pytest)

Every feature and bug fix must be covered by unit tests. The test suite heavily utilizes Python's `unittest.mock` to simulate Anaplan network responses without requiring live API credentials.

Run the entire test suite:

```bash
poetry run python3 -m pytest
```

### 4. The Pull Request Workflow

1. Create a feature branch (e.g., `feature/ORM-123-new-parser`).
2. Write your code and your tests.
3. Run Ruff (format and check) and Pytest.
4. Push your branch to GitHub and open a Pull Request against `main`.
5. Wait for the automated CI pipeline to verify your build before merging.