pandoraspec 0.2.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,182 @@
1
+ Metadata-Version: 2.4
2
+ Name: pandoraspec
3
+ Version: 0.2.0
4
+ Summary: DORA Compliance Auditor for OpenAPI Specs
5
+ Author-email: Ulises Merlan <ulimerlan@gmail.com>
6
+ License: MIT
7
+ Requires-Python: >=3.9
8
+ Description-Content-Type: text/markdown
9
+ Requires-Dist: schemathesis==4.9.1
10
+ Requires-Dist: typer[all]
11
+ Requires-Dist: rich
12
+ Requires-Dist: weasyprint
13
+ Requires-Dist: jinja2
14
+ Requires-Dist: requests
15
+
16
+ # PanDoraSpec
17
+
18
+ **The Open DORA Compliance Engine for OpenAPI Specs.**
19
+
20
+ PanDoraSpec is a CLI tool that performs deep technical due diligence on APIs to verify compliance with **DORA (Digital Operational Resilience Act)** requirements. It compares OpenAPI/Swagger specifications against the live implementation to detect schema drift, resilience gaps, and security issues.
21
+
22
+ ---
23
+
24
+ ## 📦 Installation
25
+
26
+ ```bash
27
+ pip install pandoraspec
28
+ ```
29
+
30
+ ### System Requirements
31
+ The PDF report generation requires `weasyprint`, which depends on **Pango**.
32
+
33
+ **macOS:**
34
+ ```bash
35
+ brew install pango
36
+ ```
37
+
38
+ **Debian / Ubuntu:**
39
+ ```bash
40
+ sudo apt-get install libpango-1.0-0 libpangoft2-1.0-0
41
+ ```
42
+
43
+ ## 🛠️ Development Setup
44
+
45
+ To run the CLI locally without reinstalling after every change:
46
+
47
+ 1. **Clone & CD**:
48
+ ```bash
49
+ git clone ...
50
+ cd pandoraspec
51
+ ```
52
+
53
+ 2. **Create & Activate Virtual Environment**:
54
+ It's recommended to use a virtual environment to keep dependencies isolated.
55
+ ```bash
56
+ python3 -m venv venv
57
+ source venv/bin/activate # On Windows: venv\Scripts\activate
58
+ ```
59
+
60
+ 3. **Editable Install**:
61
+ ```bash
62
+ pip install -e .
63
+ ```
64
+ This links the `pandoraspec` command directly to your source code. Any changes you make will be reflected immediately.
65
+
66
+ ## 🚀 Usage
67
+
68
+ Run the audit directly from your terminal.
69
+
70
+ ### Basic Scan
71
+ ```bash
72
+ pandoraspec https://petstore.swagger.io/v2/swagger.json
73
+ ```
74
+
75
+ ### With Options
76
+ ```bash
77
+ pandoraspec https://api.example.com/spec.json --vendor "Stripe" --key "sk_live_..."
78
+ ```
79
+
80
+ ### Local File
81
+ ```bash
82
+ pandoraspec ./openapi.yaml
83
+ ```
84
+
85
+ ---
86
+
87
+ ## 🏎️ Zero-Config Testing (DORA Compliance)
88
+
89
+ For standard **DORA compliance**, you simply need to verify that your API implementation matches its specification. **No configuration is required.**
90
+
91
+ ```bash
92
+ pandoraspec https://petstore.swagger.io/v2/swagger.json
93
+ ```
94
+
95
+ This runs a **fuzzing** audit where random data is generated based on your schema types (e.g., sending random integers for IDs).
96
+ - **Value:** This is sufficient to prove that your API correctly handles unexpected inputs and adheres to the basic contract (e.g., returning 400 Bad Request instead of 500 Server Error).
97
+ - **Limitation:** Detailed business logic requiring valid IDs (e.g., `GET /user/{id}` where `{id}` must exist) may return `404 Not Found`. This is acceptable for a compliance scan but may not fully exercise deeper code paths.
98
+
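+ The pass/fail rule applied to each fuzzed response can be sketched as follows (a minimal illustration of the contract described above, not the engine's actual code):
+
+ ```python
+ # Sketch of the rule the zero-config fuzz run applies per response:
+ # unexpected input must yield a 4xx client error (contract upheld),
+ # while any 5xx counts as a resilience failure. Illustrative only.
+ def classify(status_code: int) -> str:
+     if 500 <= status_code < 600:
+         return "FAIL"  # server crashed on fuzzed input
+     return "PASS"      # 4xx rejected gracefully; 2xx/3xx input happened to be valid
+
+ print([classify(c) for c in (400, 404, 500, 200)])
+ # → ['PASS', 'PASS', 'FAIL', 'PASS']
+ ```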
99
+ ---
100
+
101
+ ## 🧠 Advanced Testing with Seed Data
102
+
103
+ To test **specific business workflows** (e.g., successfully retrieving a user profile), you can provide "Seed Data". This tells PanDoraSpec to use known, valid values instead of random fuzzing data.
104
+
105
+ ```bash
106
+ pandoraspec https://petstore.swagger.io/v2/swagger.json --config seed_parameters.yaml
107
+ ```
108
+
109
+ ### Configuration Hierarchy
110
+ You can define seed values at three levels of specificity. The engine resolves values in this order: **Endpoints > Verbs > General**.
111
+
112
+ ```yaml
113
+ seed_data:
114
+ # 1. General: Applies to EVERYTHING (path params, query params, headers)
115
+ general:
116
+ username: "test_user"
117
+ limit: 50
118
+
119
+ # 2. Verbs: Applies only to specific HTTP methods (Overwrites General)
120
+ verbs:
121
+ POST:
122
+ username: "admin_user" # Creation requests use a different user
123
+
124
+ # 3. Endpoints: Applies only to specific routes (Overwrites Everything)
125
+ endpoints:
126
+ /users/me:
127
+ GET:
128
+ limit: 10
129
+ ```
130
+
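+ The resolution order above boils down to three successive dict merges, where later levels overwrite earlier ones. A minimal sketch (illustrative names, not the engine's API):
+
+ ```python
+ # Seed resolution sketch: General < Verbs < Endpoints.
+ # Each dict.update() overwrites earlier values, mirroring the documented precedence.
+ seed_data = {
+     "general": {"username": "test_user", "limit": 50},
+     "verbs": {"POST": {"username": "admin_user"}},
+     "endpoints": {"/users/me": {"GET": {"limit": 10}}},
+ }
+
+ def resolve_seeds(seed_data, method, path):
+     merged = dict(seed_data.get("general", {}))                                   # 1. General
+     merged.update(seed_data.get("verbs", {}).get(method, {}))                     # 2. Verb
+     merged.update(seed_data.get("endpoints", {}).get(path, {}).get(method, {}))   # 3. Endpoint
+     return merged
+
+ print(resolve_seeds(seed_data, "GET", "/users/me"))
+ # → {'username': 'test_user', 'limit': 10}
+ ```
+
+ A `GET /users/me` keeps the general `username` but takes the endpoint-level `limit`; a `POST` anywhere swaps in `admin_user`.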
131
+ ### 🔗 Dynamic Seed Data (Chaining Requests)
132
+ You can even test **dependency chains** where one endpoint requires data from another (e.g., get a User ID from a search result to query their profile).
133
+
134
+ **Supported Features:**
135
+ - **Dynamic Resolution:** Fetch a value from another API call before running the test.
136
+ - **Extraction:** Extract values from JSON responses or plain text.
137
+ - **Parameter Interpolation:** Use `{param}` in the dependency URL to chain multiple steps.
138
+
139
+ ```yaml
140
+ endpoints:
141
+ /user/{username}:
142
+ GET:
143
+ username:
144
+ # 1. Fetch the user list first
145
+ # 2. Extract the 'username' field from the response
146
+ from_endpoint: "GET /users/search?role=admin"
147
+ extract: "data.items.0.username"
148
+
149
+ /orders/{orderId}:
150
+ GET:
151
+ orderId:
152
+ # 1. Use the {userId} from our general seeds
153
+ # 2. Call /users/{userId}/latest-order
154
+ # 3. Extract the ID using Regex from a message string
155
+ from_endpoint: "GET /users/{userId}/latest-order"
156
+ extract: "message"
157
+ regex: "Order ID: ([0-9]+)"
158
+ ```
159
+
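+ The two extraction steps above (a dot-path walk through the JSON, then an optional regex over the result) can be sketched like this; illustrative only, not the engine's implementation:
+
+ ```python
+ import re
+
+ # Dot-path extraction with numeric list indices, followed by optional regex.
+ def extract(payload, dot_path=None, regex=None):
+     value = payload
+     if dot_path:
+         for key in dot_path.split("."):
+             if isinstance(value, dict):
+                 value = value.get(key)
+             elif isinstance(value, list) and key.isdigit() and int(key) < len(value):
+                 value = value[int(key)]
+             else:
+                 return None
+     if regex is not None and value is not None:
+         m = re.search(regex, str(value))
+         value = (m.group(1) if m.groups() else m.group(0)) if m else None
+     return value
+
+ response = {"data": {"items": [{"username": "admin"}]}, "message": "Order ID: 42"}
+ print(extract(response, "data.items.0.username"))           # → admin
+ print(extract(response, "message", r"Order ID: ([0-9]+)"))  # → 42
+ ```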
160
+ ---
161
+
162
+ ## 🛡️ What It Checks
163
+
164
+ ### Module A: The Integrity Test (Drift)
165
+ Checks if your API implementation matches your documentation.
166
+ - **Why?** DORA requires you to monitor whether the service effectively supports your critical functions. If the API behaves differently than documented, it's a risk.
167
+
168
+ ### Module B: The Resilience Test
169
+ Stress tests the API to ensure it handles invalid inputs gracefully (`4xx` vs `5xx`).
170
+ - **Why?** DORA Article 25 calls for "Digital operational resilience testing".
171
+
172
+ ### Module C: Security Hygiene
173
+ Checks for common security headers and configurations.
174
+
175
+ ### Module D: The Report
176
+ Generates a PDF report: **"DORA ICT Third-Party Technical Risk Assessment"**.
177
+
178
+ ---
179
+
180
+ ## 📄 License
181
+
182
+ MIT
@@ -0,0 +1,167 @@
1
+ # PanDoraSpec
2
+
3
+ **The Open DORA Compliance Engine for OpenAPI Specs.**
4
+
5
+ PanDoraSpec is a CLI tool that performs deep technical due diligence on APIs to verify compliance with **DORA (Digital Operational Resilience Act)** requirements. It compares OpenAPI/Swagger specifications against the live implementation to detect schema drift, resilience gaps, and security issues.
6
+
7
+ ---
8
+
9
+ ## 📦 Installation
10
+
11
+ ```bash
12
+ pip install pandoraspec
13
+ ```
14
+
15
+ ### System Requirements
16
+ The PDF report generation requires `weasyprint`, which depends on **Pango**.
17
+
18
+ **macOS:**
19
+ ```bash
20
+ brew install pango
21
+ ```
22
+
23
+ **Debian / Ubuntu:**
24
+ ```bash
25
+ sudo apt-get install libpango-1.0-0 libpangoft2-1.0-0
26
+ ```
27
+
28
+ ## 🛠️ Development Setup
29
+
30
+ To run the CLI locally without reinstalling after every change:
31
+
32
+ 1. **Clone & CD**:
33
+ ```bash
34
+ git clone ...
35
+ cd pandoraspec
36
+ ```
37
+
38
+ 2. **Create & Activate Virtual Environment**:
39
+ It's recommended to use a virtual environment to keep dependencies isolated.
40
+ ```bash
41
+ python3 -m venv venv
42
+ source venv/bin/activate # On Windows: venv\Scripts\activate
43
+ ```
44
+
45
+ 3. **Editable Install**:
46
+ ```bash
47
+ pip install -e .
48
+ ```
49
+ This links the `pandoraspec` command directly to your source code. Any changes you make will be reflected immediately.
50
+
51
+ ## 🚀 Usage
52
+
53
+ Run the audit directly from your terminal.
54
+
55
+ ### Basic Scan
56
+ ```bash
57
+ pandoraspec https://petstore.swagger.io/v2/swagger.json
58
+ ```
59
+
60
+ ### With Options
61
+ ```bash
62
+ pandoraspec https://api.example.com/spec.json --vendor "Stripe" --key "sk_live_..."
63
+ ```
64
+
65
+ ### Local File
66
+ ```bash
67
+ pandoraspec ./openapi.yaml
68
+ ```
69
+
70
+ ---
71
+
72
+ ## 🏎️ Zero-Config Testing (DORA Compliance)
73
+
74
+ For standard **DORA compliance**, you simply need to verify that your API implementation matches its specification. **No configuration is required.**
75
+
76
+ ```bash
77
+ pandoraspec https://petstore.swagger.io/v2/swagger.json
78
+ ```
79
+
80
+ This runs a **fuzzing** audit where random data is generated based on your schema types (e.g., sending random integers for IDs).
81
+ - **Value:** This is sufficient to prove that your API correctly handles unexpected inputs and adheres to the basic contract (e.g., returning 400 Bad Request instead of 500 Server Error).
82
+ - **Limitation:** Detailed business logic requiring valid IDs (e.g., `GET /user/{id}` where `{id}` must exist) may return `404 Not Found`. This is acceptable for a compliance scan but may not fully exercise deeper code paths.
83
+
84
+ ---
85
+
86
+ ## 🧠 Advanced Testing with Seed Data
87
+
88
+ To test **specific business workflows** (e.g., successfully retrieving a user profile), you can provide "Seed Data". This tells PanDoraSpec to use known, valid values instead of random fuzzing data.
89
+
90
+ ```bash
91
+ pandoraspec https://petstore.swagger.io/v2/swagger.json --config seed_parameters.yaml
92
+ ```
93
+
94
+ ### Configuration Hierarchy
95
+ You can define seed values at three levels of specificity. The engine resolves values in this order: **Endpoints > Verbs > General**.
96
+
97
+ ```yaml
98
+ seed_data:
99
+ # 1. General: Applies to EVERYTHING (path params, query params, headers)
100
+ general:
101
+ username: "test_user"
102
+ limit: 50
103
+
104
+ # 2. Verbs: Applies only to specific HTTP methods (Overwrites General)
105
+ verbs:
106
+ POST:
107
+ username: "admin_user" # Creation requests use a different user
108
+
109
+ # 3. Endpoints: Applies only to specific routes (Overwrites Everything)
110
+ endpoints:
111
+ /users/me:
112
+ GET:
113
+ limit: 10
114
+ ```
115
+
116
+ ### 🔗 Dynamic Seed Data (Chaining Requests)
117
+ You can even test **dependency chains** where one endpoint requires data from another (e.g., get a User ID from a search result to query their profile).
118
+
119
+ **Supported Features:**
120
+ - **Dynamic Resolution:** Fetch a value from another API call before running the test.
121
+ - **Extraction:** Extract values from JSON responses or plain text.
122
+ - **Parameter Interpolation:** Use `{param}` in the dependency URL to chain multiple steps.
123
+
124
+ ```yaml
125
+ endpoints:
126
+ /user/{username}:
127
+ GET:
128
+ username:
129
+ # 1. Fetch the user list first
130
+ # 2. Extract the 'username' field from the response
131
+ from_endpoint: "GET /users/search?role=admin"
132
+ extract: "data.items.0.username"
133
+
134
+ /orders/{orderId}:
135
+ GET:
136
+ orderId:
137
+ # 1. Use the {userId} from our general seeds
138
+ # 2. Call /users/{userId}/latest-order
139
+ # 3. Extract the ID using Regex from a message string
140
+ from_endpoint: "GET /users/{userId}/latest-order"
141
+ extract: "message"
142
+ regex: "Order ID: ([0-9]+)"
143
+ ```
144
+
145
+ ---
146
+
147
+ ## 🛡️ What It Checks
148
+
149
+ ### Module A: The Integrity Test (Drift)
150
+ Checks if your API implementation matches your documentation.
151
+ - **Why?** DORA requires you to monitor whether the service effectively supports your critical functions. If the API behaves differently than documented, it's a risk.
152
+
153
+ ### Module B: The Resilience Test
154
+ Stress tests the API to ensure it handles invalid inputs gracefully (`4xx` vs `5xx`).
155
+ - **Why?** DORA Article 25 calls for "Digital operational resilience testing".
156
+
157
+ ### Module C: Security Hygiene
158
+ Checks for common security headers and configurations.
159
+
160
+ ### Module D: The Report
161
+ Generates a PDF report: **"DORA ICT Third-Party Technical Risk Assessment"**.
162
+
163
+ ---
164
+
165
+ ## 📄 License
166
+
167
+ MIT
File without changes
@@ -0,0 +1,85 @@
1
+ import typer
2
+ import yaml
3
+ import os
4
+ from rich.console import Console
5
+ from rich.table import Table
6
+ from rich.panel import Panel
7
+ from .core import AuditEngine
8
+ from .reporting import generate_report
9
+
10
+ app = typer.Typer(help="DORA Audit CLI - Verify Compliance of OpenAPI Specs")
11
+ console = Console()
12
+
13
+ def load_config(config_path: str):
14
+ if os.path.exists(config_path):
15
+ with open(config_path, "r") as f:
16
+ return yaml.safe_load(f)
17
+ return {}
18
+
19
+ def run_audit(
20
+ target: str = typer.Argument(..., help="URL or path to OpenAPI schema"),
21
+ api_key: str = typer.Option(None, "--key", "-k", help="API Key for authenticated endpoints"),
22
+ vendor: str = typer.Option("Vendor", "--vendor", "-v", help="Vendor name for the report"),
23
+ config: str = typer.Option(None, "--config", "-c", help="Path to .yaml configuration file")
24
+ ):
25
+ """
26
+ Run a DORA audit against an OpenAPI schema.
27
+ """
28
+ console.print(Panel(f"[bold blue]Starting DORA Audit for {vendor}[/bold blue]", border_style="blue"))
29
+ console.print(f"🔎 Scanning [bold]{target}[/bold]...")
30
+
31
+ # 1. Load Config
32
+ seed_data = {}
33
+ if config:
34
+ config_data = load_config(config)
35
+ seed_data = config_data.get("seed_data", {})
36
+
37
+ if seed_data:
38
+ console.print(f"[green]Loaded seed data from {config}[/green]")
39
+
40
+ try:
41
+ # 2. Pass seed_data to Engine
42
+ engine = AuditEngine(target=target, api_key=api_key, seed_data=seed_data)
43
+
44
+ results = engine.run_full_audit()
45
+
46
+ # Display Summary Table
47
+ table = Table(title="Audit Summary")
48
+ table.add_column("Module", style="cyan", no_wrap=True)
49
+ table.add_column("Status", style="bold")
50
+ table.add_column("Issues (Pass/Fail)", style="magenta")
51
+
52
+ # Drift
53
+ drift_pass = len([r for r in results["drift_check"] if r.get("status") == "PASS"])
54
+ drift_fail = len([r for r in results["drift_check"] if r.get("status") != "PASS"])
55
+ drift_status = "[bold red]FAIL[/bold red]" if drift_fail > 0 else "[bold green]PASS[/bold green]"
56
+ table.add_row("Module A: Integrity", drift_status, f"{drift_pass} / {drift_fail}")
57
+
58
+ # Resilience
59
+ res_pass = len([r for r in results["resilience"] if r.get("status") == "PASS"])
60
+ res_fail = len([r for r in results["resilience"] if r.get("status") != "PASS"])
61
+ res_status = "[bold red]FAIL[/bold red]" if res_fail > 0 else "[bold green]PASS[/bold green]"
62
+ table.add_row("Module B: Resilience", res_status, f"{res_pass} / {res_fail}")
63
+
64
+ # Security
65
+ sec_pass = len([r for r in results["security"] if r.get("status") == "PASS"])
66
+ sec_fail = len([r for r in results["security"] if r.get("status") != "PASS"])
67
+ sec_status = "[bold red]FAIL[/bold red]" if sec_fail > 0 else "[bold green]PASS[/bold green]"
68
+ table.add_row("Module C: Security", sec_status, f"{sec_pass} / {sec_fail}")
69
+
70
+ console.print(table)
71
+
72
+ # Generate Report
73
+ report_path = generate_report(vendor, results)
74
+
75
+ console.print(Panel(f"[bold green]Audit Complete![/bold green]\n📄 Report generated: [link={report_path}]{report_path}[/link]", border_style="green"))
76
+
77
+ except Exception as e:
78
+ console.print(f"[bold red]Error:[/bold red] {str(e)}")
79
+ raise typer.Exit(code=1)
80
+
81
+ def main():
82
+ typer.run(run_audit)
83
+
84
+ if __name__ == "__main__":
85
+ main()
@@ -0,0 +1,470 @@
1
+ import schemathesis
2
+ from typing import List, Dict, Any
3
+ import requests
4
+ import re
5
+ from schemathesis import checks
6
+ from schemathesis.specs.openapi import checks as oai_checks
7
+ from schemathesis.checks import CheckContext, ChecksConfig
8
+ import html
9
+ import os
10
+
11
+ class AuditEngine:
12
+ def __init__(self, target: str, api_key: str = None, seed_data: Dict[str, Any] = None):
13
+ self.target = target
14
+ self.api_key = api_key
15
+ self.seed_data = seed_data or {}
16
+ self.base_url = None
17
+ self.dynamic_cache = {} # Cache for dynamic seed values
18
+
19
+ try:
20
+ if os.path.exists(target) and os.path.isfile(target):
21
+ print(f"DEBUG: Loading schema from local file: {target}")
22
+ self.schema = schemathesis.openapi.from_path(target)
23
+ else:
24
+ self.schema = schemathesis.openapi.from_url(target)
25
+
26
+ # Priority 1: Extract from the 'servers' field in the spec
27
+ resolved_url = None
28
+ if hasattr(self.schema, "raw_schema"):
29
+ servers = self.schema.raw_schema.get("servers", [])
30
+ if servers and isinstance(servers, list) and len(servers) > 0:
31
+ spec_server_url = servers[0].get("url")
32
+ if spec_server_url:
33
+ resolved_url = spec_server_url
34
+ print(f"DEBUG: Found server URL in specification: {resolved_url}")
35
+
36
+ # Priority 2: Use whatever schemathesis resolved automatically (fallback)
37
+ if not resolved_url:
38
+ resolved_url = getattr(self.schema, "base_url", None)
39
+ print(f"DEBUG: Falling back to Schemathesis resolved base_url: {resolved_url}")
40
+
41
+ if not resolved_url and self.target and not os.path.exists(self.target):
42
+ # Fallback: Derive from target URL (e.g., remove swagger.json)
43
+ try:
44
+ from urllib.parse import urlparse, urlunparse
45
+ parsed = urlparse(self.target)
46
+ path_parts = parsed.path.split('/')
47
+ # Simple heuristic: remove the last segment (e.g. swagger.json) to get base
48
+ if '.' in path_parts[-1]:
49
+ path_parts.pop()
50
+ new_path = '/'.join(path_parts)
51
+ resolved_url = urlunparse(parsed._replace(path=new_path))
52
+ print(f"DEBUG: Derived base_url from schema_url: {resolved_url}")
53
+ except Exception as e:
54
+ print(f"DEBUG: Failed to derive base_url from schema_url: {e}")
55
+
56
+ print(f"DEBUG: Final resolved base_url for engine: {resolved_url}")
57
+ self.base_url = resolved_url
58
+ if resolved_url:
59
+ try:
60
+ self.schema.base_url = resolved_url
61
+ except Exception:
62
+ pass
63
+ except Exception as e:
64
+ # Handle invalid URL or schema loading error gracefully
65
+ print(f"Error loading schema: {e}")
66
+ if target and (target.startswith("http") or os.path.exists(target)):
67
+ pass # Allow to continue if it's just a warning, but schemathesis might fail later
68
+ else:
69
+ raise ValueError(f"Failed to load OpenAPI schema from {target}. Error: {str(e)}")
70
+
71
+ def _resolve_dynamic_value(self, config_value: Any) -> Any:
72
+ """Resolves dynamic seed values like `from_endpoint`"""
73
+ if not isinstance(config_value, dict) or "from_endpoint" not in config_value:
74
+ return config_value
75
+
76
+ endpoint_def = config_value["from_endpoint"]
77
+ if endpoint_def in self.dynamic_cache:
78
+ return self.dynamic_cache[endpoint_def]
79
+
80
+ try:
81
+ method, path = endpoint_def.split(" ", 1)
82
+
83
+ # Interpolate path parameters (e.g., /user/{id}) from general seeds
84
+ if '{' in path:
85
+ general_seeds = self.seed_data.get('general', {})
86
+
87
+ def replace_param(match):
88
+ param_name = match.group(1)
89
+ if param_name in general_seeds:
90
+ return str(general_seeds[param_name])
91
+ print(f"WARNING: Missing seed value for {{{param_name}}} in dynamic endpoint {endpoint_def}")
92
+ return match.group(0) # Leave as is
93
+
94
+ path = re.sub(r"\{([a-zA-Z0-9_]+)\}", replace_param, path)
95
+
96
+ url = f"{self.base_url.rstrip('/')}/{path.lstrip('/')}"
97
+
98
+ headers = {}
99
+ if self.api_key:
100
+ auth_header = self.api_key if self.api_key.lower().startswith("bearer ") else f"Bearer {self.api_key}"
101
+ headers["Authorization"] = auth_header
102
+
103
+ print(f"AUDIT LOG: Resolving dynamic seed from {method} {path}")
104
+ response = requests.request(method, url, headers=headers, timeout=30)
105
+
106
+ if response.status_code >= 400:
107
+ print(f"WARNING: Dynamic seed request failed with {response.status_code}")
108
+ return None
109
+
110
+ result = None
111
+ extract_key = config_value.get("extract")
112
+ regex_pattern = config_value.get("regex")
113
+
114
+ # JSON Extraction
115
+ if extract_key:
116
+ try:
117
+ json_data = response.json()
118
+ # Simple key traversal for now (e.g. 'data.id')
119
+ keys = extract_key.split('.')
120
+ val = json_data
121
+ for k in keys:
122
+ if isinstance(val, dict):
123
+ val = val.get(k)
124
+ else:
125
+ val = None
126
+ break
127
+ result = val
128
+ except Exception:
129
+ print("WARNING: Failed to parse JSON or extract key")
130
+ else:
131
+ # Default to text body
132
+ result = response.text
133
+
134
+ # Regex Extraction
135
+ if regex_pattern and result is not None:
136
+ match = re.search(regex_pattern, str(result))
137
+ if match:
138
+ # Return first group if exists, else the whole match
139
+ result = match.group(1) if match.groups() else match.group(0)
140
+
141
+ self.dynamic_cache[endpoint_def] = result
142
+ return result
143
+
144
+ except Exception as e:
145
+ print(f"ERROR: Failed to resolve dynamic seed: {e}")
146
+ return None
147
+
148
+ def _apply_seed_data(self, case):
149
+ """Helper to inject seed data into test cases with hierarchy: General < Verbs < Endpoints"""
150
+ if not self.seed_data:
151
+ return
152
+
153
+ # Determine if using hierarchical structure
154
+ is_hierarchical = any(k in self.seed_data for k in ['general', 'verbs', 'endpoints'])
155
+
156
+ if is_hierarchical:
157
+ # 1. Start with General
158
+ merged_data = self.seed_data.get('general', {}).copy()
159
+
160
+ # 2. Apply Verb-specific
161
+ if hasattr(case, 'operation'):
162
+ method = case.operation.method.upper()
163
+ path = case.operation.path
164
+
165
+ verb_data = self.seed_data.get('verbs', {}).get(method, {})
166
+ merged_data.update(verb_data)
167
+
168
+ # 3. Apply Endpoint-specific
169
+ # precise match on path template
170
+ endpoint_data = self.seed_data.get('endpoints', {}).get(path, {}).get(method, {})
171
+ merged_data.update(endpoint_data)
172
+ else:
173
+ # Legacy flat structure
174
+ merged_data = self.seed_data.copy() # Copy to avoid mutating original config
175
+
176
+ # Resolve dynamic values for the final merged dataset
177
+ resolved_data = {}
178
+ for k, v in merged_data.items():
179
+ resolved_val = self._resolve_dynamic_value(v)
180
+ if resolved_val is not None:
181
+ resolved_data[k] = resolved_val
182
+
183
+ # Inject into Path Parameters (e.g., /users/{userId})
184
+ if hasattr(case, 'path_parameters') and case.path_parameters:
185
+ for key in case.path_parameters:
186
+ if key in resolved_data:
187
+ case.path_parameters[key] = resolved_data[key]
188
+
189
+ # Inject into Query Parameters (e.g., ?status=active)
190
+ if hasattr(case, 'query') and case.query:
191
+ for key in case.query:
192
+ if key in resolved_data:
193
+ case.query[key] = resolved_data[key]
194
+
195
+ # Inject into Headers (e.g., X-Tenant-ID)
196
+ if hasattr(case, 'headers') and case.headers:
197
+ for key in case.headers:
198
+ if key in resolved_data:
199
+ case.headers[key] = str(resolved_data[key])
200
+
201
+ def run_drift_check(self) -> List[Dict]:
202
+ """
203
+ Module A: The 'Docs vs. Code' Drift Check (The Integrity Test)
204
+ Uses schemathesis to verify if the API implementation matches the spec.
205
+ """
206
+ results = []
207
+ # Mapping check names to actual functions
208
+ check_map = {
209
+ "not_a_server_error": checks.not_a_server_error,
210
+ "status_code_conformance": oai_checks.status_code_conformance,
211
+ "response_schema_conformance": oai_checks.response_schema_conformance
212
+ }
213
+ check_names = list(check_map.keys())
214
+
215
+ # Schemathesis 4.x checks require a context object
216
+ checks_config = ChecksConfig()
217
+ check_ctx = CheckContext(
218
+ override=None,
219
+ auth=None,
220
+ headers=None,
221
+ config=checks_config,
222
+ transport_kwargs=None,
223
+ )
224
+
225
+ for op in self.schema.get_all_operations():
226
+ # Handle Result type (Ok/Err) wrapping if present
227
+ operation = op.ok() if hasattr(op, "ok") else op
228
+
229
+ try:
230
+ # Generate test case
231
+ try:
232
+ case = operation.as_strategy().example()
233
+ except Exception:
234
+ try:
235
+ cases = list(operation.make_case())
236
+ case = cases[0] if cases else None
237
+ except Exception:
238
+ case = None
239
+
240
+ if not case:
241
+ continue
242
+
243
+ self._apply_seed_data(case)
244
+
245
+ formatted_path = operation.path
246
+ if case.path_parameters:
247
+ for key, value in case.path_parameters.items():
248
+ formatted_path = formatted_path.replace(f"{{{key}}}", f"{{{key}:{value}}}")
249
+
250
+ print(f"AUDIT LOG: Testing endpoint {operation.method.upper()} {formatted_path}")
251
+
252
+ headers = {}
253
+ if self.api_key:
254
+ auth_header = self.api_key if self.api_key.lower().startswith("bearer ") else f"Bearer {self.api_key}"
255
+ headers["Authorization"] = auth_header
256
+
257
+ # Call the API
258
+ target_url = f"{self.base_url.rstrip('/')}/{formatted_path.lstrip('/')}"
259
+ print(f"AUDIT LOG: Calling {operation.method.upper()} {target_url}")
260
+
261
+ response = case.call(base_url=self.base_url, headers=headers)
262
+ print(f"AUDIT LOG: Response Status Code: {response.status_code}")
263
+
264
+ # We manually call the check function to ensure arguments are passed correctly.
265
+ for check_name in check_names:
266
+ check_func = check_map[check_name]
267
+ try:
268
+ # Direct call: check_func(ctx, response, case)
269
+ check_func(check_ctx, response, case)
270
+
271
+ # If we get here, the check passed
272
+ results.append({
273
+ "module": "A",
274
+ "endpoint": f"{operation.method.upper()} {operation.path}",
275
+ "issue": f"{check_name} - Passed",
276
+ "status": "PASS",
277
+ "severity": "INFO",
278
+ "details": f"Status: {response.status_code}"
279
+ })
280
+
281
+ except AssertionError as e:
282
+ # This catches actual drift (e.g., Schema validation failed)
283
+ # Capture and format detailed error info
284
+ validation_errors = []
285
+
286
+ # Safely get causes if they exist and are iterable
287
+ causes = getattr(e, "causes", None)
288
+ if causes:
289
+ for cause in causes:
290
+ if hasattr(cause, "message"):
291
+ validation_errors.append(cause.message)
292
+ else:
293
+ validation_errors.append(str(cause))
294
+
295
+ if not validation_errors:
296
+ validation_errors.append(str(e) or "Validation failed")
297
+
298
+ err_msg = "<br>".join(validation_errors)
299
+ safe_err = html.escape(err_msg)
300
+
301
+ # Add helpful context (Status & Body Preview)
302
+ context_msg = f"Status: {response.status_code}"
303
+ try:
304
+ if response.content:
305
+ preview = response.text[:500]
306
+ safe_preview = html.escape(preview)
307
+ context_msg += f"<br>Response: {safe_preview}"
308
+ except Exception:
309
+ pass
310
+
311
+ full_details = f"<strong>Error:</strong> {safe_err}<br><br><strong>Context:</strong><br>{context_msg}"
312
+
313
+ print(f"AUDIT LOG: Validation {check_name} failed: {err_msg}")
314
+ results.append({
315
+ "module": "A",
316
+ "endpoint": f"{operation.method.upper()} {operation.path}",
317
+ "issue": f"Schema Drift Detected ({check_name})",
318
+ "status": "FAIL",
319
+ "details": full_details,
320
+ "severity": "HIGH"
321
+ })
322
+ except Exception as e:
323
+ # This catches unexpected coding errors
324
+ print(f"AUDIT LOG: Error executing check {check_name}: {str(e)}")
325
+ results.append({
326
+ "module": "A",
327
+ "endpoint": f"{operation.method.upper()} {operation.path}",
328
+ "issue": f"Check Execution Error ({check_name})",
329
+ "status": "FAIL",
330
+ "details": str(e),
331
+ "severity": "HIGH"
332
+ })
333
+
334
+ except Exception as e:
335
+ print(f"AUDIT LOG: Critical Error during endpoint test: {str(e)}")
336
+ continue
337
+
338
+ return results
339
+
340
+ def run_resilience_tests(self) -> List[Dict]:
341
+ """
342
+ Module B: The 'Resilience' Stress Test (Art. 24 & 25)
343
+ Checks that the API enforces rate limiting and degrades gracefully under request flooding.
344
+ """
345
+ results = []
346
+ ops = list(self.schema.get_all_operations())
347
+ if not ops:
348
+ return []
349
+
350
+ print("AUDIT LOG: Starting Module B: Resilience Stress Test (flooding requests)...")
351
+
352
+
353
+ operation = ops[0].ok() if hasattr(ops[0], "ok") else ops[0]
354
+
355
+ # Simulate flooding
356
+ responses = []
357
+ for _ in range(50):
358
+ try:
359
+ case = operation.as_strategy().example()
360
+ except Exception:
361
+ try:
362
+ cases = list(operation.make_case())
363
+ case = cases[0] if cases else None
364
+ except Exception:
365
+ case = None
366
+
367
+ if case:
368
+ self._apply_seed_data(case)
369
+
370
+ headers = {}
371
+ if self.api_key:
372
+ auth_header = self.api_key if self.api_key.lower().startswith("bearer ") else f"Bearer {self.api_key}"
373
+ headers["Authorization"] = auth_header
374
+
375
+ responses.append(case.call(base_url=self.base_url, headers=headers))
376
+
377
+ has_429 = any(r.status_code == 429 for r in responses)
378
+ has_500 = any(r.status_code == 500 for r in responses)
379
+
380
+ if not has_429 and has_500:
381
+ results.append({
382
+ "module": "B",
383
+ "issue": "Poor Resilience: 500 Error during flood",
384
+ "status": "FAIL",
385
+ "details": "The API returned 500 Internal Server Error instead of 429 Too Many Requests when flooded.",
386
+ "severity": "CRITICAL"
387
+ })
388
+ elif not has_429:
389
+ results.append({
390
+ "module": "B",
391
+ "issue": "No Rate Limiting Enforced",
392
+ "status": "FAIL",
393
+ "details": "The API did not return 429 Too Many Requests during high volume testing.",
394
+ "severity": "MEDIUM"
395
+ })
396
+ else:
397
+ results.append({
398
+ "module": "B",
399
+ "issue": "Rate Limiting Functional",
400
+ "status": "PASS",
401
+ "details": "The API correctly returned 429 Too Many Requests when flooded.",
402
+ "severity": "INFO"
403
+ })
404
+
405
+ if not has_500:
406
+ results.append({
407
+ "module": "B",
408
+ "issue": "Stress Handling",
409
+ "status": "PASS",
410
+ "details": "No 500 Internal Server Errors were observed during stress testing.",
411
+ "severity": "INFO"
412
+ })
413
+
414
+ return results
415
+
416
+ def run_security_hygiene(self) -> List[Dict]:
417
+ """
418
+ Module C: Security Hygiene Check
419
+ Checks for TLS and Auth leakage in URL.
420
+ """
421
+ results = []
422
+ print(f"AUDIT LOG: Checking Security Hygiene for base URL: {self.base_url}")
423
+ if self.base_url and not self.base_url.startswith("https"):
424
+ results.append({
425
+ "module": "C",
426
+ "issue": "Insecure Connection (No TLS)",
427
+ "status": "FAIL",
428
+ "details": "The API base URL does not use HTTPS.",
429
+ "severity": "CRITICAL"
430
+ })
431
+ else:
432
+ results.append({
433
+ "module": "C",
434
+ "issue": "Secure Connection (TLS)",
435
+ "status": "PASS",
436
+ "details": "The API uses HTTPS.",
437
+ "severity": "INFO"
438
+ })
439
+
440
+ auth_leakage_found = False
441
+ for op in self.schema.get_all_operations():
442
+ operation = op.ok() if hasattr(op, "ok") else op
443
+ endpoint = operation.path
444
+ if "key" in endpoint.lower() or "token" in endpoint.lower():
445
+ auth_leakage_found = True
446
+ results.append({
447
+ "module": "C",
448
+ "issue": "Auth Leakage Risk",
449
+ "status": "FAIL",
450
+ "details": f"Endpoint '{endpoint}' indicates auth tokens might be passed in the URL.",
451
+ "severity": "HIGH"
452
+ })
453
+
454
+ if not auth_leakage_found:
455
+ results.append({
456
+ "module": "C",
457
+ "issue": "No Auth Leakage in URLs",
458
+ "status": "PASS",
459
+ "details": "No endpoints found with 'key' or 'token' in the path, suggesting safe header-based auth.",
460
+ "severity": "INFO"
461
+ })
462
+
463
+ return results
464
+
465
+ def run_full_audit(self) -> Dict:
466
+ return {
467
+ "drift_check": self.run_drift_check(),
468
+ "resilience": self.run_resilience_tests(),
469
+ "security": self.run_security_hygiene()
470
+ }
@@ -0,0 +1,209 @@
1
+ import os
2
+ from datetime import datetime
3
+ from jinja2 import Environment, FileSystemLoader
4
+ from weasyprint import HTML
5
+
6
+ TEMPLATE_DIR = os.path.join(os.path.dirname(__file__), "templates")
7
+ REPORTS_DIR = "reports"
8
+
9
+ if not os.path.exists(REPORTS_DIR):
10
+ os.makedirs(REPORTS_DIR)
11
+
12
+ def generate_report(vendor_name: str, audit_results: dict) -> str:
13
+ """
14
+ Module D: The Compliance Report (The Deliverable)
15
+ Generates a branded PDF report.
16
+ """
17
+ # Calculate score (simple MVP logic)
18
+ # Filter out PASS results for scoring
19
+ drift_issues = [r for r in audit_results["drift_check"] if r.get("status") != "PASS"]
20
+ resilience_issues = [r for r in audit_results["resilience"] if r.get("status") != "PASS"]
21
+ security_issues = [r for r in audit_results["security"] if r.get("status") != "PASS"]
22
+
23
+ drift_score = max(0, 100 - len(drift_issues) * 10)
24
+ resilience_score = max(0, 100 - len(resilience_issues) * 15)
25
+ security_score = max(0, 100 - len(security_issues) * 20)
26
+
27
+ total_score = (drift_score + resilience_score + security_score) / 3
28
+
29
+ # Pass/Fail based on score
30
+ is_compliant = total_score >= 80
31
+
32
+ context = {
33
+ "vendor_name": vendor_name,
34
+ "date": datetime.now().strftime("%Y-%m-%d"),
35
+ "score": round(total_score),
36
+ "is_compliant": is_compliant,
37
+ "results": audit_results
38
+ }
39
+
40
+ # Helper to render findings tables
41
+ def render_findings_table(module_name, findings):
42
+ if not findings:
43
+ return f"<p class='no-issues'>✅ No issues found in {module_name}.</p>"
44
+
45
+ rows = ""
46
+ for f in findings:
47
+ endpoint = f.get('endpoint', 'Global')
48
+ status = f.get('status', 'FAIL')
49
+
50
+ if status == "PASS":
51
+ severity_class = "pass"
52
+ severity_text = "PASS"
53
+ else:
54
+ severity_class = f.get('severity', 'LOW').lower()
55
+ severity_text = f.get('severity', 'LOW')
56
+
57
+ rows += f"""
58
+ <tr>
59
+ <td><span class="badge badge-{severity_class}">{severity_text}</span></td>
60
+ <td><code>{endpoint}</code></td>
61
+ <td><strong>{f.get('issue')}</strong></td>
62
+ <td>{f.get('details')}</td>
63
+ </tr>
64
+ """
65
+
66
+ return f"""
67
+ <table>
68
+ <thead>
69
+ <tr>
70
+ <th style="width: 10%">Status</th>
71
+ <th style="width: 25%">Endpoint</th>
72
+ <th style="width: 25%">Issue</th>
73
+ <th>Technical Details</th>
74
+ </tr>
75
+ </thead>
76
+ <tbody>
77
+ {rows}
78
+ </tbody>
79
+ </table>
80
+ """
81
+
82
+ html_content = f"""
83
+ <html>
84
+ <head>
85
+ <style>
86
+ @page {{ margin: 50px; }}
87
+ body {{ font-family: 'Inter', 'Helvetica Neue', Helvetica, Arial, sans-serif; color: #1e293b; line-height: 1.5; }}
88
+ .header {{
89
+ background: linear-gradient(135deg, #1e1b4b 0%, #4338ca 100%);
90
+ color: white;
91
+ padding: 40px;
92
+ text-align: center;
93
+ border-radius: 12px;
94
+ margin-bottom: 40px;
95
+ }}
96
+ .header h1 {{ margin: 0; font-size: 28px; letter-spacing: -0.5px; }}
97
+ .header p {{ margin: 10px 0 0; opacity: 0.8; font-size: 16px; }}
98
+
99
+ .summary-grid {{ display: flex; justify-content: space-between; margin-bottom: 40px; }}
100
+ .summary-card {{
101
+ background: #f8fafc;
102
+ padding: 20px;
103
+ border-radius: 8px;
104
+ width: 45%;
105
+ border: 1px solid #e2e8f0;
106
+ }}
107
+
108
+ .score-big {{ font-size: 56px; font-weight: 800; margin: 10px 0; }}
109
+ .status-badge {{
110
+ display: inline-block;
111
+ padding: 6px 16px;
112
+ border-radius: 99px;
113
+ font-weight: 700;
114
+ text-transform: uppercase;
115
+ font-size: 14px;
116
+ }}
117
+ .status-pass {{ background: #dcfce7; color: #166534; }}
118
+ .status-fail {{ background: #fee2e2; color: #991b1b; }}
119
+
120
+ .section {{ margin-top: 40px; page-break-inside: avoid; }}
121
+ .section h3 {{ border-bottom: 2px solid #e2e8f0; padding-bottom: 10px; color: #0f172a; margin-bottom: 20px; }}
122
+
123
+ table {{ width: 100%; border-collapse: collapse; margin-top: 10px; font-size: 12px; table-layout: fixed; word-wrap: break-word; }}
124
+ th, td {{ padding: 12px; text-align: left; border-bottom: 1px solid #e2e8f0; vertical-align: top; }}
125
+ td {{ word-break: break-word; white-space: pre-wrap; }}
126
+ th {{ background-color: #f1f5f9; color: #475569; font-weight: 600; text-transform: uppercase; letter-spacing: 0.5px; }}
127
+
128
+ .badge {{
129
+ padding: 4px 8px;
130
+ border-radius: 4px;
131
+ font-size: 10px;
132
+ font-weight: 700;
133
+ color: white;
134
+ text-transform: uppercase;
135
+ }}
136
+ .badge-critical {{ background: #ef4444; }}
137
+ .badge-high {{ background: #f97316; }}
138
+ .badge-medium {{ background: #eab308; }}
139
+ .badge-low {{ background: #3b82f6; }}
140
+ .badge-pass {{ background: #16a34a; }}
141
+
142
+ .no-issues {{ color: #059669; font-weight: 500; padding: 10px 0; }}
143
+ code {{ font-family: 'Courier New', monospace; background: #f1f5f9; padding: 2px 4px; border-radius: 3px; font-size: 11px; }}
144
+
145
+ footer {{ margin-top: 50px; text-align: center; color: #94a3b8; font-size: 10px; border-top: 1px solid #e2e8f0; padding-top: 20px; }}
146
+ </style>
147
+ </head>
148
+ <body>
149
+ <div class="header">
150
+ <h1>DORA ICT Third-Party Risk Assessment</h1>
151
+ <p>Vendor Compliance Audit for <strong>{vendor_name}</strong></p>
152
+ <p>Report Date: {datetime.now().strftime("%B %d, %Y")}</p>
153
+ </div>
154
+
155
+ <div class="summary-grid">
156
+ <div class="summary-card">
157
+ <p style="margin:0; color:#64748b; font-weight:600;">Overall Risk Score</p>
158
+ <div class="score-big" style="color: {'#166534' if is_compliant else '#991b1b'}">{round(total_score)}<span style="font-size: 24px; color:#94a3b8; font-weight:400;">/100</span></div>
159
+ </div>
160
+ <div class="summary-card">
161
+ <p style="margin:0; color:#64748b; font-weight:600;">Compliance Status</p>
162
+ <div style="margin-top: 20px;">
163
+ <span class="status-badge {'status-pass' if is_compliant else 'status-fail'}">
164
+ {'COMPLIANT (PASS)' if is_compliant else 'NON-COMPLIANT (FAIL)'}
165
+ </span>
166
+ </div>
167
+ </div>
168
+ </div>
169
+
170
+ <div class="section">
171
+ <h3>Technical Findings - Module A: Schema Integrity (Docs vs. Code)</h3>
172
+ <p style="font-size: 13px; color: #64748b; margin-bottom: 15px;">
173
+ This check verifies if the actual API implementation adheres to the provided OpenAPI specification.
174
+ Discrepancies here indicate "Schema Drift," which violates DORA requirements for accurate ICT documentation.
175
+ </p>
176
+ {render_findings_table("Module A", audit_results['drift_check'])}
177
+ </div>
178
+
179
+ <div class="section">
180
+ <h3>Technical Findings - Module B: Resilience Stress Test</h3>
181
+ <p style="font-size: 13px; color: #64748b; margin-bottom: 15px;">
182
+ Assesses high-load behavior and error handling (DORA Art. 24 & 25).
183
+ Checks if the system gracefully handles request flooding with appropriate 429 status codes.
184
+ </p>
185
+ {render_findings_table("Module B", audit_results['resilience'])}
186
+ </div>
187
+
188
+ <div class="section">
189
+ <h3>Technical Findings - Module C: Security Hygiene</h3>
190
+ <p style="font-size: 13px; color: #64748b; margin-bottom: 15px;">
191
+ Evaluates baseline security controls including TLS encryption and sensitive information leakage in URLs.
192
+ </p>
193
+ {render_findings_table("Module C", audit_results['security'])}
194
+ </div>
195
+
196
+ <footer>
197
+ <p>CONFIDENTIAL - FOR INTERNAL AUDIT PURPOSES ONLY</p>
198
+ <p>Generated by PanDoraSpec</p>
199
+ </footer>
200
+ </body>
201
+ </html>
202
+ """
203
+
204
+ filename = f"{vendor_name.replace(' ', '_')}_{datetime.now().strftime('%Y%m%d%H%M%S')}.pdf"
205
+ filepath = os.path.join(REPORTS_DIR, filename)
206
+
207
+ HTML(string=html_content).write_pdf(filepath)
208
+
209
+ return filepath
@@ -0,0 +1,182 @@
1
+ Metadata-Version: 2.4
2
+ Name: pandoraspec
3
+ Version: 0.2.0
4
+ Summary: DORA Compliance Auditor for OpenAPI Specs
5
+ Author-email: Ulises Merlan <ulimerlan@gmail.com>
6
+ License: MIT
7
+ Requires-Python: >=3.9
8
+ Description-Content-Type: text/markdown
9
+ Requires-Dist: schemathesis==4.9.1
10
+ Requires-Dist: typer[all]
11
+ Requires-Dist: rich
12
+ Requires-Dist: weasyprint
13
+ Requires-Dist: jinja2
14
+ Requires-Dist: requests
15
+
16
+ # PanDoraSpec
17
+
18
+ **The Open DORA Compliance Engine for OpenAPI Specs.**
19
+
20
+ PanDoraSpec is a CLI tool that performs deep technical due diligence on APIs to verify compliance with **DORA (Digital Operational Resilience Act)** requirements. It compares OpenAPI/Swagger specifications against real-world implementation to detect schema drift, resilience gaps, and security issues.
21
+
22
+ ---
23
+
24
+ ## 📦 Installation
25
+
26
+ ```bash
27
+ pip install pandoraspec
28
+ ```
29
+
30
+ ### System Requirements
31
+ The PDF report generation requires `weasyprint`, which depends on **Pango**.
32
+
33
+ **macOS:**
34
+ ```bash
35
+ brew install pango
36
+ ```
37
+
38
+ **Debian / Ubuntu:**
39
+ ```bash
40
+ sudo apt-get install libpango-1.0-0 libpangoft2-1.0-0
41
+ ```
42
+
43
+ ## 🛠️ Development Setup
44
+
45
+ To run the CLI locally without reinstalling after every change:
46
+
47
+ 1. **Clone & CD**:
48
+ ```bash
49
+ git clone ...
50
+ cd pandoraspec
51
+ ```
52
+
53
+ 2. **Create & Activate Virtual Environment**:
54
+ It's recommended to use a virtual environment to keep dependencies isolated.
55
+ ```bash
56
+ python3 -m venv venv
57
+ source venv/bin/activate # On Windows: venv\Scripts\activate
58
+ ```
59
+
60
+ 3. **Editable Install**:
61
+ ```bash
62
+ pip install -e .
63
+ ```
64
+ This links the `pandoraspec` command directly to your source code. Any changes you make will be reflected immediately.
65
+
66
+ ## 🚀 Usage
67
+
68
+ Run the audit directly from your terminal.
69
+
70
+ ### Basic Scan
71
+ ```bash
72
+ pandoraspec https://petstore.swagger.io/v2/swagger.json
73
+ ```
74
+
75
+ ### With Options
76
+ ```bash
77
+ pandoraspec https://api.example.com/spec.json --vendor "Stripe" --key "sk_live_..."
78
+ ```
79
+
80
+ ### Local File
81
+ ```bash
82
+ pandoraspec ./openapi.yaml
83
+ ```
84
+
85
+ ---
86
+
87
+ ## 🏎️ Zero-Config Testing (DORA Compliance)
88
+
89
+ For standard **DORA compliance**, you simply need to verify that your API implementation matches its specification. **No configuration is required.**
90
+
91
+ ```bash
92
+ pandoraspec https://petstore.swagger.io/v2/swagger.json
93
+ ```
94
+
95
+ This runs a **fuzzing** audit where random data is generated based on your schema types (e.g., sending random integers for IDs).
96
+ - **Value:** This is sufficient to prove that your API correctly handles unexpected inputs and adheres to the basic contract (e.g., returning 400 Bad Request instead of 500 Server Error).
97
+ - **Limitation:** Detailed business logic requiring valid IDs (e.g., `GET /user/{id}` where `{id}` must exist) may return `404 Not Found`. This is acceptable for a compliance scan but may not fully exercise deeper code paths.
98
+
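In spirit, the zero-config pass generates type-appropriate junk from the schema and treats any 5xx as a failure. A minimal sketch of that idea (illustrative only — the actual audit delegates data generation to schemathesis; `fuzz_value` and `is_graceful` are hypothetical helpers):

```python
import random
import string

def fuzz_value(schema_type: str):
    """Generate a random value for a basic OpenAPI schema type."""
    if schema_type == "integer":
        return random.randint(-10**6, 10**6)
    if schema_type == "number":
        return random.uniform(-1e6, 1e6)
    if schema_type == "boolean":
        return random.choice([True, False])
    # Fallback: a random short string
    return "".join(random.choices(string.ascii_letters, k=8))

def is_graceful(status_code: int) -> bool:
    """A 4xx on junk input upholds the contract; a 5xx is a resilience gap."""
    return status_code < 500
```

Under this rule a `404` for a random ID still counts as a pass: the contract was respected even though no deeper code path was exercised.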
99
+ ---
100
+
101
+ ## 🧠 Advanced Testing with Seed Data
102
+
103
+ To test **specific business workflows** (e.g., successfully retrieving a user profile), you can provide "Seed Data". This tells PanDoraSpec to use known, valid values instead of random fuzzing data.
104
+
105
+ ```bash
106
+ pandoraspec https://petstore.swagger.io/v2/swagger.json --config seed_parameters.yaml
107
+ ```
108
+
109
+ ### Configuration Hierarchy
110
+ You can define seed values at three levels of specificity. The engine resolves values in this order: **Endpoints > Verbs > General**.
111
+
112
+ ```yaml
113
+ seed_data:
114
+ # 1. General: Applies to EVERYTHING (path params, query params, headers)
115
+ general:
116
+ username: "test_user"
117
+ limit: 50
118
+
119
+ # 2. Verbs: Applies only to specific HTTP methods (Overwrites General)
120
+ verbs:
121
+ POST:
122
+ username: "admin_user" # Creation requests use a different user
123
+
124
+ # 3. Endpoints: Applies only to specific routes (Overwrites Everything)
125
+ endpoints:
126
+ /users/me:
127
+ GET:
128
+ limit: 10
129
+ ```
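The resolution order above can be sketched as a plain dictionary lookup (a hypothetical `resolve_seed` helper, not the actual engine code):

```python
def resolve_seed(seed_data: dict, path: str, verb: str, param: str):
    """Resolve a seed value with specificity: Endpoints > Verbs > General."""
    # 1. Endpoint-level (most specific)
    endpoint_cfg = seed_data.get("endpoints", {}).get(path, {}).get(verb.upper(), {})
    if param in endpoint_cfg:
        return endpoint_cfg[param]
    # 2. Verb-level
    verb_cfg = seed_data.get("verbs", {}).get(verb.upper(), {})
    if param in verb_cfg:
        return verb_cfg[param]
    # 3. General fallback
    return seed_data.get("general", {}).get(param)
```

With the YAML above, a `POST` anywhere resolves `username` to `"admin_user"`, while `GET /users/me` resolves `limit` to `10` and everything else falls back to the `general` block.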
130
+
131
+ ### 🔗 Dynamic Seed Data (Chaining Requests)
132
+ You can even test **dependency chains** where one endpoint requires data from another (e.g., getting a User ID from a search result in order to query that user's profile).
133
+
134
+ **Supported Features:**
135
+ - **Dynamic Resolution:** Fetch a value from another API call before running the test.
136
+ - **Extraction:** Extract values from JSON responses or plain text.
137
+ - **Parameter Interpolation:** Use `{param}` in the dependency URL to chain multiple steps.
138
+
139
+ ```yaml
140
+ endpoints:
141
+ /user/{username}:
142
+ GET:
143
+ username:
144
+ # 1. Fetch the user list first
145
+ # 2. Extract the 'username' field from the response
146
+ from_endpoint: "GET /users/search?role=admin"
147
+ extract: "data.items.0.username"
148
+
149
+ /orders/{orderId}:
150
+ GET:
151
+ orderId:
152
+ # 1. Use the {userId} from our general seeds
153
+ # 2. Call /users/{userId}/latest-order
154
+ # 3. Extract the ID using Regex from a message string
155
+ from_endpoint: "GET /users/{userId}/latest-order"
156
+ extract: "message"
157
+ regex: "Order ID: ([0-9]+)"
158
+ ```
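Both extraction styles in the example — a dot-path into JSON (`data.items.0.username`) and a regex over a text field — can be sketched as follows (a hypothetical helper, written under the assumption that numeric path segments index into arrays):

```python
import re
from typing import Optional

def extract_value(body, path: str, pattern: Optional[str] = None):
    """Walk a dot-path like 'data.items.0.username' through parsed JSON;
    optionally apply a regex and return its first capture group."""
    value = body
    for segment in path.split("."):
        if isinstance(value, list):
            value = value[int(segment)]  # numeric segments index arrays
        else:
            value = value[segment]
    if pattern is not None:
        match = re.search(pattern, str(value))
        return match.group(1) if match else None
    return value
```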
159
+
160
+ ---
161
+
162
+ ## 🛡️ What It Checks
163
+
164
+ ### Module A: The Integrity Test (Drift)
165
+ Checks if your API implementation matches your documentation.
166
+ - **Why?** DORA requires you to monitor whether the service effectively supports your critical functions. If the API behaves differently from its documentation, that is a risk.
167
+
168
+ ### Module B: The Resilience Test
169
+ Stress tests the API to ensure it handles invalid inputs gracefully (`4xx` vs `5xx`).
170
+ - **Why?** DORA Article 25 calls for "Digital operational resilience testing".
171
+
172
+ ### Module C: Security Hygiene
173
+ Checks baseline security controls: TLS enforcement and credential leakage in URL paths.
174
+
175
+ ### Module D: The Report
176
+ Generates a PDF report: **"DORA ICT Third-Party Technical Risk Assessment"**.
177
+
178
+ ---
179
+
180
+ ## 📄 License
181
+
182
+ MIT
@@ -0,0 +1,12 @@
1
+ README.md
2
+ pyproject.toml
3
+ pandoraspec/__init__.py
4
+ pandoraspec/cli.py
5
+ pandoraspec/core.py
6
+ pandoraspec/reporting.py
7
+ pandoraspec.egg-info/PKG-INFO
8
+ pandoraspec.egg-info/SOURCES.txt
9
+ pandoraspec.egg-info/dependency_links.txt
10
+ pandoraspec.egg-info/entry_points.txt
11
+ pandoraspec.egg-info/requires.txt
12
+ pandoraspec.egg-info/top_level.txt
@@ -0,0 +1,2 @@
1
+ [console_scripts]
2
+ pandoraspec = pandoraspec.cli:main
@@ -0,0 +1,6 @@
1
+ schemathesis==4.9.1
2
+ typer[all]
3
+ rich
4
+ weasyprint
5
+ jinja2
6
+ requests
@@ -0,0 +1 @@
1
+ pandoraspec
@@ -0,0 +1,29 @@
1
+ [build-system]
2
+ requires = ["setuptools", "wheel"]
3
+ build-backend = "setuptools.build_meta"
4
+
5
+ [project]
6
+ name = "pandoraspec"
7
+ version = "0.2.0"
8
+ description = "DORA Compliance Auditor for OpenAPI Specs"
9
+ readme = "README.md"
10
+ authors = [{ name = "Ulises Merlan", email = "ulimerlan@gmail.com" }]
11
+ license = { text = "MIT" }
12
+ requires-python = ">=3.9"
13
+ dependencies = [
14
+ "schemathesis==4.9.1",
15
+ "typer[all]",
16
+ "rich",
17
+ "weasyprint",
18
+ "jinja2",
19
+ "requests"
20
+ ]
21
+
22
+ [project.scripts]
23
+ # This maps the terminal command 'pandoraspec' to your Python code
24
+ pandoraspec = "pandoraspec.cli:main"
25
+
26
+ [tool.setuptools.packages.find]
27
+ where = ["."]
28
+ include = ["pandoraspec*"]
29
+ exclude = ["tests*"]
@@ -0,0 +1,4 @@
1
+ [egg_info]
2
+ tag_build =
3
+ tag_date = 0
4
+