reuse_api-0.1.0-py3-none-any.whl

reuse_api-0.1.0.dist-info/METADATA ADDED
@@ -0,0 +1,180 @@
+ Metadata-Version: 2.4
+ Name: reuse-api
+ Version: 0.1.0
+ Summary: OpenAPI AI Readiness Scorecard & Sandbox — score and simulate your APIs for agentic use
+ Project-URL: Homepage, https://sheepseb.github.io/cli-jentic
+ Project-URL: Repository, https://github.com/SheepSeb/cli-jentic
+ Project-URL: Bug Tracker, https://github.com/SheepSeb/cli-jentic/issues
+ Author: SheepSeb
+ License: MIT
+ Keywords: agentic,agents,ai,api,llm,mock-server,openapi,sandbox,scorecard
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Environment :: Console
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Topic :: Internet :: WWW/HTTP
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: Utilities
+ Requires-Python: >=3.11
+ Requires-Dist: flask>=3
+ Requires-Dist: openapi-spec-validator>=0.7
+ Requires-Dist: pydantic>=2
+ Requires-Dist: pyyaml>=6
+ Requires-Dist: requests>=2.31
+ Requires-Dist: typer[all]>=0.9
+ Description-Content-Type: text/markdown
+
+ # api-scorecard
+
+ A CLI tool that scores OpenAPI specs for AI-readiness across 6 dimensions — giving APIs a letter grade and actionable recommendations so they work well with AI agents and LLM tooling.
+
+ ## Installation
+
+ ```bash
+ # With uv (recommended)
+ uv pip install -e .
+
+ # Or with pip
+ pip install -e .
+ ```
+
+ ## Usage
+
+ ### Scorecard
+
+ ```bash
+ # Score an OpenAPI spec
+ api-scorecard score path/to/openapi.yaml
+
+ # Output as JSON
+ api-scorecard score path/to/openapi.yaml --json
+
+ # Run against the built-in sample spec to see how scoring works
+ api-scorecard demo
+ api-scorecard demo --json
+ ```
+
+ ### Sandbox
+
+ The sandbox spins up a local mock server from your spec and auto-probes every endpoint, reporting a **feasibility score** — how well the spec can actually be exercised by an agent.
+
+ ```bash
+ # Start a persistent mock server (press Ctrl+C to stop)
+ api-scorecard sandbox start path/to/openapi.yaml
+ api-scorecard sandbox start path/to/openapi.yaml --port 9000
+
+ # Probe all endpoints and get a report
+ api-scorecard sandbox probe path/to/openapi.yaml
+ api-scorecard sandbox probe path/to/openapi.yaml --json
+
+ # Run against the built-in sample spec
+ api-scorecard sandbox demo
+ api-scorecard sandbox demo --json
+ ```
+
+ The `probe` command:
+ 1. Starts an internal mock server
+ 2. Generates synthetic request payloads and path parameters from the spec's schemas (see the example below)
+ 3. Fires requests at every operation
+ 4. Reports per-endpoint status codes, response times, and any issues found
+ 5. Exits with code `2` if the feasibility score is below 50%
+
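+ For example, given the bundled sample spec's `CreateTask` schema, step 2 produces a POST body like `{"title": "string", "done": true}`, with placeholder values derived from each property's type.
+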
+ ### Exit codes
+
+ | Code | Meaning |
+ |------|---------|
+ | `0` | Success / score ≥ threshold |
+ | `1` | Error reading or parsing the spec |
+ | `2` | Score below threshold (< 60 for scorecard, < 50% for sandbox) |
+
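+ A minimal sketch of wiring these codes into a CI gate (the spec filename is assumed):
+
+ ```python
+ import subprocess
+ import sys
+
+ # The exit status carries the verdict (see the table above).
+ proc = subprocess.run(["api-scorecard", "score", "openapi.yaml"])
+ if proc.returncode == 1:
+     sys.exit("could not read or parse the spec")
+ if proc.returncode == 2:
+     sys.exit("score below the threshold of 60")
+ ```
+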
+ ## What gets scored
+
+ Each spec is evaluated across 6 weighted dimensions:
+
+ | Dimension | Weight | What it checks |
+ |-----------|--------|----------------|
+ | **Foundational Compliance** | 20% | OpenAPI version, `info` object, paths defined, spec validity |
+ | **Developer Experience** | 15% | Operation IDs, summaries, request/response examples, parameter descriptions |
+ | **AI-Readiness & Agent Experience** | 20% | Meaningful descriptions, error responses (4xx/5xx), response schemas, schema property descriptions |
+ | **Agent Usability** | 20% | Semantic operation IDs, tag usage, parameter schemas, request body documentation |
+ | **Security & Governance** | 15% | Security schemes defined, operations secured, auth documentation |
+ | **AI Discoverability** | 10% | API-level description, tags defined, external docs |
+
+ ### Grading
+
+ | Score | Grade |
+ |-------|-------|
+ | 90–100 | A |
+ | 80–89 | B |
+ | 70–79 | C |
+ | 60–69 | D |
+ | < 60 | F |
+
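+ A sketch of how the weighted overall score and letter grade plausibly combine (weights from the table above; the per-dimension scores here are invented):
+
+ ```python
+ WEIGHTS = {
+     "Foundational Compliance": 0.20,
+     "Developer Experience": 0.15,
+     "AI-Readiness & Agent Experience": 0.20,
+     "Agent Usability": 0.20,
+     "Security & Governance": 0.15,
+     "AI Discoverability": 0.10,
+ }
+ scores = {name: 50.0 for name in WEIGHTS}  # invented per-dimension scores
+
+ overall = sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)  # 0-100
+ grade = next(g for cutoff, g in [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]
+              if overall >= cutoff)
+ print(f"{overall:.1f} -> {grade}")  # 50.0 -> F
+ ```
+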
+ ## Example output
+
+ ```
+ ╭─ API AI Readiness Scorecard ───────────────────────────────────╮
+ │ Task Manager API v1.0.0                                         │
+ │ sample_spec.yaml                                                │
+ │                                                                 │
+ │ Overall Score: 42.3/100    Grade: F                             │
+ ╰─────────────────────────────────────────────────────────────────╯
+
+ ╭──────────────────────────────────┬───────────┬───────┬──────────────────────────┬────────╮
+ │ Dimension                        │ Score     │ Grade │ Progress                 │ Issues │
+ ├──────────────────────────────────┼───────────┼───────┼──────────────────────────┼────────┤
+ │ Foundational Compliance          │ 75.0/100  │ C     │ ███████████████░░░░░     │ 1      │
+ │ Developer Experience             │ 38.5/100  │ F     │ ███████░░░░░░░░░░░░░     │ 3      │
+ │ AI-Readiness & Agent Experience  │ 31.2/100  │ F     │ ██████░░░░░░░░░░░░░░     │ 4      │
+ │ ...                              │ ...       │ ...   │ ...                      │ ...    │
+ ╰──────────────────────────────────┴───────────┴───────┴──────────────────────────┴────────╯
+
+ Issues & Recommendations
+
+ AI-Readiness & Agent Experience
+   x 4/6 operations lack descriptive intent (need >30 char description)
+       paths.*.*.description
+   ! 5/6 operations have no documented error responses (4xx/5xx) — agents cannot handle failure gracefully
+       paths.*.*.responses
+ ```
+
+ ## JSON output
+
+ Pass `--json` to get a machine-readable report:
+
+ ```bash
+ api-scorecard score openapi.yaml --json | jq '.overall_score'
+ ```
+
+ ```json
+ {
+   "api_name": "Task Manager API",
+   "api_version": "1.0.0",
+   "spec_path": "openapi.yaml",
+   "overall_score": 42.3,
+   "grade": "F",
+   "dimensions": [
+     {
+       "name": "Foundational Compliance",
+       "score": 75.0,
+       "grade": "C",
+       "issues": [...]
+     }
+   ]
+ }
+ ```
+
+ ## Development
+
+ ```bash
+ # Install in editable mode
+ uv pip install -e .
+
+ # Run directly without installing
+ python main.py score path/to/spec.yaml
+ ```
reuse_api-0.1.0.dist-info/RECORD ADDED
@@ -0,0 +1,25 @@
+ sandbox/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ sandbox/cli.py,sha256=Jl9CmFNt4capq3xoyNnZX_eGPQunue2GuwODq9qpVSw,4959
+ sandbox/generator.py,sha256=qlmYRD2lRHwVxtqv6OEDXjARkmK14isYxGLOd7AQuWE,3325
+ sandbox/models.py,sha256=Iur8o1GzXucDmOlDACU2W7B3KlvvKgJcuwcldtcophY,1098
+ sandbox/prober.py,sha256=8LaT3bvghRHgXiGMJSCUNKw9--fbN9zWjmP3NOQK1g0,7530
+ sandbox/report.py,sha256=gJzSH05tHVy5dooiF6QqXfzSlGVvW5djDwFr7Xq38FM,5369
+ sandbox/server.py,sha256=rwZWvxcf13TkxjZz3NXzVWVDVVKo2oo5ng-e80HRk7w,5022
+ scorecard/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ scorecard/cli.py,sha256=2Uk0THggLszo1C5Xw4BdN0xNPNXyDp2QZZjXYKtHxE4,2099
+ scorecard/models.py,sha256=LaQ1J3Yx09UHK8a6iM9G94glTt3XPjyr75Vnckstdc0,890
+ scorecard/parser.py,sha256=bx1mO6pxFNOGyCsR6To--5MoEzsUKTLT7EFxwLUZ0G4,511
+ scorecard/report.py,sha256=JS2mP2eO3BwgVxbRX_DcJRT7WjPFgQXmDJDZhowDC_o,3396
+ scorecard/scorer.py,sha256=2gI1d13gZR1rRJpUjr5-kk5xz_Rqtq7634asQmgP4Bk,1199
+ scorecard/dimensions/__init__.py,sha256=1LX_26nfxvNZTrbSzMVvm1h-AZIhpHpnMKMPHlHgavs,1089
+ scorecard/dimensions/agent_usability.py,sha256=WLezYSAtKkANV_4LrEnUdxS9inAwUaxp3YML3xIljIY,3444
+ scorecard/dimensions/ai_readiness.py,sha256=3mY2W4WA9d09u4nJxTeWsJWCB8trahjdpsAVvUEHEOI,3774
+ scorecard/dimensions/developer_experience.py,sha256=_r3BvdQ1z2t3CesaNrvV7SempjEejWdDvG7MPLBfFQo,4025
+ scorecard/dimensions/discoverability.py,sha256=z_7zhTbnJl3xjbU5o4kpU0T03A7kFlr0OLVXdLuWICc,4263
+ scorecard/dimensions/foundational.py,sha256=EDrHa90wAFD75YwlTQrdi376MrI6E7P65dhKNuOUNF4,2189
+ scorecard/dimensions/security.py,sha256=WZiqEKDfmOmomc6-lTJr3EK8_CuB1vfS4Sl7Kd1RZY8,3556
+ sandbox/data/sample_spec.yaml,sha256=MD6LXv2X2XccZ6KbjEh_8B19kHw-wohKYSgQ_Fz1H0Q,4208
+ reuse_api-0.1.0.dist-info/METADATA,sha256=RT3joQGqtyz4kC9ajFjIxFx1rVr_J6OoE1xLB2tVRZI,6961
+ reuse_api-0.1.0.dist-info/WHEEL,sha256=QccIxa26bgl1E6uMy58deGWi-0aeIkkangHcxk2kWfw,87
+ reuse_api-0.1.0.dist-info/entry_points.txt,sha256=4U9qCI7DkIw2pTJyJlgbs8hGsCOFDQQdO3MoQV4Y_UE,52
+ reuse_api-0.1.0.dist-info/RECORD,,
reuse_api-0.1.0.dist-info/WHEEL ADDED
@@ -0,0 +1,4 @@
+ Wheel-Version: 1.0
+ Generator: hatchling 1.29.0
+ Root-Is-Purelib: true
+ Tag: py3-none-any
reuse_api-0.1.0.dist-info/entry_points.txt ADDED
@@ -0,0 +1,2 @@
+ [console_scripts]
+ api-scorecard = scorecard.cli:app
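The console script maps `api-scorecard` to the Typer app in `scorecard.cli`, which is not shown in this diff. For the `sandbox` subcommands in the README to resolve, that app presumably mounts the sandbox group, along these lines:

```python
# Hypothetical sketch of scorecard/cli.py wiring; only sandbox_app is confirmed by this diff.
import typer

from sandbox.cli import sandbox_app

app = typer.Typer(add_completion=False)
app.add_typer(sandbox_app)  # exposes `api-scorecard sandbox start|probe|demo`

if __name__ == "__main__":
    app()
```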
sandbox/__init__.py ADDED
File without changes
sandbox/cli.py ADDED
@@ -0,0 +1,155 @@
+ from __future__ import annotations
+
+ import logging
+ import socket
+ import threading
+ import time
+ from pathlib import Path
+
+ import typer
+ from typing_extensions import Annotated
+ from werkzeug.serving import make_server
+
+ from scorecard.parser import load_spec
+ from .server import create_app
+ from .prober import probe_all
+ from .models import SandboxReport
+ from . import report as reporter
+
+ sandbox_app = typer.Typer(
+     name="sandbox",
+     help="Run a mock server from an OpenAPI spec and probe its endpoints.",
+     add_completion=False,
+ )
+
+ _SAMPLE_SPEC = Path(__file__).parent / "data" / "sample_spec.yaml"
+
+ DEFAULT_HOST = "127.0.0.1"
+ DEFAULT_PORT = 8765
+
+
+ def _find_free_port(preferred: int) -> int:
+     """Return `preferred` if it is bindable, otherwise an OS-assigned free port."""
+     with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+         s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+         try:
+             s.bind((DEFAULT_HOST, preferred))
+             return preferred
+         except OSError:
+             s.bind((DEFAULT_HOST, 0))
+             return s.getsockname()[1]
+
+
+ def _wait_for_server(host: str, port: int, timeout: float = 5.0) -> bool:
+     """Poll until the server accepts TCP connections or the timeout elapses."""
+     deadline = time.monotonic() + timeout
+     while time.monotonic() < deadline:
+         try:
+             with socket.create_connection((host, port), timeout=0.2):
+                 return True
+         except OSError:
+             time.sleep(0.1)
+     return False
+
+
+ def _load(spec_path: str) -> dict:
+     """Load and parse the spec, exiting with code 1 on failure."""
+     try:
+         return load_spec(spec_path)
+     except FileNotFoundError as exc:
+         typer.echo(f"Error: {exc}", err=True)
+         raise typer.Exit(1)
+     except Exception as exc:
+         typer.echo(f"Error parsing spec: {exc}", err=True)
+         raise typer.Exit(1)
+
+
+ @sandbox_app.command()
+ def start(
+     spec: Annotated[str, typer.Argument(help="Path to OpenAPI spec file (JSON or YAML)")],
+     port: Annotated[int, typer.Option("--port", "-p", help="Port to listen on")] = DEFAULT_PORT,
+ ) -> None:
+     """Start a local mock server from an OpenAPI spec. Press Ctrl+C to stop."""
+     spec_dict = _load(spec)
+     port = _find_free_port(port)
+     base_url = f"http://{DEFAULT_HOST}:{port}"
+
+     app = create_app(spec_dict)
+
+     reporter.console.print("\n[bold bright_magenta]Sandbox Mock Server[/]")
+     reporter.console.print(f" Spec: [dim]{spec}[/]")
+     reporter.console.print(f" URL: [bold]{base_url}[/]\n")
+     reporter.print_endpoints(spec_dict, base_url)
+     reporter.console.print("[dim]Press Ctrl+C to stop.[/]\n")
+
+     # Quiet werkzeug's per-request access log.
+     logging.getLogger("werkzeug").setLevel(logging.ERROR)
+
+     srv = make_server(DEFAULT_HOST, port, app)
+     try:
+         srv.serve_forever()
+     except KeyboardInterrupt:
+         reporter.console.print("\n[dim]Server stopped.[/]")
+
+
+ def _probe(spec: str, port: int, as_json: bool) -> None:
+     """Run the mock server in a background thread, probe every endpoint, and report."""
+     spec_dict = _load(spec)
+     port = _find_free_port(port)
+     base_url = f"http://{DEFAULT_HOST}:{port}"
+
+     flask_app = create_app(spec_dict)
+
+     logging.getLogger("werkzeug").setLevel(logging.ERROR)
+
+     srv = make_server(DEFAULT_HOST, port, flask_app)
+     server_thread = threading.Thread(target=srv.serve_forever, daemon=True)
+     server_thread.start()
+
+     if not _wait_for_server(DEFAULT_HOST, port):
+         typer.echo("Error: mock server failed to start", err=True)
+         raise typer.Exit(1)
+
+     try:
+         if not as_json:
+             reporter.console.print(f"[dim]Mock server started at {base_url}. Probing endpoints...[/]\n")
+         results = probe_all(spec_dict, base_url)
+     finally:
+         srv.shutdown()
+
+     info = spec_dict.get("info") or {}
+     sandbox_report = SandboxReport(
+         api_name=info.get("title") or "Unknown API",
+         api_version=str(info.get("version") or "unknown"),
+         spec_path=spec,
+         base_url=base_url,
+         total_endpoints=len(results),
+         results=results,
+     )
+
+     if as_json:
+         reporter.print_json(sandbox_report)
+     else:
+         reporter.print_report(sandbox_report)
+
+     if sandbox_report.feasibility_score < 50:
+         raise typer.Exit(2)
+
+
+ @sandbox_app.command()
+ def probe(
+     spec: Annotated[str, typer.Argument(help="Path to OpenAPI spec file (JSON or YAML)")],
+     port: Annotated[int, typer.Option("--port", "-p", help="Port for the internal mock server")] = DEFAULT_PORT,
+     json: Annotated[bool, typer.Option("--json", help="Output results as JSON")] = False,
+ ) -> None:
+     """Auto-probe all endpoints against an internal mock server and show a feasibility report."""
+     _probe(spec=spec, port=port, as_json=json)
+
+
+ @sandbox_app.command()
+ def demo(
+     json: Annotated[bool, typer.Option("--json", help="Output results as JSON")] = False,
+ ) -> None:
+     """Probe the built-in sample spec (shows typical sandbox results)."""
+     if not _SAMPLE_SPEC.exists():
+         typer.echo("Error: sample_spec.yaml not found.", err=True)
+         raise typer.Exit(1)
+     typer.echo(f"Running sandbox probe on built-in sample spec: {_SAMPLE_SPEC}\n")
+     _probe(spec=str(_SAMPLE_SPEC), port=DEFAULT_PORT, as_json=json)
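A minimal sketch (not part of the package) of exercising these commands with Typer's test runner; exit code `2` signals a feasibility score below 50%:

```python
from typer.testing import CliRunner

from sandbox.cli import sandbox_app

runner = CliRunner()

# `demo` probes the bundled sample spec; 0 and 2 are the documented outcomes.
result = runner.invoke(sandbox_app, ["demo", "--json"])
assert result.exit_code in (0, 2)
```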
sandbox/data/sample_spec.yaml ADDED
@@ -0,0 +1,163 @@
+ openapi: "3.0.3"
+ info:
+   title: "Task Manager API"
+   version: "1.0.0"
+   # Intentionally missing description
+
+ paths:
+   /tasks:
+     get:
+       operationId: "listTasks"
+       summary: "List all tasks"
+       description: "Returns a paginated list of all tasks belonging to the current user."
+       tags: ["tasks"]
+       responses:
+         "200":
+           description: "A list of tasks"
+           content:
+             application/json:
+               schema:
+                 type: array
+                 items:
+                   $ref: "#/components/schemas/Task"
+         # Intentionally missing 4xx/5xx responses
+
+     post:
+       # Intentionally missing operationId
+       # Intentionally missing summary and description
+       # Intentionally missing tags
+       requestBody:
+         required: true
+         content:
+           application/json:
+             schema:
+               $ref: "#/components/schemas/CreateTask"
+             # Intentionally missing example
+       responses:
+         "201":
+           description: "Task created successfully"
+         "400":
+           description: "Invalid input"
+
+   /tasks/{taskId}:
+     get:
+       operationId: "getTask"
+       summary: "Get a task by ID"
+       description: "Retrieves a specific task by its unique identifier."
+       tags: ["tasks"]
+       parameters:
+         - name: taskId
+           in: path
+           required: true
+           # Intentionally missing description
+           schema:
+             type: string
+       responses:
+         "200":
+           description: "The requested task"
+           content:
+             application/json:
+               schema:
+                 $ref: "#/components/schemas/Task"
+               example:
+                 id: "abc123"
+                 title: "Buy groceries"
+                 done: false
+                 createdAt: "2024-01-15T10:30:00Z"
+         "404":
+           description: "Task not found"
+
+     put:
+       operationId: "updateTask"
+       # Intentionally missing summary and description
+       # Intentionally missing tags
+       parameters:
+         - name: taskId
+           in: path
+           required: true
+           schema:
+             type: string
+       requestBody:
+         required: true
+         content:
+           application/json:
+             schema:
+               $ref: "#/components/schemas/CreateTask"
+       responses:
+         "200":
+           description: "Task updated"
+         # Intentionally missing error responses
+
+     delete:
+       # Intentionally missing operationId, summary, description, tags
+       parameters:
+         - name: taskId
+           in: path
+           required: true
+           schema:
+             type: string
+       responses:
+         "204":
+           description: "Task deleted"
+         # Intentionally missing 404
+
+   /users:
+     get:
+       operationId: "op1" # Intentionally bad operationId
+       # Intentionally missing summary, description, tags
+       responses:
+         "200":
+           description: "OK"
+           # Intentionally missing schema
+
+   /health:
+     get:
+       operationId: "healthCheck"
+       summary: "Health check"
+       description: "Returns the current health status of the API service."
+       tags: ["system"]
+       responses:
+         "200":
+           description: "Service is healthy"
+           content:
+             application/json:
+               schema:
+                 type: object
+                 properties:
+                   status:
+                     type: string
+                     example: "ok"
+
+ # Intentionally no security schemes defined
+ # Intentionally no global security
+
+ components:
+   schemas:
+     Task:
+       type: object
+       properties:
+         id:
+           type: string
+           # Intentionally missing description
+         title:
+           type: string
+           description: "The task title"
+         done:
+           type: boolean
+           # Intentionally missing description
+         createdAt:
+           type: string
+           format: date-time
+           # Intentionally missing description
+
+     CreateTask:
+       type: object
+       required:
+         - title
+       properties:
+         title:
+           type: string
+           # Intentionally missing description
+         done:
+           type: boolean
+           # Intentionally missing description
sandbox/generator.py ADDED
@@ -0,0 +1,107 @@
+ """Generate synthetic values from JSON Schema definitions."""
+ from __future__ import annotations
+ from typing import Any
+
+
+ _STRING_FORMAT_DEFAULTS: dict[str, Any] = {
+     "date-time": "2024-01-15T10:30:00Z",
+     "date": "2024-01-15",
+     "time": "10:30:00",
+     "email": "user@example.com",
+     "uri": "https://example.com",
+     "uuid": "123e4567-e89b-12d3-a456-426614174000",
+     "hostname": "example.com",
+     "ipv4": "127.0.0.1",
+     "password": "s3cr3t",
+     "byte": "dGVzdA==",
+     "binary": "binary-data",
+ }
+
+
+ def resolve_ref(ref: str, spec: dict) -> dict:
+     """Resolve a local $ref like '#/components/schemas/Task'."""
+     if not ref.startswith("#/"):
+         return {}
+     parts = ref[2:].split("/")
+     obj: Any = spec
+     for part in parts:
+         if not isinstance(obj, dict):
+             return {}
+         obj = obj.get(part, {})
+     return obj if isinstance(obj, dict) else {}
+
+
+ def generate(schema: dict, spec: dict, _depth: int = 0) -> Any:
+     """Recursively generate a synthetic value that matches the given JSON Schema."""
+     if _depth > 6:  # guard against runaway recursion (e.g. circular $refs)
+         return None
+
+     # Resolve $ref
+     if "$ref" in schema:
+         schema = resolve_ref(schema["$ref"], spec)
+
+     # Inline example wins
+     if "example" in schema:
+         return schema["example"]
+
+     # allOf / anyOf / oneOf — use first branch (simplification: allOf is not merged)
+     for keyword in ("allOf", "anyOf", "oneOf"):
+         if keyword in schema:
+             branches = schema[keyword]
+             if branches:
+                 return generate(branches[0], spec, _depth + 1)
+
+     schema_type = schema.get("type")
+
+     if schema_type == "object" or "properties" in schema:
+         props = schema.get("properties") or {}
+         required = set(schema.get("required") or [])
+         result: dict[str, Any] = {}
+         for name, prop_schema in props.items():
+             # Fill every property at the top level; below that, required ones only.
+             if name in required or _depth == 0:
+                 result[name] = generate(prop_schema, spec, _depth + 1)
+         return result
+
+     if schema_type == "array":
+         items = schema.get("items") or {}
+         return [generate(items, spec, _depth + 1)]
+
+     if schema_type == "string":
+         if "enum" in schema:
+             return schema["enum"][0]
+         fmt = schema.get("format", "")
+         return _STRING_FORMAT_DEFAULTS.get(fmt, schema.get("default", "string"))
+
+     if schema_type == "integer":
+         return schema.get("default", schema.get("minimum", 1))
+
+     if schema_type == "number":
+         return schema.get("default", schema.get("minimum", 1.0))
+
+     if schema_type == "boolean":
+         return schema.get("default", True)
+
+     if schema_type == "null":
+         return None
+
+     return schema.get("default")
+
+
+ def generate_path_param(param: dict, spec: dict) -> str:
+     """Generate a URL-safe value for a path parameter."""
+     if "example" in param:
+         return str(param["example"])
+     schema = param.get("schema") or {}
+     if "$ref" in schema:
+         schema = resolve_ref(schema["$ref"], spec)
+     if "example" in schema:
+         return str(schema["example"])
+     if "enum" in schema:
+         return str(schema["enum"][0])
+     fmt = schema.get("format", "")
+     if fmt == "uuid":
+         return "123e4567-e89b-12d3-a456-426614174000"
+     param_type = schema.get("type", "string")
+     if param_type == "integer":
+         return "1"
+     return f"example-{param.get('name', 'id')}"
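A quick sketch of the generator against the bundled sample spec (assumes the spec path resolves from the working directory):

```python
from scorecard.parser import load_spec
from sandbox.generator import generate, generate_path_param

spec = load_spec("sandbox/data/sample_spec.yaml")

# The top-level call fills every property; nested calls fill required ones only.
print(generate({"$ref": "#/components/schemas/Task"}, spec))
# {'id': 'string', 'title': 'string', 'done': True, 'createdAt': '2024-01-15T10:30:00Z'}

print(generate_path_param({"name": "taskId", "schema": {"type": "string"}}, spec))
# example-taskId
```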
sandbox/models.py ADDED
@@ -0,0 +1,46 @@
+ from __future__ import annotations
+ from typing import Any, Literal
+ from pydantic import BaseModel, computed_field
+
+
+ class ProbeIssue(BaseModel):
+     severity: Literal["error", "warning", "info"]
+     message: str
+
+
+ class ProbeResult(BaseModel):
+     path: str
+     method: str
+     status_code: int | None = None
+     success: bool
+     issues: list[ProbeIssue] = []
+     request_url: str
+     request_body: Any = None
+     response_body: Any = None
+     response_time_ms: float | None = None
+
+     @computed_field
+     @property
+     def label(self) -> str:
+         return f"{self.method.upper()} {self.path}"
+
+
+ class SandboxReport(BaseModel):
+     api_name: str
+     api_version: str
+     spec_path: str
+     base_url: str
+     total_endpoints: int
+     results: list[ProbeResult]
+
+     @computed_field
+     @property
+     def successful(self) -> int:
+         return sum(1 for r in self.results if r.success)
+
+     @computed_field
+     @property
+     def feasibility_score(self) -> float:
+         if not self.total_endpoints:
+             return 0.0
+         return round((self.successful / self.total_endpoints) * 100, 1)
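A small sketch of how `feasibility_score` falls out of probe results (values invented):

```python
from sandbox.models import ProbeResult, SandboxReport

results = [
    ProbeResult(path="/tasks", method="get", status_code=200, success=True,
                request_url="http://127.0.0.1:8765/tasks"),
    ProbeResult(path="/tasks", method="post", status_code=500, success=False,
                request_url="http://127.0.0.1:8765/tasks"),
]
report = SandboxReport(
    api_name="Task Manager API",
    api_version="1.0.0",
    spec_path="sample_spec.yaml",
    base_url="http://127.0.0.1:8765",
    total_endpoints=len(results),
    results=results,
)
print(report.feasibility_score)  # 50.0, right at the probe command's exit-code threshold
```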