reuse-api 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,37 @@
+ name: Deploy GitHub Pages
+
+ on:
+   push:
+     branches:
+       - web
+
+ permissions:
+   contents: read
+   pages: write
+   id-token: write
+
+ concurrency:
+   group: pages
+   cancel-in-progress: true
+
+ jobs:
+   deploy:
+     environment:
+       name: github-pages
+       url: ${{ steps.deployment.outputs.page_url }}
+     runs-on: ubuntu-latest
+     steps:
+       - name: Checkout
+         uses: actions/checkout@v4
+
+       - name: Setup Pages
+         uses: actions/configure-pages@v5
+
+       - name: Upload artifact
+         uses: actions/upload-pages-artifact@v3
+         with:
+           path: "."
+
+       - name: Deploy to GitHub Pages
+         id: deployment
+         uses: actions/deploy-pages@v4
@@ -0,0 +1,55 @@
+ name: Publish to PyPI
+
+ on:
+   push:
+     tags:
+       - "v*"
+
+ permissions:
+   contents: read
+
+ jobs:
+   build:
+     name: Build distribution
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+
+       - name: Install uv
+         uses: astral-sh/setup-uv@v5
+         with:
+           enable-cache: true
+
+       - name: Set up Python
+         run: uv python install 3.11
+
+       - name: Install dependencies
+         run: uv sync
+
+       - name: Build wheel and sdist
+         run: uv build
+
+       - name: Upload build artifacts
+         uses: actions/upload-artifact@v4
+         with:
+           name: dist
+           path: dist/
+
+   publish:
+     name: Publish to PyPI
+     needs: build
+     runs-on: ubuntu-latest
+     environment:
+       name: pypi
+       url: https://pypi.org/project/reuse-api/
+     steps:
+       - name: Download build artifacts
+         uses: actions/download-artifact@v4
+         with:
+           name: dist
+           path: dist/
+
+       - name: Publish to PyPI
+         uses: pypa/gh-action-pypi-publish@release/v1
+         with:
+           password: ${{ secrets.PYPI_TOKEN }}
@@ -0,0 +1,10 @@
+ # Python-generated files
+ __pycache__/
+ *.py[oc]
+ build/
+ dist/
+ wheels/
+ *.egg-info
+
+ # Virtual environments
+ .venv
@@ -0,0 +1 @@
+ 3.11
@@ -0,0 +1,180 @@
+ Metadata-Version: 2.4
+ Name: reuse-api
+ Version: 0.1.0
+ Summary: OpenAPI AI Readiness Scorecard & Sandbox — score and simulate your APIs for agentic use
+ Project-URL: Homepage, https://sheepseb.github.io/cli-jentic
+ Project-URL: Repository, https://github.com/SheepSeb/cli-jentic
+ Project-URL: Bug Tracker, https://github.com/SheepSeb/cli-jentic/issues
+ Author: SheepSeb
+ License: MIT
+ Keywords: agentic,agents,ai,api,llm,mock-server,openapi,sandbox,scorecard
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Environment :: Console
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Topic :: Internet :: WWW/HTTP
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: Utilities
+ Requires-Python: >=3.11
+ Requires-Dist: flask>=3
+ Requires-Dist: openapi-spec-validator>=0.7
+ Requires-Dist: pydantic>=2
+ Requires-Dist: pyyaml>=6
+ Requires-Dist: requests>=2.31
+ Requires-Dist: typer[all]>=0.9
+ Description-Content-Type: text/markdown
+
+ # api-scorecard
+
+ A CLI tool that scores OpenAPI specs for AI-readiness across 6 dimensions — giving APIs a letter grade and actionable recommendations so they work well with AI agents and LLM tooling.
+
+ ## Installation
+
+ ```bash
+ # With uv (recommended)
+ uv pip install -e .
+
+ # Or with pip
+ pip install -e .
+ ```
+
+ ## Usage
+
+ ### Scorecard
+
+ ```bash
+ # Score an OpenAPI spec
+ api-scorecard score path/to/openapi.yaml
+
+ # Output as JSON
+ api-scorecard score path/to/openapi.yaml --json
+
+ # Run against the built-in sample spec to see how scoring works
+ api-scorecard demo
+ api-scorecard demo --json
+ ```
+
+ ### Sandbox
+
+ The sandbox spins up a local mock server from your spec and auto-probes every endpoint, reporting a **feasibility score** — how well the spec can actually be exercised by an agent.
+
+ ```bash
+ # Start a persistent mock server (press Ctrl+C to stop)
+ api-scorecard sandbox start path/to/openapi.yaml
+ api-scorecard sandbox start path/to/openapi.yaml --port 9000
+
+ # Probe all endpoints and get a report
+ api-scorecard sandbox probe path/to/openapi.yaml
+ api-scorecard sandbox probe path/to/openapi.yaml --json
+
+ # Run against the built-in sample spec
+ api-scorecard sandbox demo
+ api-scorecard sandbox demo --json
+ ```
+
+ The `probe` command:
+ 1. Starts an internal mock server
+ 2. Generates synthetic request payloads and path parameters from the spec's schemas
+ 3. Fires requests at every operation
+ 4. Reports per-endpoint status codes, response times, and any issues found
+ 5. Exits with code `2` if the feasibility score is below 50%
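Step 2 of the probe flow, generating synthetic payloads from the spec's schemas, can be sketched roughly like this. The `synthesize` function and its simplified JSON Schema handling are illustrative assumptions, not the package's actual internals:

```python
def synthesize(schema: dict):
    """Generate a synthetic value from a (simplified) JSON Schema node."""
    if "example" in schema:
        return schema["example"]  # prefer a spec-provided example when present
    t = schema.get("type", "object")
    if t == "object":
        return {k: synthesize(v) for k, v in schema.get("properties", {}).items()}
    if t == "array":
        return [synthesize(schema.get("items", {"type": "string"}))]
    return {"string": "example", "integer": 1, "number": 1.0, "boolean": True}.get(t)

payload = synthesize({
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "done": {"type": "boolean"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
})
# payload == {"title": "example", "done": True, "tags": ["example"]}
```

A real implementation would also honor `enum`, `format`, and `required`, and fill path parameters the same way.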
+
+ ### Exit codes
+
+ | Code | Meaning |
+ |------|---------|
+ | `0` | Success / score ≥ threshold |
+ | `1` | Error reading or parsing the spec |
+ | `2` | Score below threshold (< 60 for scorecard, < 50% for sandbox) |
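The exit-code policy amounts to a small mapping. This is a sketch under the thresholds in the table; `exit_code` and its `parse_error` flag are hypothetical names, not the CLI's internals:

```python
def exit_code(score: float, threshold: float, parse_error: bool = False) -> int:
    """Map a run outcome to the documented exit codes."""
    if parse_error:
        return 1                          # could not read or parse the spec
    return 0 if score >= threshold else 2  # threshold is inclusive on success

# Scorecard threshold is 60; sandbox feasibility threshold is 50 (percent).
print(exit_code(72.5, 60))  # → 0
print(exit_code(42.3, 60))  # → 2
```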
+
+ ## What gets scored
+
+ Each spec is evaluated across 6 weighted dimensions:
+
+ | Dimension | Weight | What it checks |
+ |-----------|--------|----------------|
+ | **Foundational Compliance** | 20% | OpenAPI version, `info` object, paths defined, spec validity |
+ | **Developer Experience** | 15% | Operation IDs, summaries, request/response examples, parameter descriptions |
+ | **AI-Readiness & Agent Experience** | 20% | Meaningful descriptions, error responses (4xx/5xx), response schemas, schema property descriptions |
+ | **Agent Usability** | 20% | Semantic operation IDs, tag usage, parameter schemas, request body documentation |
+ | **Security & Governance** | 15% | Security schemes defined, operations secured, auth documentation |
+ | **AI Discoverability** | 10% | API-level description, tags defined, external docs |
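The weights combine into the overall score as a weighted average. A minimal sketch, assuming each dimension is scored on a 0-100 scale (the package's actual aggregation may differ in detail):

```python
WEIGHTS = {
    "Foundational Compliance": 0.20,
    "Developer Experience": 0.15,
    "AI-Readiness & Agent Experience": 0.20,
    "Agent Usability": 0.20,
    "Security & Governance": 0.15,
    "AI Discoverability": 0.10,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights cover exactly 100%

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores (each 0-100)."""
    return round(sum(WEIGHTS[name] * s for name, s in dimension_scores.items()), 1)
```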
+
+ ### Grading
+
+ | Score | Grade |
+ |-------|-------|
+ | 90–100 | A |
+ | 80–89 | B |
+ | 70–79 | C |
+ | 60–69 | D |
+ | < 60 | F |
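The grade boundaries above translate directly into a cutoff lookup (a minimal sketch of the table, not the package's code):

```python
def grade(score: float) -> str:
    """Letter grade for a 0-100 score, per the grading table."""
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"

print(grade(42.3))  # → F
```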
+
+ ## Example output
+
+ ```
+ ╭─ API AI Readiness Scorecard ──────────────────────────────────╮
+ │ Task Manager API v1.0.0 │
+ │ sample_spec.yaml │
+ │ │
+ │ Overall Score: 42.3/100 Grade: F │
+ ╰────────────────────────────────────────────────────────────────╯
+
+ ╭──────────────────────────────────┬───────────┬───────┬──────────────────────────┬────────╮
+ │ Dimension │ Score │ Grade │ Progress │ Issues │
+ ├──────────────────────────────────┼───────────┼───────┼──────────────────────────┼────────┤
+ │ Foundational Compliance │ 75.0/100 │ C │ ███████████████░░░░░ │ 1 │
+ │ Developer Experience │ 38.5/100 │ F │ ███████░░░░░░░░░░░░░░ │ 3 │
+ │ AI-Readiness & Agent Experience │ 31.2/100 │ F │ ██████░░░░░░░░░░░░░░░ │ 4 │
+ │ ... │ ... │ ... │ ... │ ... │
+ ╰──────────────────────────────────┴───────────┴───────┴──────────────────────────┴────────╯
+
+ Issues & Recommendations
+
+ AI-Readiness & Agent Experience
+ x 4/6 operations lack descriptive intent (need >30 char description)
+ paths.*.*.description
+ ! 5/6 operations have no documented error responses (4xx/5xx) — agents cannot handle failure gracefully
+ paths.*.*.responses
+ ```
+
+ ## JSON output
+
+ Pass `--json` to get a machine-readable report:
+
+ ```bash
+ api-scorecard score openapi.yaml --json | jq '.overall_score'
+ ```
+
+ ```json
+ {
+   "api_name": "Task Manager API",
+   "api_version": "1.0.0",
+   "spec_path": "openapi.yaml",
+   "overall_score": 42.3,
+   "grade": "F",
+   "dimensions": [
+     {
+       "name": "Foundational Compliance",
+       "score": 75.0,
+       "grade": "C",
+       "issues": [...]
+     }
+   ]
+ }
+ ```
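A sketch of consuming the JSON report programmatically, e.g. to flag weak dimensions in CI. The `raw` string below is sample data shaped like the report above, not real tool output:

```python
import json

raw = """
{
  "api_name": "Task Manager API",
  "overall_score": 42.3,
  "grade": "F",
  "dimensions": [
    {"name": "Foundational Compliance", "score": 75.0, "grade": "C", "issues": []},
    {"name": "Developer Experience", "score": 38.5, "grade": "F", "issues": []}
  ]
}
"""

report = json.loads(raw)
weak = [d["name"] for d in report["dimensions"] if d["score"] < 60]
print(report["grade"], weak)  # → F ['Developer Experience']
```

In a pipeline you would read `raw` from the command's stdout instead of a literal, and fail the job when `weak` is non-empty or the exit code is `2`.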
+
+ ## Development
+
+ ```bash
+ # Install in editable mode
+ uv pip install -e .
+
+ # Run directly without installing
+ python main.py score path/to/spec.yaml
+ ```