botvisibility 1.0.0 → 1.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +166 -0
  2. package/package.json +1 -1
package/README.md ADDED
@@ -0,0 +1,166 @@
+ # BotVisibility CLI
+
+ Scan any URL to check how visible and usable it is to AI agents. Like Lighthouse for AI agent readiness.
+
+ ```bash
+ npx botvisibility stripe.com
+ ```
+
+ ## What it does
+
+ BotVisibility runs 28+ automated checks across 4 levels to measure how well your site works with AI agents like Claude, GPT, Copilot, and autonomous agent frameworks.
+
+ Without agent-ready metadata and APIs, agents burn 5-100x more tokens through HTML scraping, trial-and-error discovery, and retry loops. A fully unoptimized site can cost agents **120,000-500,000+ excess tokens per session**.
+
+ The CLI tells you exactly what's missing and how to fix it.
+
+ ## Install & run
+
+ No install needed. Just run:
+
+ ```bash
+ npx botvisibility <url>
+ ```
+
+ Or install globally:
+
+ ```bash
+ npm install -g botvisibility
+ botvisibility stripe.com
+ ```
+
+ ## Usage
+
+ ```bash
+ # Basic URL scan
+ npx botvisibility https://example.com
+
+ # JSON output for CI/CD
+ npx botvisibility stripe.com --json
+
+ # Full scan with local repo analysis (unlocks Level 3 code checks + Level 4)
+ npx botvisibility https://myapp.com --repo ./
+
+ # Combined scan with JSON output
+ npx botvisibility mysite.com --repo ../my-backend --json
+ ```
+
+ ## What it checks
+
+ ### Level 1: Discoverable (12 checks)
+
+ Bots can find you. These checks verify that AI agents can discover your site's capabilities without scraping HTML.
+
+ | Check | What it looks for |
+ |-------|-------------------|
+ | llms.txt | Machine-readable site description at /llms.txt |
+ | Agent Card | Capability declaration at /.well-known/agent-card.json |
+ | OpenAPI Spec | Published API specification |
+ | robots.txt AI Policy | AI crawler directives in robots.txt |
+ | Documentation Accessibility | Public dev docs without auth walls |
+ | CORS Headers | Cross-origin access for browser-based agents |
+ | AI Meta Tags | llms:description, llms:url, llms:instructions meta tags |
+ | Skill File | Structured agent instructions at /skill.md |
+ | AI Site Profile | Site manifest at /.well-known/ai.json |
+ | Skills Index | Skills catalog at /.well-known/skills/index.json |
+ | Link Headers | HTML link elements pointing to AI discovery files |
+ | MCP Server | Model Context Protocol endpoint discovery |
+
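Most of these discovery files live at fixed, well-known paths, so the probe list can be derived from the site's origin alone. A minimal sketch of that idea (the path list mirrors the table above; the `discoveryUrls` helper is ours for illustration, not part of the CLI):

```javascript
// Fixed paths where Level 1 discovery files conventionally live.
const DISCOVERY_PATHS = [
  "/llms.txt",
  "/skill.md",
  "/robots.txt",
  "/.well-known/agent-card.json",
  "/.well-known/ai.json",
  "/.well-known/skills/index.json",
];

// Build absolute probe URLs from a base origin (illustrative helper).
function discoveryUrls(origin) {
  return DISCOVERY_PATHS.map((path) => new URL(path, origin).href);
}

console.log(discoveryUrls("https://example.com"));
```

Checks like OpenAPI Spec, CORS Headers, and MCP Server need actual responses to inspect, which is why the scanner fetches rather than just enumerating paths.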
+ ### Level 2: Usable (9 checks)
+
+ Your API works for agents. Authentication, error handling, and core operations are agent-compatible.
+
+ | Check | What it looks for |
+ |-------|-------------------|
+ | API Read Operations | GET/list/search endpoints in API spec |
+ | API Write Operations | POST/PUT/PATCH/DELETE endpoints |
+ | API Primary Action | Core value action available via API |
+ | API Key Authentication | Simple API key auth (not just OAuth) |
+ | Scoped API Keys | Permission-scoped API keys |
+ | OpenID Configuration | OIDC discovery document |
+ | Structured Error Responses | JSON errors with codes, not HTML error pages |
+ | Async Operations | Job ID + polling for long-running operations |
+ | Idempotency Support | Idempotency key support on write endpoints |
+
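The structured-error check is about giving agents something parseable to branch on instead of an HTML error page to scrape. A sketch of the kind of shape it rewards (the field names are illustrative, not a schema the CLI enforces):

```javascript
// An agent-friendly error body: a stable machine-readable code, a human
// message, and a recovery hint. Field names are illustrative only.
function structuredError(code, message, hint) {
  return { error: { code, message, hint } };
}

const body = structuredError(
  "rate_limited",
  "Too many requests",
  "Retry after the interval in the Retry-After header"
);

// An agent can branch on the code without any HTML parsing.
console.log(body.error.code);
```

The stable `code` field is the part that matters: agents retry on `rate_limited`, re-authenticate on an auth code, and surface everything else, all without burning tokens on markup.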
+ ### Level 3: Optimized (7 checks)
+
+ Agents work efficiently. Pagination, filtering, caching, and MCP tools reduce token waste.
+
+ | Check | What it looks for |
+ |-------|-------------------|
+ | Sparse Fields | fields/select parameter to request only needed data |
+ | Cursor Pagination | Cursor-based pagination on list endpoints |
+ | Search & Filtering | Server-side filter and search parameters |
+ | Bulk Operations | Batch create/update/delete endpoints |
+ | Rate Limit Headers | X-RateLimit-* headers on API responses |
+ | Caching Headers | ETag, Cache-Control, Last-Modified headers |
+ | MCP Tool Quality | Well-described MCP tools with input schemas |
+
+ With `--repo`, Level 3 checks also scan your codebase for these patterns in code, catching implementations that the web scanner can't detect from HTTP responses alone.
+
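Cursor pagination is one of the cheaper Level 3 wins: an agent fetches exactly one page and gets an opaque token for the next, rather than guessing offsets. A minimal in-memory sketch of the pattern (our illustration, not the CLI's code; real cursors are opaque strings rather than bare offsets):

```javascript
// Cursor-based pagination: return one page plus an opaque cursor for the
// next page, or null when the collection is exhausted.
function listPage(items, { cursor = "0", limit = 2 } = {}) {
  const start = Number(cursor);
  const page = items.slice(start, start + limit);
  const next = start + limit < items.length ? String(start + limit) : null;
  return { data: page, nextCursor: next };
}

// An agent walks the collection by echoing back nextCursor until it is null.
const all = ["a", "b", "c", "d", "e"];
const seen = [];
let cursor = "0";
do {
  const { data, nextCursor } = listPage(all, { cursor, limit: 2 });
  seen.push(...data);
  cursor = nextCursor;
} while (cursor !== null);
console.log(seen); // every item, one bounded page at a time
```

The agent-facing benefit is that each response is bounded and the traversal protocol is explicit, so there is no trial-and-error over page parameters.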
+ ### Level 4: Agent-Native (7 checks, `--repo` required)
+
+ First-class agent support. These checks require local code access.
+
+ | Check | What it looks for |
+ |-------|-------------------|
+ | Intent-Based Endpoints | High-level action endpoints (e.g., /send-invoice) |
+ | Agent Sessions | Persistent session management for multi-step interactions |
+ | Scoped Agent Tokens | Agent-specific tokens with capability limits |
+ | Agent Audit Logs | API actions logged with agent identifiers |
+ | Sandbox Environment | Test environment for safe agent experimentation |
+ | Consequence Labels | Annotations marking irreversible/destructive actions |
+ | Native Tool Schemas | Ready-to-use tool definitions for agent frameworks |
+
+ ## Scoring
+
+ BotVisibility uses a weighted cross-level algorithm:
+
+ - **L1 Discoverable**: Pass 50%+ of L1 checks
+ - **L2 Usable**: Pass 50%+ of L1 AND 50%+ of L2 (or 35%+ L1 with 75%+ L2)
+ - **L3 Optimized**: Achieve L2 AND pass 50%+ of L3 (or 35%+ L2 with 75%+ L3)
+ - **L4 Agent-Native**: Requires `--repo` flag for code-level analysis
+
+ This rewards sites that invest in higher-level capabilities even if some lower-level items are still missing.
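Concretely, each level is reached either by clearing 50% there with 50% at the level below, or by compensating: 75% at a level offsets as little as 35% below it. A sketch of how we read that cross-level logic (our interpretation of the bullets above, not the CLI's actual source; L4 is gated on `--repo` rather than a threshold, so it is omitted):

```javascript
// passRates: fraction of checks passed per level, e.g. { l1: 0.6, l2: 0.8, l3: 0.2 }.
function currentLevel({ l1 = 0, l2 = 0, l3 = 0 }) {
  const reachesL1 = l1 >= 0.5;
  // Standard path (50% + 50%) or compensating path (35% below + 75% here).
  const reachesL2 = (reachesL1 && l2 >= 0.5) || (l1 >= 0.35 && l2 >= 0.75);
  const reachesL3 = (reachesL2 && l3 >= 0.5) || (l2 >= 0.35 && l3 >= 0.75);
  if (reachesL3) return 3;
  if (reachesL2) return 2;
  if (reachesL1) return 1;
  return 0;
}

// A site weak on L1 but strong on L2/L3 still reaches Level 3.
console.log(currentLevel({ l1: 0.4, l2: 0.8, l3: 0.8 })); // → 3
```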
+ ## CI/CD integration
+
+ Add to your CI pipeline to catch agent-readiness regressions:
+
+ ```yaml
+ # GitHub Actions
+ - name: Check BotVisibility
+   run: |
+     LEVEL=$(npx botvisibility mysite.com --json | jq '.currentLevel')
+     if [ "$LEVEL" -lt 1 ]; then
+       echo "BotVisibility level below Level 1"
+       exit 1
+     fi
+ ```
+
+ ## The agent tax
+
+ Every unoptimized interaction costs AI agents extra tokens:
+
+ | Without | With | Savings |
+ |---------|------|---------|
+ | Scrape HTML (30,000 tokens) | Read llms.txt (500 tokens) | 98% |
+ | Guess API endpoints (100,000 tokens) | Read OpenAPI spec (15,000 tokens) | 85% |
+ | Parse HTML errors (10,000 tokens) | Read JSON error (50 tokens) | 99% |
+ | Fetch all fields (2,000 tokens) | Sparse fields (200 tokens) | 90% |
+
+ At Claude Sonnet 4.6 rates, a single unoptimized session costs $0.83 vs $0.07 optimized. At 1,000 agent visits/day, that's **$22,800/month** in wasted tokens.
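The monthly figure follows directly from the per-session delta (the $0.83 and $0.07 figures are the README's own estimates; a 30-day month is assumed):

```javascript
// Monthly waste = (unoptimized - optimized cost per session) × visits/day × days.
// Work in cents to avoid floating-point drift.
const unoptimizedCents = 83; // $0.83 per unoptimized session
const optimizedCents = 7;    // $0.07 per optimized session
const visitsPerDay = 1000;
const days = 30;

const monthlyWasteDollars =
  ((unoptimizedCents - optimizedCents) * visitsPerDay * days) / 100;
console.log(monthlyWasteDollars); // → 22800
```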
+
+ Read the full analysis: [The Agent Tax](https://botvisibility.com/agent-tax)
+
+ ## Links
+
+ - **Scanner**: [botvisibility.com](https://botvisibility.com)
+ - **Checklist**: [botvisibility.com/checklist](https://botvisibility.com/checklist)
+ - **Badge**: [botvisibility.com/badge](https://botvisibility.com/badge)
+ - **Agent Tax Whitepaper**: [botvisibility.com/agent-tax](https://botvisibility.com/agent-tax)
+ - **GitHub**: [github.com/jjanisheck/botvisibility](https://github.com/jjanisheck/botvisibility)
+
+ ## License
+
+ MIT
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "botvisibility",
-   "version": "1.0.0",
+   "version": "1.1.0",
    "description": "Scan any URL to check if it's ready for AI agents",
    "main": "dist/index.js",
    "bin": {