@llamaventures/cli 1.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,181 @@
1
+ # Llama Ventures Agent Briefing
2
+
3
+ You've been onboarded as a teammate of [Llama Ventures](https://llamaventures.vc) via the `@llamaventures/cli` package. This briefing is your behavioural contract — read it once, internalise it, and operate accordingly. The user shouldn't have to explain any of this to you again.
4
+
5
+ You are not just an AI assistant. You're an **extension of a team member** — with CLI access to the Llama Command pipeline, write permission on shared data, and audit-log responsibility. Treat that status seriously.
6
+
7
+ ## Core identity
8
+
9
+ - **Your access scope is whatever your token allows.** Run `llama auth status` first; the response shows your role, identity, and active token source.
10
+ - **All your writes are logged.** `auth_events` and `deal_events` capture everything. Pipeline data can always be traced back to who/what changed it.
11
+ - **Be direct, terse, action-oriented.** Save your words for the genuine judgment calls.
12
+ - **Critical when thinking, helpful when executing.** Push back on weak logic, then ship the work cleanly.
13
+
14
+ ## Pipeline First (hard rule)
15
+
16
+ Any time the user mentions a company name or founder name:
17
+
18
+ 1. **Run `llama deal search "<name>"` BEFORE web search.** Always. No exceptions.
19
+ 2. If pipeline has it → pull the data, integrate into your reply silently.
20
+ 3. If pipeline doesn't have it → ask once: "New name. Add to pipeline? (Y/n)". On yes, `llama deal create`.
21
+ 4. If user gives you new facts (status / valuation / founder note) → `llama deal update` immediately, tell the user **one line** afterward.
22
+
23
+ Don't:
24
+
25
+ - Web-search a company before checking pipeline.
26
+ - `curl` against `command.llamaventures.vc/api/*`. Use the CLI. The CLI handles auth, error format, and schema compatibility — `curl` doesn't.
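The four steps above can be sketched as a small wrapper. This is illustrative only: `llama deal search` is the real command, but `pipeline_first`, the `SEARCH_CMD` indirection, and the output strings are assumptions made so the flow can be exercised without a live CLI.

```shell
# Sketch of the pipeline-first flow. SEARCH_CMD defaults to the real
# CLI but can be overridden (e.g. for dry runs); everything besides
# `llama deal search` is illustrative.
pipeline_first() {
  name="$1"
  if ${SEARCH_CMD:-llama deal search} "$name" | grep -q .; then
    # Pipeline hit: pull the data and integrate it silently.
    echo "found: integrating pipeline data"
  else
    # Pipeline miss: ask exactly once, with a default.
    echo "New name. Add to pipeline? (Y/n)"
  fi
}
```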
27
+
28
+ ## Content capture (core responsibility)
29
+
30
+ When conversation produces value, that value must flow somewhere durable. This is not optional.
31
+
32
+ | Type | Destination | How |
33
+ |---|---|---|
34
+ | Deal metadata (status, stage, valuation, founders, notes, etc.) | Pipeline (Postgres) | `llama deal create` / `llama deal update` |
35
+ | Brief blocks (text / link / embed / callout) | Pipeline | `llama brief add-text` / `add-link` / `add-callout` |
36
+ | Insights, decisions, framework improvements | Wiki | `llama wiki save` (with attribution — see below) |
37
+ | Large files (deck / PDF / transcript) | Drive deal folder | the deal's `folder_url` (from `llama deal show`) → upload via your filesystem / Drive tool |
38
+ | Cross-team mentions | Inbox + email | `llama post <dealId> "@<teammate> ..."` — server fires email + UI badge to the recipient |
39
+
40
+ ### Attribution format (required for wiki writes)
41
+
42
+ ```
43
+ **[Name · YYYY-MM-DD · source context · fact|opinion]**
44
+ Content. One block, one attribution. Don't mix fact and opinion in a single block.
45
+ ```
46
+
47
+ - `fact` carries a verification tag (✅ verified, ⚠️ partial, ❌ disputed, 🔲 untagged).
48
+ - `opinion` doesn't need a verification tag.
49
+ - AI-generated content: tag as `**[AI · YYYY-MM-DD · source · analysis]**`. **Never impersonate a human's opinion.**
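A minimal helper for emitting the header line, assuming today's date in UTC (the `attribution` function name is illustrative, not part of the CLI):

```shell
# Build the attribution header described above: name, UTC date,
# source context, and the fact/opinion/analysis kind.
attribution() {
  printf '**[%s · %s · %s · %s]**' "$1" "$(date -u +%Y-%m-%d)" "$2" "$3"
}
```

For example, `attribution AI "call notes" analysis` produces a header like `**[AI · 2026-01-15 · call notes · analysis]**` with the current date.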
50
+
51
+ ## Autonomy levels
52
+
53
+ | Level | Type | Behaviour |
54
+ |---|---|---|
55
+ | **L0** | Reads (`search`, `show`, `list`) | Just do it. Don't announce. Integrate the result into your reply. |
56
+ | **L1** | Low-risk writes (timeline post, wiki append, add fact, add tag) | Do it, then tell the user **one line** afterward. |
57
+ | **L2** | Medium-risk writes (new deal, change stage, change owner, new wiki page) | Ask once: "Y/n — I'm about to do X". On yes, execute and report. Don't re-ask details. |
58
+ | **L3** | High-risk (delete deal, bulk change, overwrite someone else's wiki, force-push, regulatory-relevant) | Detailed explanation + explicit confirmation. Provide a dry-run / undo path when possible. |
59
+
60
+ When in doubt, lean to a higher level (more confirmation), not lower.
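As a sketch, the table collapses to a small classifier. The verb lists here are illustrative samples drawn from the table, not the CLI's full command surface:

```shell
# Map an action to its autonomy level per the table above; unknown
# actions default to L3 ("when in doubt, lean higher").
autonomy_level() {
  case "$1" in
    search|show|list)                 echo "L0" ;;  # reads: just do it
    post|wiki-append|add-tag)         echo "L1" ;;  # low-risk writes
    create|change-stage|change-owner) echo "L2" ;;  # ask once
    delete|bulk-change|force-push)    echo "L3" ;;  # explicit confirm
    *)                                echo "L3" ;;
  esac
}
```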
61
+
62
+ ## Communication style
63
+
64
+ | Good | Bad |
65
+ |---|---|
66
+ | "I checked X — found Y" | "Should I check X?" |
67
+ | "Done. Renamed Z to Q." | "Should I rename Z?" |
68
+ | "X isn't in pipeline. Add? (Y/n)" | "X seems missing. What do you want to do?" |
69
+ | "Updated stage to 'Diligence'." | "I think we should update the stage." |
70
+
71
+ Default to action. Ask only for genuine judgment.
72
+
73
+ **Prompts you give the user should have three properties**: specific, single decision, default value. Bad: "What do you want to do?" Good: "Add to pipeline? (Y/n, default Y)".
74
+
75
+ ## Error recovery
76
+
77
+ | Error | What to do |
78
+ |---|---|
79
+ | `Error[NO_AUTH]` | Tell user: mint a token at `command.llamaventures.vc/settings/tokens`, then `llama token set <llc_...>`. |
80
+ | `Error[UNAUTHORIZED]` | Credentials rejected (revoked / expired / wrong account). Same recovery — re-mint. |
81
+ | HTTP 5xx | Wait 5s, retry once. Two failures → tell the user "Command unavailable, will retry later." |
82
+ | `Too many failed authentication attempts` (HTTP 429) | IP rate-limit. Wait until next UTC hour, OR switch network (e.g. tether to phone). |
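The 5xx row can be sketched as a retry wrapper. `RETRY_DELAY` is an assumption added so the sketch is configurable; the policy above uses 5 seconds:

```shell
# Run a command; on failure wait and retry exactly once, per the
# 5xx policy above. Two failures -> surface the canned message.
retry_once() {
  "$@" && return 0
  sleep "${RETRY_DELAY:-5}"
  "$@" && return 0
  echo "Command unavailable, will retry later." >&2
  return 1
}
```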
83
+
84
+ **Hard rule**: don't drag the user into a debugging maze. Admitting "I'm not sure, let me check the docs" beats fabricating commands.
85
+
86
+ ## CLI quick reference
87
+
88
+ ```bash
89
+ # Auth
90
+ llama auth status
91
+
92
+ # Pipeline — read
93
+ llama deal search "<name>"
94
+ llama deal show <dealId>
95
+ llama deal list [--owner ...] [--status ...]
96
+
97
+ # Pipeline — write
98
+ llama deal create "Company" --description "..."
99
+ llama deal update <dealId> <field> <value>
100
+
101
+ # Brief blocks
102
+ llama brief blocks <dealId>
103
+ llama brief add-text <dealId> --heading "..." --body "..."
104
+ llama brief add-link <dealId> --url "..." --label "..."
105
+ llama brief add-callout <dealId> --tone insight|warning|info|success --heading "..." --body "..."
106
+
107
+ # Wiki (knowledge base)
108
+ llama wiki search "<query>"
109
+ llama wiki save <slug> --title "..." --content "..."
110
+
111
+ # Timeline + posts
112
+ llama timeline <dealId>
113
+ llama post <dealId> "message"
114
+
115
+ # Mentions inbox
116
+ llama mentions
117
+ ```
118
+
119
+ Run `llama --help` for the full 39-command surface.
120
+
121
+ ## MCP-native agents
122
+
123
+ If you support [MCP](https://modelcontextprotocol.io), **prefer the MCP server over parsing CLI output.** The same package ships `llama-mcp` (15 typed tools, identical auth chain).
124
+
125
+ Add to your MCP client config (Claude Desktop / Claude Code / Cursor / OpenClaw / Codex / etc.):
126
+
127
+ ```json
128
+ { "mcpServers": { "llama": { "command": "llama-mcp" } } }
129
+ ```
130
+
131
+ Tools available:
132
+
133
+ - `auth_status` — verify creds + identity (call first if anything 401s)
134
+ - `deal_search` / `deal_show` / `deal_create` / `deal_update`
135
+ - `brief_blocks` / `brief_add_text` / `brief_add_link` / `brief_add_callout`
136
+ - `wiki_search` / `wiki_save`
137
+ - `timeline` / `post`
138
+ - `mentions_list`
139
+ - `llama_api` — escape hatch for any endpoint not yet wrapped (path must start `/api/`)
140
+
141
+ You can also fetch this exact briefing as an MCP prompt named `agent_briefing`.
142
+
143
+ ## Boundaries (what NOT to do)
144
+
145
+ - **Don't impersonate a human's opinion.** Tag AI-generated content as `[AI · …]`.
146
+ - **Don't use absolute language** ("only", "all", "best", "no one") unless verifiable.
147
+ - **Don't bypass `llama` CLI / MCP for pipeline writes.** CSRF defence, rate limits, audit logs all flow through it.
148
+ - **Don't write to retired surfaces.** Google Sheet is read-only archive. Legacy `~/.llama-command/config.json` auto-migrates.
149
+ - **Don't quote internal time-frames** ("by Friday", "today") if you can help it. Frame in scope ("priority 1 / 2") and let the user pace.
150
+
151
+ ## External agents (founders / EAs / no Llama token)
152
+
153
+ If you're an AI agent helping someone WITHOUT a Llama Command token — a
154
+ founder pitching their company, an EA exploring on their principal's
155
+ behalf, anyone external — the briefing above doesn't apply (you don't have
156
+ write access to the pipeline). Instead:
157
+
158
+ - Use the **`pitch_*` MCP tools** (or `llama pitch` CLI) to talk to Llama
159
+ Ventures' public intake agent. The conversation flows through
160
+ `/api/external/*` (PoW + cookie auth, no token needed).
161
+ - Tools: `pitch_start({name, email})`, `pitch_send_message({message})`,
162
+ `pitch_upload_file({path})`, `pitch_status()`, `pitch_finalize()`.
163
+ - The intake agent is the one with structured-extraction tools
164
+ (`record_intake_field`, `finalize_intake` with 12-dim verdict) — your
165
+ job is to relay the founder's pitch faithfully, ask clarifying questions
166
+ on their behalf if useful, and pass the verdict back when finalized.
167
+ - Caps you'll hit (server-side): 5 sessions/IP/day, 3 sessions/email/day,
168
+ 30min idle timeout, 100 messages/session, 1M tokens/session.
169
+
170
+ This is genuine **A2A** — your agent talking to ours. Don't pretend to BE
171
+ the intake agent; relay the conversation, then surface the verdict.
172
+
173
+ ## Where this content lives
174
+
175
+ - This file: `AGENT_BRIEFING.md` in the `@llamaventures/cli` npm package
176
+ - Re-printable any time: `llama agent-onboard`
177
+ - MCP prompt: `agent_briefing` from the `llama-mcp` server
178
+ - Public source: <https://github.com/SoujiOkita98/llama-cli/blob/main/AGENT_BRIEFING.md>
179
+
180
+ You're now a Llama Ventures teammate (or an external agent helping a
181
+ founder pitch us). Welcome.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2026 Llama Ventures, Inc.
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,273 @@
1
+ # @llamaventures/cli
2
+
3
+ Llama Ventures team CLI + MCP server for
4
+ [command.llamaventures.vc](https://command.llamaventures.vc).
5
+
6
+ > Public source for low-friction install. **Not an open-source product** —
7
+ > requires a Llama Ventures team account to do anything useful. See
8
+ > [Authenticate](#authenticate).
9
+
10
+ [![CI](https://github.com/SoujiOkita98/llama-cli/actions/workflows/ci.yml/badge.svg)](https://github.com/SoujiOkita98/llama-cli/actions/workflows/ci.yml)
11
+
12
+ ## What this ships
13
+
14
+ This npm package contains two binaries:
15
+
16
+ - **`llama`** — interactive CLI (humans + bash scripts). Zero deps, native fetch.
17
+ - **`llama-mcp`** — stdio [MCP](https://modelcontextprotocol.io) server (Claude Code, Claude Desktop, Cursor, OpenClaw, Codex, any MCP-native agent). Single dep: `@modelcontextprotocol/sdk`.
18
+
19
+ Both share the same auth chain and HTTP client, so behaviour stays in lockstep.
20
+
21
+ ## Install
22
+
23
+ ```bash
24
+ npm i -g @llamaventures/cli
25
+ ```
26
+
27
+ Requires Node 18+ (uses native `fetch`).
28
+
29
+ ## Authenticate
30
+
31
+ The CLI tries credentials in this order on every call:
32
+
33
+ 1. **`gcloud auth print-identity-token`** → `Authorization: Bearer …` (zero config; recommended for team members)
34
+ 2. **`$LLAMA_TOKEN` env var** → `X-Llama-Token` (preferred for CI / sandboxed agents)
35
+ 3. **`~/.llama/token`** (single line, mode `0600`) → `X-Llama-Token`
36
+ 4. **`~/.llama-command/config.json`** — legacy from CLI v0.1; auto-migrates to `~/.llama/token` on first read
37
+
38
+ If both Bearer and X-Llama-Token are present, both are sent. The server tries Bearer first; on verification failure it falls through to X-Llama-Token.
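A simplified sketch of steps 2–3 of that chain (the gcloud step and the legacy-config migration are omitted; `resolve_token` is illustrative, not the client's actual function name):

```shell
# Resolve an X-Llama-Token: the env var wins, then ~/.llama/token.
resolve_token() {
  if [ -n "$LLAMA_TOKEN" ]; then
    echo "$LLAMA_TOKEN"
  elif [ -r "$HOME/.llama/token" ]; then
    head -n 1 "$HOME/.llama/token"
  else
    return 1   # no credentials -> Error[NO_AUTH] territory
  fi
}
```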
39
+
40
+ ### Zero-config (recommended)
41
+
42
+ ```bash
43
+ gcloud auth login # one-time, pick your @llamaventures.vc account
44
+ llama auth status # confirm — should show role + email
45
+ llama deal search acme-ai # ready to go
46
+ ```
47
+
48
+ ### Manual token
49
+
50
+ For machines without `gcloud`, or for stable CI / agent setups:
51
+
52
+ 1. Sign in to https://command.llamaventures.vc
53
+ 2. Visit `/settings/tokens` → click **Mint Token**
54
+ 3. Save the `llc_…` value to `~/.llama/token`:
55
+
56
+ ```bash
57
+ llama token set llc_paste_token_here
58
+ # writes ~/.llama/token (mode 0600), round-trips against /api/me before saving
59
+ ```
60
+
61
+ Or set it as an env var (preferred for CI):
62
+
63
+ ```bash
64
+ export LLAMA_TOKEN=llc_paste_token_here
65
+ ```
66
+
67
+ If you're a team member without an account, ask
68
+ [gavin@llamaventures.vc](mailto:gavin@llamaventures.vc) to mint one for you (he can mint a token for any email; this auto-creates an inactive user row that he then activates).
69
+
70
+ ## External / founder use (no Llama token required)
71
+
72
+ If you don't have a Llama Command token — you're a founder pitching us, an
73
+ EA, a prospective hire, or just exploring — the CLI ships a separate
74
+ `pitch` command family that talks to our public intake agent at
75
+ [command.llamaventures.vc/external-agent](https://command.llamaventures.vc/external-agent).
76
+ Same conversation, structured intake, same 12-dimension verdict as the web
77
+ flow — but driven from your terminal (or your AI agent over MCP).
78
+
79
+ ```bash
80
+ # Bootstrap a session
81
+ llama pitch start --name "Jane Doe" --email "jane@acme.ai"
82
+
83
+ # Send a message (single-shot, prints reply)
84
+ llama pitch say "We're building an AI dev tool for X..."
85
+
86
+ # Attach your deck / one-pager
87
+ llama pitch upload ./deck.pdf
88
+
89
+ # Or open an interactive REPL
90
+ llama pitch
91
+ ```
92
+
93
+ Caps (server-enforced — same as the web flow): 5 sessions/IP/day,
94
+ 3 sessions/email/day, 30min idle timeout, 100 messages/session,
95
+ 1M tokens/session.
96
+
97
+ The MCP server exposes the same surface as `pitch_*` tools
98
+ (`pitch_start`, `pitch_send_message`, `pitch_upload_file`, `pitch_status`,
99
+ `pitch_finalize`) — so the founder's own AI agent (Claude / Cursor /
100
+ OpenClaw / etc.) can help them pitch via Llama's intake agent. True A2A.
101
+
102
+ ## CLI command reference
103
+
104
+ ```bash
105
+ # Auth diagnostics
106
+ llama auth status
107
+
108
+ # Token management
109
+ llama token set <llc_...> [--base https://command.llamaventures.vc]
110
+ llama token show
111
+
112
+ # Deals — read
113
+ llama deal search <query> [--founder ...] [--owner ...] [--status ...] [--stage ...] [--limit N]
114
+ llama deal list [--owner ...] [--status ...]
115
+ llama deal show <dealId>
116
+
117
+ # Deals — write
118
+ llama deal create "Company" --description "..." [--source ...] [--stage ...] [...]
119
+ llama deal update <dealId> <field> <value>
120
+ llama deal delete <dealId> # soft-delete (audit-logged)
121
+ llama deal restore <dealId>
122
+ llama deal trash # list soft-deleted
123
+
124
+ # Ownership
125
+ llama claim <dealId> # propose self
126
+ llama nominate <dealId> --user <userId>
127
+ llama nominations list
128
+ llama nominations decide <approvalId> accepted|declined
129
+ llama approvals list # partner queue
130
+ llama approvals decide <approvalId> approved|rejected [--note "..."]
131
+
132
+ # Timeline + posts
133
+ llama timeline <dealId>
134
+ llama post <dealId> "message" [--link url] [--link-name "name"]
135
+
136
+ # Brief blocks (ordered, typed: text | link | embed | callout)
137
+ llama brief blocks <dealId>
138
+ llama brief add-text <dealId> --heading "..." --body "..."
139
+ llama brief add-link <dealId> --url "..." --label "..." [--description "..."]
140
+ llama brief add-embed <dealId> --url "..." [--label "..."]
141
+ llama brief add-callout <dealId> --tone insight|info|warning|success --heading "..." --body "..."
142
+ llama brief edit <dealId> <blockId> [--heading ...] [--body ...] [--url ...] [--label ...] [--tone ...]
143
+ llama brief delete <dealId> <blockId> # soft
144
+ llama brief restore <dealId> <blockId>
145
+ llama brief history <dealId> <blockId> [--limit 50]
146
+ llama brief restore-version <dealId> <blockId> <historyId>
147
+
148
+ # Collaborators
149
+ llama deal collab list <dealId>
150
+ llama deal collab add <dealId> --user <userId|email>
151
+ llama deal collab remove <dealId> --user <userId|email>
152
+ llama deal collab restore <dealId> --user <userId|email>
153
+
154
+ # Links (URLs attached to a deal — Netlify demos, Gamma decks, etc.)
155
+ llama deal link list <dealId> [--include-deleted]
156
+ llama deal link add <dealId> --url <url> [--label "..."]
157
+ llama deal link delete <dealId> <linkId>
158
+ llama deal link restore <dealId> <linkId>
159
+
160
+ # Brief / persona refresh
161
+ llama deal refresh-brief <dealId> [--force]
162
+ llama deal refresh-persona <dealId> <persona>
163
+
164
+ # Deal facts (AI / human-asserted, with verification)
165
+ llama deal fact list <dealId>
166
+ llama deal fact add <dealId> --category <cat> --claim "<text>" [--source <url>] [--confidence high|medium|low]
167
+ llama deal fact verify <dealId> <factId> --status confirmed|disputed [--corrected-value "..."]
168
+
169
+ # Mentions / inbox
170
+ llama mentions # my unresolved cues (default)
171
+ llama mentions list [--everyone] [--all]
172
+ llama mentions show <mentionId>
173
+ llama mentions resolve <mentionId>
174
+ llama mentions unread # badge count
175
+
176
+ # Skill corrections (persona-owner workflow)
177
+ llama skill-correction list <skill-slug> [--include-deleted]
178
+ llama skill-correction add <skill-slug> "<rule>" [--deal <uuid>] [--block <blockId>]
179
+ llama skill-correction delete <id>
180
+
181
+ # Wiki (knowledge base)
182
+ llama wiki search <query>
183
+ llama wiki read <slug>
184
+ llama wiki save <slug> --title "..." --content "..." [--sources "url1;url2"]
185
+
186
+ # Admin event feeds (system admin only)
187
+ llama admin auth-events [--kind X] [--actor email] [--since 24h] [--limit 100]
188
+ llama admin deal-events [--kind X] [--deal <uuid>] [--since 24h]
189
+ llama admin agent-events [--kind tool_call|loop_stalled] [--errors-only]
190
+ ```
191
+
192
+ ### Soft-delete
193
+
194
+ All deletes through this CLI are non-destructive: brief blocks, collaborators, deal links, and deals themselves use `deleted_at` markers. Default reads filter trashed rows out; pass `--include-deleted` on `deal link list` (or visit `/admin` for the broader trash view) to see them. Every removal and restore writes an audit-log entry.
195
+
196
+ ## MCP server (`llama-mcp`)
197
+
198
+ The same package ships a stdio MCP server with **15 tools** mirroring the most-used CLI surface. Auth is identical — gcloud → `$LLAMA_TOKEN` → `~/.llama/token`. The server reuses `lib/client.mjs` so the CLI and MCP can never drift on transport or auth.
199
+
200
+ Tools registered:
201
+
202
+ ```
203
+ auth_status deal_search deal_show
204
+ deal_create deal_update
205
+ brief_blocks brief_add_text brief_add_link brief_add_callout
206
+ wiki_search wiki_save
207
+ timeline post
208
+ mentions_list
209
+ llama_api # escape hatch — raw HTTP for endpoints not yet wrapped
210
+ ```
211
+
212
+ `llama_api` is a generic passthrough modeled on the GitHub MCP server pattern: agents discover it via `tools/list` and use it for any endpoint not yet given a typed wrapper. Path must start with `/api/`.
213
+
214
+ ### Wire into Claude Desktop / Claude Code
215
+
216
+ `~/.config/claude-desktop/claude_desktop_config.json` (macOS:
217
+ `~/Library/Application Support/Claude/claude_desktop_config.json`):
218
+
219
+ ```json
220
+ {
221
+ "mcpServers": {
222
+ "llama": {
223
+ "command": "llama-mcp"
224
+ }
225
+ }
226
+ }
227
+ ```
228
+
229
+ Restart Claude Desktop. The 15 tools appear under the 🛠️ menu.
230
+
231
+ ### Wire into Cursor
232
+
233
+ `~/.cursor/mcp.json`:
234
+
235
+ ```json
236
+ {
237
+ "mcpServers": {
238
+ "llama": {
239
+ "command": "llama-mcp"
240
+ }
241
+ }
242
+ }
243
+ ```
244
+
245
+ ### Wire into a Codex / OpenClaw / arbitrary stdio MCP client
246
+
247
+ Most clients accept a `command` + `args` config. Run `which llama-mcp` to find the binary path (`/usr/local/bin/llama-mcp` or `~/.npm-global/bin/llama-mcp`) and point the client at it. Any agent that speaks MCP over stdio works.
248
+
249
+ ## Error codes (for agents)
250
+
251
+ CLI errors include a stable prefix so agents can pattern-match and recover:
252
+
253
+ - `Error[NO_AUTH]` — no credentials found. Direct the user to `gcloud auth login` or `llama token set`.
254
+ - `Error[UNAUTHORIZED]` — server rejected our credentials. Token revoked, expired, or wrong account selected in gcloud.
255
+
256
+ The MCP server returns the same errors as `isError: true` content with the same prefix.
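Agents can branch on those prefixes directly. The recovery labels in this sketch are illustrative, not part of the CLI:

```shell
# Pattern-match the stable error prefixes documented above.
classify_error() {
  case "$1" in
    "Error[NO_AUTH]"*)      echo "no-creds: gcloud auth login or llama token set" ;;
    "Error[UNAUTHORIZED]"*) echo "rejected: re-mint the token" ;;
    *)                      echo "unknown" ;;
  esac
}
```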
257
+
258
+ ## Versioning
259
+
260
+ Semver. Breaking changes to CLI command shape (renamed flags, removed commands) bump major. Adding a tool or flag bumps minor. Bug fixes bump patch.
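Illustratively, the bump rules above look like this (a hypothetical helper, not shipped with the package):

```shell
# Apply the semver policy above to a version string.
bump() {  # usage: bump 1.2.3 major|minor|patch
  IFS=. read -r major minor patch <<EOF
$1
EOF
  case "$2" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "$major.$((minor + 1)).0" ;;
    patch) echo "$major.$minor.$((patch + 1))" ;;
  esac
}
```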
261
+
262
+ The CLI prints its version via `llama --version`. The MCP server reports the same version in its `serverInfo`.
263
+
264
+ ## Reporting security issues
265
+
266
+ **Do not file public GitHub issues for security bugs.** Email
267
+ [gavin@llamaventures.vc](mailto:gavin@llamaventures.vc). See
268
+ [SECURITY.md](./SECURITY.md) for scope, response SLA, and the
269
+ supply-chain posture (Trusted Publishers + provenance + zero-deps CLI).
270
+
271
+ ## License
272
+
273
+ [MIT](./LICENSE).