@opendirectory.dev/skills 0.1.35 → 0.1.37
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/registry.json +22 -0
- package/skills/npm-downloads-to-leads/.env.example +4 -0
- package/skills/npm-downloads-to-leads/README.md +146 -0
- package/skills/npm-downloads-to-leads/SKILL.md +670 -0
- package/skills/npm-downloads-to-leads/evals/evals.json +119 -0
- package/skills/npm-downloads-to-leads/references/outreach-timing.md +163 -0
- package/skills/npm-downloads-to-leads/references/velocity-scoring.md +136 -0
- package/skills/npm-downloads-to-leads/scripts/fetch.py +372 -0
- package/skills/sdk-adoption-tracker/.env.example +3 -0
- package/skills/sdk-adoption-tracker/README.md +153 -0
- package/skills/sdk-adoption-tracker/SKILL.md +808 -0
- package/skills/sdk-adoption-tracker/evals/evals.json +108 -0
- package/skills/sdk-adoption-tracker/references/import-patterns.md +183 -0
- package/skills/sdk-adoption-tracker/references/scoring-guide.md +148 -0
- package/skills/sdk-adoption-tracker/scripts/fetch.py +462 -0
@@ -0,0 +1,808 @@
---
name: sdk-adoption-tracker
description: Given your SDK or library name, searches GitHub code search for public repos that import or require it, classifies each repo as company org, affiliated developer, solo developer, or tutorial noise, scores by adoption signal strength, detects new adopters by date, and outputs a ranked list of who is building on you with outreach context per high-signal company. Use when asked to find who uses your SDK, track SDK adoption, find companies building on your library, identify warm leads from existing SDK users, or see which orgs import your package. Trigger when a user says "who is using my SDK", "find repos that import my library", "track adoption of my package", "which companies are building on my SDK", "find my SDK users on GitHub", or "show me who imports my package".
compatibility: [claude-code, gemini-cli, github-copilot]
---

# SDK Adoption Tracker

Take an SDK name. Search GitHub for public repos that import it. Score each repo by company signal, activity, and noise indicators. Enrich high-signal repos with owner and contributor data. Output a ranked adoption report with outreach context for company adopters.

---

**Critical rule:** Every repo in the output must exist in the GitHub code search API response. Every company name must come from the GitHub user or org API `company` or `name` field. Every contributor handle must come from the GitHub contributors API response. If any field is empty in the API, write "not listed" -- do not infer, guess, or extrapolate.
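A minimal sketch of that fallback rule (`field_or_not_listed` and `profile` are hypothetical names; `profile` stands for a parsed GitHub user or org API response):

```python
def field_or_not_listed(profile: dict, key: str) -> str:
    """Return a profile field, or the literal 'not listed' when the API left it empty."""
    value = profile.get(key)
    return value if value and str(value).strip() else "not listed"

# An empty company field stays "not listed" -- never guessed from other data.
print(field_or_not_listed({"company": None, "name": "Acme"}, "company"))  # not listed
print(field_or_not_listed({"company": None, "name": "Acme"}, "name"))     # Acme
```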

---

## Common Mistakes

| The agent will want to... | Why that's wrong |
|---|---|
| Run code search without GITHUB_TOKEN | Unauthenticated code search hits a 3 req/min secondary rate limit and fails on any meaningful scan. GITHUB_TOKEN is required. Stop at Step 1 with a clear error if it is missing. |
| Include forks of the SDK itself | Repos that fork the SDK are contributors or mirrors, not adopters. Filter out repos where `fork == true` AND the repo name matches the SDK name. |
| Send all 500 raw search results to the AI | Code search can return up to 500 results, most of which are noise. Filter and score locally first. Send only the top 20 high-signal repos to the AI analysis step. |
| Report tutorial and example repos as adopters | Repos with "example", "tutorial", "demo", "learn", "sample", "playground", "starter" in the name or description are not production users. Mark as tutorial_noise and exclude from lead briefs. |
| Invent company names or contact handles | Every company name must come from the GitHub `company` or org `name` field. Every contributor handle must come from the contributors API response. If a field is empty, write "not listed". |
| Use one import pattern for all ecosystems | `require('sdk')` will not find Python users. Auto-detect ecosystem from the SDK name and build ecosystem-specific patterns. Ask the user if auto-detection is ambiguous. |
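The fork rule in the table can be sketched as a predicate (`is_sdk_fork` is a hypothetical helper; `repo` stands for a parsed GitHub repo API response with its `fork` and `name` fields):

```python
def is_sdk_fork(repo: dict, sdk_name: str) -> bool:
    """True when a repo is likely a fork or mirror of the SDK itself, not an adopter."""
    name = repo.get("name", "").lower()
    return bool(repo.get("fork")) and name == sdk_name.lower()

print(is_sdk_fork({"name": "my-sdk", "fork": True}, "my-sdk"))  # True
print(is_sdk_fork({"name": "webapp", "fork": True}, "my-sdk"))  # False
```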

---

## Step 1: Setup Check

```bash
if [ -z "$GITHUB_TOKEN" ]; then
  echo "ERROR: GITHUB_TOKEN is required for code search."
  echo "Add a token at github.com/settings/tokens (no scopes needed for public repos)."
  echo "Without it, GitHub code search hits a 3 req/min secondary rate limit and fails."
  exit 1
fi
echo "GITHUB_TOKEN: set"
curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/rate_limit" | python3 -c "
import json, sys
d = json.load(sys.stdin)
search = d['resources']['search']
core = d['resources']['core']
print(f'Search rate: {search[\"remaining\"]}/{search[\"limit\"]} remaining')
print(f'Core rate: {core[\"remaining\"]}/{core[\"limit\"]} remaining')
"
```

If search remaining is 0: stop, and tell the user the reset time from the `X-RateLimit-Reset` response header.
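The reset value is epoch seconds (UTC). A small sketch of turning it into a user-facing message (`reset_message` is a hypothetical helper; capturing the header is assumed to have happened in the API call above):

```python
from datetime import datetime, timezone

def reset_message(reset_epoch: int) -> str:
    """Format the X-RateLimit-Reset epoch (seconds, UTC) as a stop message for the user."""
    reset_at = datetime.fromtimestamp(reset_epoch, tz=timezone.utc)
    return f"Code search rate limit exhausted. Resets at {reset_at.strftime('%Y-%m-%d %H:%M UTC')}."

print(reset_message(1700000000))  # Code search rate limit exhausted. Resets at 2023-11-14 22:13 UTC.
```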

---

## Step 2: Gather Input

Collect from the conversation:
- SDK name (e.g. `@company/my-sdk`, `requests`, `github.com/org/go-sdk`)
- Optional: ecosystem override (`npm`, `python`, `go`, `gem`) -- auto-detected if not provided
- Optional: org/user to exclude from results (the SDK owner's own repos)
- Optional: product context string (used to personalize outreach messages)

**Auto-detect ecosystem (checked in this order):**
- Contains `github.com/`: go (checked first, since go module paths often contain `-`)
- Starts with `@` or contains `-`: npm
- snake_case with no `/` or `-`: python
- Otherwise: ask the user

**If no SDK name is provided:** Ask: "Which SDK or library would you like to track? Provide the package name as it appears in import statements (e.g. `stripe`, `@clerk/nextjs`, `requests`)."

```bash
python3 << 'PYEOF'
import json, re

sdk_name = "SDK_NAME_HERE"
ecosystem_override = ""  # leave empty for auto-detect
exclude_owner = ""       # optional: owner name to exclude (usually the SDK publisher)
product_context = ""     # optional: what your product does

# Auto-detect ecosystem. Check github.com/ first so go module paths
# containing "-" are not misclassified as npm.
if ecosystem_override:
    ecosystem = ecosystem_override
elif "github.com/" in sdk_name:
    ecosystem = "go"
elif sdk_name.startswith("@") or "-" in sdk_name:
    ecosystem = "npm"
elif re.match(r'^[a-z][a-z0-9_]*$', sdk_name):
    ecosystem = "python"
else:
    ecosystem = "generic"

print(f"SDK: {sdk_name}")
print(f"Ecosystem: {ecosystem}")
print(f"Exclude owner: {exclude_owner or '(none)'}")

with open("/tmp/sat-input.json", "w") as f:
    json.dump({
        "sdk_name": sdk_name,
        "ecosystem": ecosystem,
        "exclude_owner": exclude_owner,
        "product_context": product_context
    }, f)
PYEOF
```

---

## Step 3: Search GitHub Code

Check for the standalone script first -- it handles Steps 3-5 in one call.

```bash
ls scripts/fetch.py 2>/dev/null && echo "script available" || echo "script not found"
```

**If the script is available**, run it and skip to Step 6:

```bash
python3 scripts/fetch.py "$(python3 -c "import json; d=json.load(open('/tmp/sat-input.json')); print(d['sdk_name'])")" \
  --ecosystem "$(python3 -c "import json; d=json.load(open('/tmp/sat-input.json')); print(d['ecosystem'])")" \
  --exclude "$(python3 -c "import json; d=json.load(open('/tmp/sat-input.json')); print(d.get('exclude_owner',''))")" \
  --output /tmp/sat-script-out.json
```

Then load into the temp file format Steps 6-8 expect:

```bash
python3 << 'PYEOF'
import json

out = json.load(open("/tmp/sat-script-out.json"))
json.dump(out["raw_results"], open("/tmp/sat-raw-results.json", "w"), indent=2)
json.dump(out["scored"], open("/tmp/sat-scored.json", "w"), indent=2)
json.dump(out["enriched"], open("/tmp/sat-enriched.json", "w"), indent=2)
print(f"Loaded: {len(out['raw_results'])} raw | {len(out['scored'])} scored | {len(out['enriched'])} enriched")
PYEOF
```

**If the script is not available**, run the inline code below.

Build import patterns and search GitHub code for each:

```bash
python3 << 'PYEOF'
import json, os, ssl, time
import urllib.error, urllib.parse, urllib.request

ctx = ssl._create_unverified_context()
token = os.environ["GITHUB_TOKEN"]
headers = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {token}",
    "User-Agent": "sdk-adoption-tracker/1.0"
}

data = json.load(open("/tmp/sat-input.json"))
sdk_name = data["sdk_name"]
ecosystem = data["ecosystem"]

# Build query patterns per ecosystem
if ecosystem == "npm":
    queries = [
        f'require("{sdk_name}")',
        f"require('{sdk_name}')",
        f'from "{sdk_name}"',
        f"from '{sdk_name}'",
    ]
elif ecosystem == "python":
    queries = [
        f"import {sdk_name}",
        f"from {sdk_name} import",
        f"from {sdk_name}.",
    ]
elif ecosystem == "go":
    queries = [f'"{sdk_name}"']
else:
    queries = [sdk_name]

print(f"Building queries for {sdk_name} ({ecosystem}):")
for q in queries:
    print(f"  {q}")

seen_repos = {}  # full_name -> first match
search_rate_remaining = 30
flags = []

for i, query in enumerate(queries):
    if search_rate_remaining <= 2:
        flags.append(f"Code search rate limit low ({search_rate_remaining}) -- skipped remaining patterns")
        print(f"  Rate limit low ({search_rate_remaining}), stopping early")
        break

    url = f"https://api.github.com/search/code?q={urllib.parse.quote(query)}&per_page=100"
    req = urllib.request.Request(url, headers=headers)

    try:
        with urllib.request.urlopen(req, timeout=20, context=ctx) as resp:
            search_rate_remaining = int(resp.headers.get("X-RateLimit-Remaining", 10))
            raw = json.loads(resp.read())

        items = raw.get("items", [])
        total = raw.get("total_count", 0)
        print(f"  Pattern '{query}': {total} total, {len(items)} fetched | rate_remaining={search_rate_remaining}")

        for item in items:
            repo = item.get("repository", {})
            full_name = repo.get("full_name", "")
            if full_name and full_name not in seen_repos:
                seen_repos[full_name] = {
                    "full_name": full_name,
                    "name": repo.get("name", ""),
                    "owner_login": repo.get("owner", {}).get("login", ""),
                    "owner_type": repo.get("owner", {}).get("type", ""),
                    "file_path": item.get("path", ""),
                    "matched_pattern": query,
                    "html_url": repo.get("html_url", ""),
                    "description": repo.get("description") or "",
                }

    except urllib.error.HTTPError as e:
        if e.code == 403:
            flags.append(f"Code search rate limit hit on pattern '{query}'")
            print(f"  Rate limit hit (403) on '{query}'")
            break
        else:
            print(f"  HTTP {e.code} on '{query}'")
    except Exception as e:
        print(f"  Error on '{query}': {e}")

    # Respect the 10 req/min authenticated code search limit
    if i < len(queries) - 1:
        time.sleep(6)

results = list(seen_repos.values())
json.dump(results, open("/tmp/sat-raw-results.json", "w"), indent=2)
print(f"\nTotal unique repos found: {len(results)}")
if flags:
    print("Flags:", flags)
PYEOF
```

**If 0 results:** Tell the user: "No repos found importing `{sdk_name}`. GitHub code search indexing takes 1-4 weeks for new packages. If the SDK is established, check the import patterns in `references/import-patterns.md`."

---

## Step 4: Score and Classify Repos

No API calls here -- pure Python. Filter noise, compute an adoption score, and classify each repo.

```bash
python3 << 'PYEOF'
import json

data = json.load(open("/tmp/sat-input.json"))
exclude_owner = data.get("exclude_owner", "").lower()
sdk_name = data["sdk_name"].lower().split("/")[-1].replace("@", "")

results = json.load(open("/tmp/sat-raw-results.json"))

TUTORIAL_WORDS = {"example", "tutorial", "demo", "learn", "sample", "starter",
                  "boilerplate", "template", "playground", "test", "course", "workshop"}

scored = []

for repo in results:
    full_name = repo["full_name"]
    owner_login = repo.get("owner_login", "").lower()
    repo_name = repo.get("name", "").lower()
    description = (repo.get("description") or "").lower()
    owner_type = repo.get("owner_type", "User")

    # Exclude the SDK owner's own repos
    if exclude_owner and owner_login == exclude_owner:
        continue

    # Detect tutorial noise
    name_words = set(repo_name.replace("-", " ").replace("_", " ").split())
    desc_words = set(description.split())
    is_tutorial = bool((name_words | desc_words) & TUTORIAL_WORDS)
    # Also exclude if the repo name IS the SDK name (likely a fork)
    if repo_name == sdk_name or repo_name.startswith(sdk_name + "-"):
        is_tutorial = True  # treat as noise

    # Classification tier
    if is_tutorial:
        tier = "tutorial_noise"
    elif owner_type == "Organization":
        tier = "company_org"
    else:
        tier = "solo_dev"  # upgraded to affiliated_dev in Step 5 if the company field is populated

    # Adoption score (partial data now, enriched in Step 5)
    score = 0
    if owner_type == "Organization": score += 50
    if not is_tutorial: score += 20
    # stars, days_since_push, is_fork, is_archived added in Step 5

    scored.append({
        **repo,
        "tier": tier,
        "is_tutorial": is_tutorial,
        "adoption_score": score,
        "enriched": False
    })

# Sort: by tier first (company_org on top), then by score descending
tier_order = {"company_org": 0, "affiliated_dev": 1, "solo_dev": 2, "tutorial_noise": 3}
scored.sort(key=lambda x: (tier_order.get(x["tier"], 9), -x["adoption_score"]))

json.dump(scored, open("/tmp/sat-scored.json", "w"), indent=2)

tiers = {}
for r in scored:
    tiers[r["tier"]] = tiers.get(r["tier"], 0) + 1

print("Classification:")
for tier, count in sorted(tiers.items(), key=lambda x: tier_order.get(x[0], 9)):
    print(f"  {tier}: {count}")
print(f"Total: {len(scored)} repos")

non_noise = [r for r in scored if r["tier"] != "tutorial_noise"]
print(f"\nTop repos for enrichment (non-noise): {len(non_noise)}")
for r in non_noise[:5]:
    print(f"  {r['full_name']} ({r['tier']}) -- {r.get('description','')[:60]}")
PYEOF
```

**If all repos are tutorial_noise:** Stop. Tell the user: "All repos found appear to be tutorials or examples. No production adopters detected in public GitHub. The SDK may be too new, or the package name is generic enough that search results are dominated by examples."

---

## Step 5: Enrich High-Signal Repos

Fetch full repo metadata, owner profile, and top contributors for non-noise repos. Skip tutorial_noise repos entirely.

```bash
python3 << 'PYEOF'
import json, os, ssl, time
import urllib.error, urllib.request
from datetime import datetime, timezone

ctx = ssl._create_unverified_context()
token = os.environ["GITHUB_TOKEN"]
headers = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {token}",
    "User-Agent": "sdk-adoption-tracker/1.0"
}

scored = json.load(open("/tmp/sat-scored.json"))
core_remaining = 5000
flags = []

def gh_get(path):
    global core_remaining
    req = urllib.request.Request(f"https://api.github.com{path}", headers=headers)
    try:
        with urllib.request.urlopen(req, timeout=15, context=ctx) as resp:
            remaining = resp.headers.get("X-RateLimit-Remaining")
            if remaining:
                core_remaining = int(remaining)
            return json.loads(resp.read())
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return None
        raise
    except Exception:
        return None

target = [r for r in scored if r["tier"] != "tutorial_noise"]
print(f"Enriching {len(target)} repos (skipping tutorial_noise)...")

enriched = []

for item in target:
    full_name = item["full_name"]
    owner_login = item["owner_login"]
    owner_type = item["owner_type"]

    if core_remaining <= 10:
        flags.append(f"Core rate limit low ({core_remaining}) -- skipped enrichment for {full_name} and remaining repos")
        enriched.append({**item, "enriched": False})
        continue

    # Fetch full repo metadata
    repo_data = gh_get(f"/repos/{full_name}")
    if not repo_data:
        print(f"  {full_name}: repo not found")
        continue

    stars = repo_data.get("stargazers_count", 0)
    is_fork = repo_data.get("fork", False)
    is_archived = repo_data.get("archived", False)
    language = repo_data.get("language") or ""
    description = repo_data.get("description") or item.get("description", "")
    pushed_at = repo_data.get("pushed_at") or ""
    created_at = repo_data.get("created_at") or ""
    repo_url = repo_data.get("html_url", f"https://github.com/{full_name}")

    # Compute days since last push
    days_since_push = 999
    if pushed_at:
        pushed_dt = datetime.fromisoformat(pushed_at.replace("Z", "+00:00"))
        days_since_push = (datetime.now(tz=timezone.utc) - pushed_dt).days

    days_since_created = 999
    if created_at:
        created_dt = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
        days_since_created = (datetime.now(tz=timezone.utc) - created_dt).days

    # Fetch owner profile (user or org)
    owner_profile = {}
    company = ""
    org_website = ""

    if owner_type == "Organization":
        org_data = gh_get(f"/orgs/{owner_login}")
        if org_data:
            company = org_data.get("name") or owner_login
            org_website = org_data.get("blog") or ""
            owner_profile = {
                "type": "org",
                "name": org_data.get("name") or owner_login,
                "description": org_data.get("description") or "",
                "website": org_website,
                "email": org_data.get("email") or "",
                "public_repos": org_data.get("public_repos", 0),
                "followers": org_data.get("followers", 0),
            }
    else:
        user_data = gh_get(f"/users/{owner_login}")
        if user_data:
            company = user_data.get("company") or ""
            owner_profile = {
                "type": "user",
                "name": user_data.get("name") or owner_login,
                "company": company,
                "bio": user_data.get("bio") or "",
                "blog": user_data.get("blog") or "",
                "followers": user_data.get("followers", 0),
                "twitter_username": user_data.get("twitter_username") or "not listed",
            }
        # Upgrade tier if the company field is populated
        if company and item["tier"] == "solo_dev":
            item["tier"] = "affiliated_dev"

    # Fetch top contributors (skip if rate limit low)
    top_contributors = []
    if core_remaining > 20:
        contributors = gh_get(f"/repos/{full_name}/contributors?per_page=3")
        if contributors:
            top_contributors = [
                {"login": c.get("login", ""), "contributions": c.get("contributions", 0)}
                for c in contributors[:3]
            ]

    # Compute final adoption score
    score = 0
    if owner_type == "Organization": score += 50
    if company and company.strip(): score += 20
    score += min(stars, 500) / 10
    if days_since_push < 30: score += 30
    if days_since_push < 7: score += 20
    if not is_fork: score += 10
    if not is_archived: score += 10
    if not item.get("is_tutorial", False): score += 20

    # Forks and archived repos are penalized in the score above rather than re-tiered
    tier = item["tier"]

    enriched_item = {
        **item,
        "description": description,
        "stars": stars,
        "language": language,
        "is_fork": is_fork,
        "is_archived": is_archived,
        "days_since_push": days_since_push,
        "days_since_created": days_since_created,
        "pushed_at": pushed_at,
        "created_at": created_at,
        "repo_url": repo_url,
        "tier": tier,
        "adoption_score": round(score, 1),
        "company": company,
        "org_website": org_website,
        "owner_profile": owner_profile,
        "top_contributors": top_contributors,
        "enriched": True,
    }
    enriched.append(enriched_item)

    print(f"  {full_name} | tier={tier} | score={round(score,1)} | "
          f"stars={stars} | pushed={days_since_push}d ago | "
          f"company={company or 'not listed'} | rate={core_remaining}")
    time.sleep(0.1)

enriched.sort(key=lambda x: -x["adoption_score"])

json.dump(enriched, open("/tmp/sat-enriched.json", "w"), indent=2)
print(f"\nEnrichment complete: {len(enriched)} repos | rate_remaining={core_remaining}")
if flags:
    for f in flags:
        print(f"  FLAG: {f}")
PYEOF
```

---

## Step 6: Generate Adoption Briefs

Print top adopters, then generate outreach briefs for high-signal company repos.

```bash
python3 << 'PYEOF'
import json

enriched = json.load(open("/tmp/sat-enriched.json"))
data = json.load(open("/tmp/sat-input.json"))
product_context = data.get("product_context", "")
sdk_name = data["sdk_name"]

high_signal = [r for r in enriched if r["adoption_score"] >= 80]
medium = [r for r in enriched if 40 <= r["adoption_score"] < 80]
noise = [r for r in enriched if r["adoption_score"] < 40 or r["tier"] == "tutorial_noise"]

print("=== DATA FOR ADOPTION BRIEF GENERATION ===")
print(f"SDK: {sdk_name}")
print(f"Product context: {product_context or '(none provided)'}")
print()

for item in (high_signal + medium)[:20]:
    prof = item.get("owner_profile", {})
    contribs = item.get("top_contributors", [])
    primary = contribs[0] if contribs else {}

    print(f"REPO: {item['full_name']} (tier={item['tier']}, score={item['adoption_score']})")
    print(f"  Stars: {item.get('stars', 0)} | Language: {item.get('language','?')} | "
          f"Pushed: {item.get('days_since_push', '?')} days ago")
    print(f"  Description: {item.get('description','none')}")
    print(f"  SDK found in: {item.get('file_path','?')}")
    print(f"  Owner type: {item.get('owner_type','?')} | Company: {item.get('company','not listed')}")
    if prof.get("type") == "org":
        print(f"  Org: {prof.get('name')} | Website: {prof.get('website','none')} | "
              f"Repos: {prof.get('public_repos',0)}")
    elif prof.get("type") == "user":
        print(f"  User: {prof.get('name')} | Company: {prof.get('company','not listed')} | "
              f"Twitter: {prof.get('twitter_username','not listed')} | "
              f"Followers: {prof.get('followers',0)}")
    if primary:
        print(f"  Top contributor: @{primary.get('login')} ({primary.get('contributions',0)} commits)")
    print()
PYEOF
```

Using the repo data printed above, generate an adoption brief for each HIGH-SIGNAL repo (score >= 80).

Rules:
- Every repo name, star count, and file path must come from the printed data -- do not modify
- Every contributor handle must come from the printed "Top contributor" line -- if none listed, write "not listed"
- Every company name must come from the printed "Company" or "Org" line -- if "not listed", write "not listed"
- "Why reach out" must reference specific signals from the data (score, stars, days since push, company)
- "Suggested message" must name the repo, the specific file where the SDK was found, and connect to product_context if provided
- No em dashes. No forbidden words: powerful, robust, seamless, innovative, game-changing, streamline, leverage, transform

Write your briefs to `/tmp/sat-briefs.json` with this structure:

```json
{
  "adoption_briefs": [
    {
      "repo": "owner/repo-name",
      "tier": "company_org",
      "adoption_score": 124.0,
      "company": "Company Name or not listed",
      "top_contributor": "@handle or not listed",
      "twitter": "@handle or not listed",
      "stars": 234,
      "language": "TypeScript",
      "sdk_file": "src/api/client.ts",
      "why_reach_out": "2-3 sentences specific to this repo's signals",
      "suggested_message": "2-4 sentences naming the repo, SDK file, and product_context connection"
    }
  ]
}
```

After writing, confirm:

```bash
python3 -c "
import json
d = json.load(open('/tmp/sat-briefs.json'))
print(f'Briefs generated: {len(d.get(\"adoption_briefs\", []))}')
for b in d['adoption_briefs']:
    print(f'  {b[\"repo\"]} ({b[\"tier\"]}): score={b[\"adoption_score\"]} company={b[\"company\"]}')
"
```

---

## Step 7: Self-QA

```bash
|
|
613
|
+
python3 << 'PYEOF'
|
|
614
|
+
import json
|
|
615
|
+
|
|
616
|
+
raw = json.load(open("/tmp/sat-raw-results.json"))
|
|
617
|
+
enriched = json.load(open("/tmp/sat-enriched.json"))
|
|
618
|
+
briefs = json.load(open("/tmp/sat-briefs.json"))

failures = []

# Verify every repo in briefs exists in raw search results
raw_full_names = {r["full_name"] for r in raw}
for brief in briefs.get("adoption_briefs", []):
    if brief.get("repo") not in raw_full_names:
        failures.append(f"Brief for unknown repo '{brief.get('repo')}' not in code search results -- removed")

briefs["adoption_briefs"] = [
    b for b in briefs.get("adoption_briefs", []) if b.get("repo") in raw_full_names
]

# Verify briefs are sorted by adoption_score descending
scores = [b["adoption_score"] for b in briefs.get("adoption_briefs", [])]
if scores != sorted(scores, reverse=True):
    briefs["adoption_briefs"].sort(key=lambda x: -x["adoption_score"])
    failures.append("Re-sorted briefs by adoption_score descending")

# Check for em dashes
briefs_str = json.dumps(briefs)
if "—" in briefs_str:
    briefs_str = briefs_str.replace("—", " - ")
    briefs = json.loads(briefs_str)
    failures.append("Fixed: em dash characters removed from briefs")

# Check for forbidden words
forbidden = ["powerful", "robust", "seamless", "innovative", "game-changing",
             "streamline", "leverage", "transform", "revolutionize"]
full_text = json.dumps(briefs).lower()
for word in forbidden:
    if word in full_text:
        failures.append(f"Warning: forbidden word '{word}' found in briefs -- review before presenting")

# Check required fields
for brief in briefs.get("adoption_briefs", []):
    for field in ["repo", "tier", "adoption_score", "company", "top_contributor",
                  "why_reach_out", "suggested_message"]:
        if brief.get(field) is None:
            failures.append(f"Missing field '{field}' in brief for {brief.get('repo', '?')}")

output = {
    "enriched": enriched,
    "briefs": briefs,
    "data_quality_flags": failures
}
json.dump(output, open("/tmp/sat-output.json", "w"), indent=2)
print(f"QA complete. Issues found: {len(failures)}")
for f in failures:
    print(f" - {f}")
if not failures:
    print("All QA checks passed.")
PYEOF
```

---

## Step 8: Save and Present Output

```bash
python3 << 'PYEOF'
import json, os
from datetime import datetime, timezone

output = json.load(open("/tmp/sat-output.json"))
enriched = output["enriched"]
briefs_map = {b["repo"]: b for b in output["briefs"].get("adoption_briefs", [])}
flags = output["data_quality_flags"]
data = json.load(open("/tmp/sat-input.json"))
sdk_name = data["sdk_name"]
date_str = datetime.now(tz=timezone.utc).strftime("%Y-%m-%d")

high_signal = [r for r in enriched if r["adoption_score"] >= 80]
medium = [r for r in enriched if 40 <= r["adoption_score"] < 80]
noise = [r for r in enriched if r["adoption_score"] < 40 or r["tier"] == "tutorial_noise"]

# Compute velocity buckets
new_7d = sum(1 for r in enriched if r.get("days_since_created", 999) <= 7)
new_30d = sum(1 for r in enriched if r.get("days_since_created", 999) <= 30)

# Load previous snapshot for comparison
slug = sdk_name.replace("@", "").replace("/", "-")
prev_repos = set()
prev_path = "docs/sdk-adopters/"
if os.path.isdir(prev_path):
    import glob
    prev_files = sorted(glob.glob(f"{prev_path}{slug}-*.json"))
    if prev_files:
        try:
            prev_data = json.load(open(prev_files[-1]))
            prev_repos = {r["full_name"] for r in prev_data.get("enriched", [])}
        except Exception:
            pass

new_since_last = len({r["full_name"] for r in enriched} - prev_repos) if prev_repos else None

tier_counts = {}
for r in enriched:
    tier_counts[r["tier"]] = tier_counts.get(r["tier"], 0) + 1

lines = [
    f"## SDK Adoption Report: {sdk_name}",
    f"Repos found: {len(enriched)} | Company repos: {tier_counts.get('company_org', 0)} | "
    f"Active (30 days): {sum(1 for r in enriched if r.get('days_since_push', 999) <= 30)} | Date: {date_str}",
    "",
    "---",
    "",
    "### Adoption Velocity",
    f"New repos last 7 days: {new_7d}",
    f"New repos last 30 days: {new_30d}",
]
if new_since_last is not None:
    lines.append(f"New since last run: {new_since_last}")
lines += ["", "---", ""]

if high_signal or medium:
    lines += ["### Top Adopters", ""]
    lines += [
        "| Rank | Repo | Stars | Tier | Score | Pushed | Language |",
        "|---|---|---|---|---|---|---|",
    ]
    for i, r in enumerate((high_signal + medium)[:15], 1):
        pushed_label = f"{r.get('days_since_push', '?')}d ago"
        lines.append(
            f"| {i} | [{r['full_name']}]({r.get('repo_url', '')}) | "
            f"{r.get('stars', 0):,} | {r['tier']} | {r['adoption_score']} | "
            f"{pushed_label} | {r.get('language', '?')} |"
        )
    lines += ["", "---", ""]

if high_signal:
    lines += ["### Adoption Briefs (score >= 80)", ""]
    for r in high_signal:
        brief = briefs_map.get(r["full_name"], {})
        prof = r.get("owner_profile", {})
        lines.append(f"#### {r['full_name']} [score: {r['adoption_score']}]")
        lines.append(f"Owner: {r['owner_login']} ({r['owner_type']})")
        lines.append(f"Stars: {r.get('stars', 0):,} | Language: {r.get('language', '?')} | "
                     f"Last pushed: {r.get('days_since_push', '?')} days ago")
        if r.get("description"):
            lines.append(f"What they're building: {r['description']}")
        lines.append(f"SDK found in: {r.get('file_path', '?')}")
        if r.get("company") and r["company"] != "not listed":
            lines.append(f"Company: {r['company']}")
        if r.get("org_website"):
            lines.append(f"Website: {r['org_website']}")
        contribs = r.get("top_contributors", [])
        if contribs:
            lines.append(f"Top contributor: @{contribs[0]['login']} ({contribs[0]['contributions']} commits)")
        lines.append("")
        if brief.get("why_reach_out"):
            lines.append(f"**Why reach out:** {brief['why_reach_out']}")
        if brief.get("suggested_message"):
            lines.append("\n**Suggested message:**")
            lines.append(f"> {brief['suggested_message']}")
        lines += ["", "---", ""]

lines += [
    "### Adoption Breakdown",
    "",
    "| Tier | Count |",
    "|---|---|",
]
for tier in ["company_org", "affiliated_dev", "solo_dev", "tutorial_noise"]:
    count = tier_counts.get(tier, 0)
    lines.append(f"| {tier} | {count} |")

lines += ["", "---", ""]
lines.append(f"Data quality notes: {'; '.join(flags) if flags else 'None'}")

output_dir = "docs/sdk-adopters"
os.makedirs(output_dir, exist_ok=True)
md_path = f"{output_dir}/{slug}-{date_str}.md"
json_path = f"{output_dir}/{slug}-{date_str}.json"

open(md_path, "w").write("\n".join(lines))
json.dump({"enriched": enriched, "briefs": output["briefs"]}, open(json_path, "w"), indent=2)

print("\n".join(lines))
print(f"\nSaved to: {md_path}")
print(f"JSON snapshot: {json_path} (used for velocity tracking on next run)")
PYEOF
```
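
The saved JSON snapshot is what the next run's "new since last run" comparison reads. If you want to inspect run-over-run movement by hand, a minimal sketch of the same set-difference logic is below. The `snapshot_diff` helper is hypothetical (not part of the script above), and the `dropped` list is an addition the script does not compute; the example file paths follow the `{slug}-{date}.json` naming convention but are made up.

```python
import json

def snapshot_diff(old_snapshot, new_snapshot):
    """Compare two saved snapshot dicts: repos that appeared vs. disappeared."""
    old_repos = {r["full_name"] for r in old_snapshot.get("enriched", [])}
    new_repos = {r["full_name"] for r in new_snapshot.get("enriched", [])}
    return {
        "new_adopters": sorted(new_repos - old_repos),
        # Dropped repos may be deleted, made private, or may have removed the SDK
        "dropped": sorted(old_repos - new_repos),
    }

# Hypothetical usage with two saved snapshot files:
# old = json.load(open("docs/sdk-adopters/my-sdk-2024-05-01.json"))
# new = json.load(open("docs/sdk-adopters/my-sdk-2024-06-01.json"))
# print(snapshot_diff(old, new))
```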

Clean up temp files:

```bash
rm -f /tmp/sat-input.json /tmp/sat-raw-results.json /tmp/sat-scored.json \
      /tmp/sat-enriched.json /tmp/sat-briefs.json /tmp/sat-output.json
```