@chainpatrol/cli 0.4.1 → 0.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md
CHANGED
@@ -1,5 +1,23 @@
 # @chainpatrol/cli
 
+## 0.5.0
+
+### Minor Changes
+
+- f8d9454: Add five new healthchecks covering the takedown pipeline and asset liveness, expanding the runnable surface for `chainpatrol healthchecks run`:
+
+  - **`takedowns.todo-volume`** — counts takedowns sitting in TODO. Default warn=50, fail=100. Catches automation gaps on new threat surfaces or manual-filing capacity issues.
+  - **`takedowns.in-progress-volume`** — counts takedowns currently IN_PROGRESS regardless of age. Default warn=30, fail=75. Complements `takedowns.stale-in-progress` (age-based) by catching submission-format / vendor-side issues *before* items go stale.
+  - **`takedowns.cancelled-count`** — counts transitions into CANCELLED from the `TakedownEvent` log over a rolling window. Default 7d / warn=3 / fail=10. Replaces the placeholder `takedowns.cancelled-rate` registry entry. Uses the event log rather than `Takedown.updatedAt` so the cancellation date is correct even if the takedown has since been edited.
+  - **`takedowns.automation-off`** — flags orgs with takedown service enabled but `isAutomatedTakedownsActive` off for too long. Default warn=30d, fail=60d. Age is derived from the most recent `SERVICES_AUTOMATED_TAKEDOWNS_UPDATED` `OrganizationEvent`, falling back to `Organization.updatedAt`. Orgs with takedown service entirely disabled are skipped.
+  - **`assets.dead-asset-spike`** — compares `DETECTED_AS_DEAD` events in the current window against the prior baseline rate; warns when the multiplier exceeds the threshold (default 24h vs 7d, ×2 warn / ×4 fail) once the current count clears the `minSpikeCount` floor. Catches liveness-checker regressions after platform changes.
+
+  The bundled CLI skill gets a rewritten **Takedowns** section covering all four takedown checks and a new top-level **Assets** section covering `dead-asset-spike`, plus guidance on the still-unimplemented `assets.dead-but-alive` / `assets.alive-but-marked-dead` (which require live HTTP probes and are not a fit for synchronous healthchecks).
+
+  The public `HealthcheckResult.category` enum gains `"assets"`. `observed` and `threshold` records now also accept booleans and null (for fields like `automatedTakedownsActive` and the nullable `ratio` on the spike check).
+
+  `HealthcheckResult` also gains an `appUrl` field (string or null) that deep-links to the relevant filtered admin page in the web app. For example, `takedowns.stale-in-progress` returns `https://app.chainpatrol.io/admin/<slug>/takedowns?takedownStatus=IN_PROGRESS&sortBy=oldest`, and `reviewing.backlog` returns `https://app.chainpatrol.io/admin/<slug>/review?excludeWatchlisted=true`. The CLI prints `View in app: <url>` after each result. Checks without a sensible filtered view (`detections.silent-configs`, `assets.dead-asset-spike`) return `null`. The base URL respects `BETTER_AUTH_URL` and falls back to `https://app.chainpatrol.io`.
+
 ## 0.4.1
 
 ### Patch Changes
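The spike grading described in the `assets.dead-asset-spike` entry can be sketched as a standalone function. This is an illustrative reconstruction from the changelog text only; the function name, the exact option names, and the handling of a null ratio are assumptions, not the package's actual internals.

```javascript
// Sketch of the dead-asset-spike severity rule (names illustrative).
// The current window's count is compared against the baseline *rate*
// scaled to the same window length, and the minSpikeCount floor
// suppresses noise on orgs with near-zero baseline activity.
function spikeSeverity(currentCount, baselineCount, opts = {}) {
  const {
    windowHours = 24,   // current observation window
    baselineDays = 7,   // prior baseline period
    warnMultiplier = 2,
    failMultiplier = 4,
    minSpikeCount = 10, // floor below which we never alert
  } = opts;

  // Baseline rate expressed as "expected events per current window".
  const expected = baselineCount * (windowHours / (baselineDays * 24));
  const ratio = expected > 0 ? currentCount / expected : null;

  if (currentCount < minSpikeCount) return { severity: "ok", ratio };
  if (ratio === null) {
    // Assumption: no baseline at all but the floor cleared => warn.
    return { severity: "warn", ratio };
  }
  if (ratio >= failMultiplier) return { severity: "fail", ratio };
  if (ratio >= warnMultiplier) return { severity: "warn", ratio };
  return { severity: "ok", ratio };
}
```

Under these assumptions, 40 DEAD events in 24h against a 7-day baseline of 70 (10 expected per day) is a ×4 spike and grades as fail, while 12 events (×1.2) grades as ok.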
@@ -321,10 +321,13 @@ chainpatrol --json healthchecks run reviewing.backlog --org <slug>
 chainpatrol --json healthchecks run --all --org <slug>
 \`\`\`
 
-Each implemented check has a stable id of the form \`category.name
-\`detections.silent-configs\`, \`reviewing.backlog\`,
+Each implemented check has a stable id of the form \`category.name\`. Implemented
+ids today: \`detections.silent-configs\`, \`reviewing.backlog\`,
 \`reviewing.old-proposals\`, \`reviewing.watchlist-backlog\`,
-\`reviewing.watchlist-old\`, \`takedowns.
+\`reviewing.watchlist-old\`, \`takedowns.todo-volume\`,
+\`takedowns.in-progress-volume\`, \`takedowns.stale-in-progress\`,
+\`takedowns.cancelled-count\`, \`takedowns.automation-off\`,
+\`assets.dead-asset-spike\`.
 
 ### Pending proposals: "Needs Review" vs "Watchlisted"
 
@@ -344,6 +347,15 @@ grade them with separate checks:
 severity is **capped at warn** so they never block on the same SLA as
 Needs Review. When reporting findings, treat these as lower-priority.
 
+Each healthcheck result includes an \`appUrl\` field (string or null) that
+deep-links to the relevant filtered admin page in the web app \u2014 e.g. the
+takedowns page filtered to IN_PROGRESS for \`takedowns.stale-in-progress\`,
+or the review page filtered to oldest pending for \`reviewing.old-proposals\`.
+**When reporting a non-OK healthcheck to the user, always surface the
+\`appUrl\` so they can jump straight to the right view.** Some checks
+(\`detections.silent-configs\`, \`assets.dead-asset-spike\`) emit \`null\`
+because no filterable list page exists for that signal yet.
+
 Implemented checks today:
 
 - **detections.silent-configs** \u2014 equivalent to \`detections healthcheck\`,
@@ -358,8 +370,27 @@ Implemented checks today:
 - **reviewing.watchlist-old** \u2014 counts watchlisted pending proposals older
 than the warn-age threshold (default 30 days) and lists the oldest.
 Severity capped at warn.
+- **takedowns.todo-volume** \u2014 counts takedowns in TODO (default warn=50,
+fail=100). Pile-ups here usually mean an automation gap on a new threat
+surface, or manual-filing capacity issues.
+- **takedowns.in-progress-volume** \u2014 counts takedowns currently IN_PROGRESS
+regardless of age (default warn=30, fail=75). Complements
+\`stale-in-progress\` \u2014 a high count signals vendor-side or
+submission-format problems even before items go stale.
 - **takedowns.stale-in-progress** \u2014 counts takedowns sitting in IN_PROGRESS
 past the staleness threshold (default 7 days) and lists the oldest.
+- **takedowns.cancelled-count** \u2014 counts CANCELLED transitions from the
+TakedownEvent log over a rolling window (default 7d, warn=3, fail=10).
+Cancellations should be rare; a spike usually means a proposal-funnel
+quality problem or misuse of the CANCELLED status.
+- **takedowns.automation-off** \u2014 flags orgs with takedown service enabled
+but \`isAutomatedTakedownsActive\` off for too long (default warn=30d,
+fail=60d). Skipped for orgs with takedown service entirely disabled.
+- **assets.dead-asset-spike** \u2014 compares DEAD-detection events in the
+current window against the prior baseline rate; warns on a multiplier
+exceeding the threshold (default 24h vs 7d, \xD72 warn / \xD74 fail) once the
+current count clears the \`minSpikeCount\` floor. Catches liveness-checker
+regressions after platform changes.
 
 The following checks are listed by \`healthchecks list\` (\`implemented: false\`)
 but **not yet implemented on the backend** \u2014 when the agent surfaces them in
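Most of the count-based checks in the list above share one grading shape: an observed count compared against warn/fail thresholds. A minimal sketch, assuming the inclusive comparison the default values imply (e.g. "3 cancellations = warn"); the real grading code is inside the compiled chunks and may differ:

```javascript
// Generic count-vs-threshold grading shared by the volume-style checks
// described above (todo-volume, in-progress-volume, cancelled-count).
// Assumption: thresholds are inclusive, so hitting warn exactly grades warn.
function gradeCount(count, warnThreshold, failThreshold) {
  if (count >= failThreshold) return "fail";
  if (count >= warnThreshold) return "warn";
  return "ok";
}
```

With the todo-volume defaults, `gradeCount(50, 50, 100)` grades warn and `gradeCount(100, 50, 100)` grades fail.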
@@ -373,8 +404,11 @@ a healthcheck report, mark them explicitly as "manual check, no API yet":
 human approvers in the review history. Use \`metrics breakdown\` as a proxy.
 - **blocklisting.gsb-cancelled-rate** \u2014 Google Safe Browsing submission state
 is not yet exposed in the public API.
-- **
-
+- **assets.dead-but-alive** / **assets.alive-but-marked-dead** \u2014 require live
+HTTP probes against asset URLs, which is not a synchronous-healthcheck
+shape. Until a dedicated probe command exists, sample a handful manually
+from \`assets list --status DEAD\` (or ALIVE) and verify in a browser. Notify
+ChainPatrol engineering if the liveness checker looks miscalibrated.
 
 When the user asks to "run a healthcheck on org X", the canonical command is:
 
@@ -775,19 +809,23 @@ exposed in the public API, this remains a manual / engineering-team check.
 
 ### Takedowns
 
+The takedown pipeline has three stages \u2014 TODO (queued, not yet filed),
+IN_PROGRESS (filed, waiting on vendor / customer / refile), and a terminal
+state (COMPLETED or CANCELLED). Healthchecks cover pile-ups at each stage,
+plus quality/configuration issues.
+
 #### Too Many Takedowns in ToDo
 
 Can mean a gap in automated takedowns not being implemented for some new area
 of threats. It can also mean the areas that require manual takedowns are
 being missed by the takedown team.
 
-**Run via CLI:** **
-in
-
-
-
-
-\`metrics summary\` for that baseline) is the finding.
+**Run via CLI:** **Implemented as \`healthchecks run takedowns.todo-volume\`.**
+Counts takedowns sitting in TODO (default warn=50, fail=100). For raw
+breakdowns by type, cross-reference with
+\`chainpatrol --json metrics breakdown --org <slug> --by assetType\` \u2014
+items piled up on a specific platform usually point at an automation
+gap there.
 
 #### Too Many Takedowns In Progress
 
@@ -795,12 +833,16 @@ Typically means something is wrong with the submission itself. The takedown
 may need to be resubmitted, the vendor asked for more evidence, or we may
 have submitted it in the wrong place.
 
-**Run via CLI:**
-
-
-
-
-
+**Run via CLI:** Two checks, complementary:
+
+- **\`healthchecks run takedowns.in-progress-volume\`** \u2014 counts all
+IN_PROGRESS takedowns regardless of age (default warn=30, fail=75).
+Catches a vendor-side or submission-format problem before items go
+stale.
+- **\`healthchecks run takedowns.stale-in-progress\`** \u2014 counts IN_PROGRESS
+takedowns past a staleness threshold (default 7 days), lists the oldest
+offenders. Any non-zero value is worth investigating; a growing count
+across snapshots strongly suggests vendor-side or format issues.
 
 #### Too Many Cancelled Takedowns
 
@@ -809,14 +851,13 @@ do this takedown" for some reason. Cases like adding an item to the blocklist
 when it's already taken down are treated as completed, not cancelled. So even
 3 cancelled takedowns in a 7-day period is too many.
 
-**Run via CLI:** **
-
-
-
-
-
-
-cancellations.
+**Run via CLI:** **Implemented as \`healthchecks run takedowns.cancelled-count\`.**
+Counts transitions into the CANCELLED status from the TakedownEvent log
+within the lookback window (default 7 days, warn=3, fail=10). The check
+uses the event log rather than \`Takedown.updatedAt\` so it correctly
+attributes the cancellation date even if the takedown has since been
+edited. A spike usually means a quality problem in the proposal funnel
+or the CANCELLED status being used as a catch-all.
 
 #### Automated Takedowns Turned Off for Over 30 Days
 
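The event-log windowing that both `cancelled-count` descriptions rely on can be illustrated with a self-contained sketch (field names like `toStatus`/`createdAt` are assumptions for illustration; the real schema is internal):

```javascript
// Illustrative: count CANCELLED transitions from an event log within a
// rolling window. Keying on the event timestamp (not the takedown's
// updatedAt) means a later edit to the takedown cannot shift its
// cancellation into or out of the window.
function cancelledSeverity(events, now, { lookbackDays = 7, warn = 3, fail = 10 } = {}) {
  const windowStart = now - lookbackDays * 24 * 60 * 60 * 1000;
  const count = events.filter(
    (e) => e.toStatus === "CANCELLED" && e.createdAt >= windowStart && e.createdAt <= now
  ).length;
  if (count >= fail) return { severity: "fail", count };
  if (count >= warn) return { severity: "warn", count };
  return { severity: "ok", count };
}
```

With the defaults, three in-window cancellations already grade as warn, matching the "even 3 in a 7-day period is too many" guidance above.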
@@ -824,12 +865,56 @@ Automated takedowns should be on by default for nearly every organization.
 Any issue that would make you want to turn off automated takedowns should be
 resolved within 30 days.
 
-**Run via CLI:** **
-
-
-
-
-
+**Run via CLI:** **Implemented as \`healthchecks run takedowns.automation-off\`.**
+Checks \`Organization.isAutomatedTakedownsActive\` and derives the
+off-duration from the most recent \`SERVICES_AUTOMATED_TAKEDOWNS_UPDATED\`
+entry in \`OrganizationEvent\` (default warn 30 days, fail 60 days). Orgs
+with takedown service entirely disabled (\`isTakedownsActive=0\`) are
+skipped \u2014 automation being off is implied in that case.
+
+### Assets
+
+Healthchecks on the asset model \u2014 specifically asset liveness state, which
+the takedown team depends on for follow-up.
+
+#### Spike in Recently Dead Assets
+
+A sudden spike in DEAD detections can be a good signal (takedowns or platform
+moderation working) but it can also mean the liveness checker is
+misclassifying assets after a platform change, captcha rollout, or anti-bot
+update. If many assets become dead at once, sample a few manually.
+
+**Run via CLI:** **Implemented as \`healthchecks run assets.dead-asset-spike\`.**
+Compares \`DETECTED_AS_DEAD\` events in the current window (default 24h)
+against the baseline rate from the prior \`baselineDays\` (default 7d).
+Severity fires only when the current count clears \`minSpikeCount\` (default
+10) AND exceeds the multiplier (default warn \xD72, fail \xD74). The
+\`minSpikeCount\` floor suppresses noise on orgs with near-zero baseline
+activity. When this fires, pull a sample of recent DEAD assets, verify a
+few in a browser, and notify ChainPatrol engineering if the sample is
+clearly still live.
+
+#### Assets Marked Dead but Still Online / Assets Not Marked Dead Even Though They Are Down
+
+These two opposite failure modes are **not implemented as healthchecks**
+(listed as \`assets.dead-but-alive\` and \`assets.alive-but-marked-dead\` with
+\`implemented: false\`). Both require live HTTP probes against asset URLs,
+which is not a synchronous-healthcheck shape.
+
+Until a dedicated probe command exists:
+
+- For "marked dead but still alive": sample a handful of recently-DEAD
+assets, open them in a browser, and watch for any that load. Common
+causes: bot protection, geo-blocking, rate limits, or liveness logic
+that does not handle the asset type correctly.
+- For "not marked dead even though down": after a known takedown event, sample
+assets that *should* be dead but are still marked alive. Common causes:
+cached responses, soft-404 pages, parked-domain redirects, platform
+suspension pages still returning 200.
+
+In both cases, if liveness looks miscalibrated for a class of assets,
+notify ChainPatrol engineering \u2014 the checker likely needs a tuning pass for
+that platform.
 \`;
 }
 function getBundledSkillVersion() {
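The `automation-off` duration rule described above (toggle-event timestamp with a fallback to `Organization.updatedAt`, and skipping orgs whose takedown service is off entirely) can be sketched standalone. Function and parameter names here are illustrative; only the field names and defaults come from the text:

```javascript
// Illustrative sketch of the automation-off grading described above.
// "Off since" prefers the most recent automation-toggle event timestamp
// and falls back to the org's updatedAt when no such event exists.
function automationOffSeverity(org, lastToggleEventAt, now, { warnDays = 30, failDays = 60 } = {}) {
  if (!org.isTakedownsActive) return { severity: "skip" }; // service disabled entirely
  if (org.isAutomatedTakedownsActive) return { severity: "ok" };
  const offSince = lastToggleEventAt ?? org.updatedAt; // fallback per the changelog
  const offDays = (now - offSince) / (24 * 60 * 60 * 1000);
  if (offDays >= failDays) return { severity: "fail", offDays };
  if (offDays >= warnDays) return { severity: "warn", offDays };
  return { severity: "ok", offDays };
}
```

So an org toggled off 45 days ago grades warn, and one with no toggle event whose record was last updated 90 days ago grades fail via the fallback.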
package/dist/cli.js
CHANGED
@@ -13,7 +13,7 @@ import {
 getCliVersion,
 isSkillInstalled,
 readInstalledSkillVersion
-} from "./chunk-
+} from "./chunk-BSK4YHFA.js";
 import "./chunk-IUZB3DQW.js";
 import {
 DateTime
@@ -1049,7 +1049,7 @@ async function main() {
 thresholds.minResults = cli.flags.minResults;
 if (cli.flags.lookbackHours !== void 0)
 thresholds.lookbackHours = cli.flags.lookbackHours;
-const { runHealthchecksRun } = await import("./run-
+const { runHealthchecksRun } = await import("./run-64SBCL4R.js");
 await runHealthchecksRun({
 org,
 id: action,
@@ -1095,12 +1095,12 @@ async function main() {
 case "setup":
 case "install":
 case "i": {
-const { setupSkill } = await import("./setup-skill-
+const { setupSkill } = await import("./setup-skill-NQIZBJMR.js");
 setupSkill({ json: jsonMode });
 break;
 }
 case "uninstall": {
-const { uninstallSkill } = await import("./setup-skill-
+const { uninstallSkill } = await import("./setup-skill-NQIZBJMR.js");
 uninstallSkill({ json: jsonMode });
 break;
 }
@@ -27,7 +27,12 @@ function buildPayload(entry, org, thresholds) {
 "failThreshold",
 "warnAgeHours",
 "failAgeHours",
-"staleThresholdHours"
+"staleThresholdHours",
+"windowHours",
+"baselineDays",
+"warnMultiplier",
+"failMultiplier",
+"minSpikeCount"
 ]);
 for (const [key, value] of Object.entries(thresholds)) {
 if (allowedKeys.has(key)) {
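The `buildPayload` hunk above extends a threshold-key allowlist: only recognized keys are copied into the request payload, so unrecognized flags are dropped rather than sent upstream. A simplified standalone version of that filter (the surrounding payload assembly is omitted):

```javascript
// Simplified version of the allowlist filtering in buildPayload above:
// unknown threshold keys are silently dropped from the payload.
const allowedKeys = new Set([
  "warnThreshold",
  "failThreshold",
  "warnAgeHours",
  "failAgeHours",
  "staleThresholdHours",
  // keys added in 0.5.0 for the dead-asset-spike check:
  "windowHours",
  "baselineDays",
  "warnMultiplier",
  "failMultiplier",
  "minSpikeCount",
]);

function filterThresholds(thresholds) {
  const out = {};
  for (const [key, value] of Object.entries(thresholds)) {
    if (allowedKeys.has(key)) out[key] = value;
  }
  return out;
}
```

For example, `filterThresholds({ windowHours: 24, bogus: 1 })` keeps `windowHours` and drops `bogus`.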
@@ -150,6 +155,9 @@ async function runHealthchecksRun(options) {
 if (entry.result.suggestedAction) {
 lines.push(`- Suggested action: ${entry.result.suggestedAction}`);
 }
+if (entry.result.appUrl) {
+lines.push(`- View in app: ${entry.result.appUrl}`);
+}
 return lines.join("\n");
 }),
 ...errors.map((entry) => `## ${entry.entry.id} \u2014 ERROR
@@ -194,6 +202,10 @@ async function runHealthchecksRun(options) {
 console.log(`Suggested action for ${entry.result.id}:`);
 console.log(` ${entry.result.suggestedAction}`);
 }
+if (entry.result.appUrl) {
+console.log("");
+console.log(`View in app: ${entry.result.appUrl}`);
+}
 }
 }
 });
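The `appUrl` values printed by the hunks above are described in the changelog as filtered admin deep-links whose base respects `BETTER_AUTH_URL` and falls back to `https://app.chainpatrol.io`. A hypothetical sketch of how such a link could be assembled; the helper name and path/param plumbing are assumptions, and only the example URL shape is taken from the changelog:

```javascript
// Hypothetical appUrl assembly per the changelog's description.
// Checks without a filterable admin view pass a null path and get null back.
function buildAppUrl(slug, path, params) {
  if (!path) return null;
  const base = process.env.BETTER_AUTH_URL ?? "https://app.chainpatrol.io";
  const url = new URL(`/admin/${slug}/${path}`, base);
  for (const [k, v] of Object.entries(params ?? {})) url.searchParams.set(k, v);
  return url.toString();
}
```

Under these assumptions, `buildAppUrl("acme", "takedowns", { takedownStatus: "IN_PROGRESS", sortBy: "oldest" })` reproduces the stale-in-progress example URL from the changelog.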
package/package.json
CHANGED
@@ -2,7 +2,7 @@
 "name": "@chainpatrol/cli",
 "description": "The official ChainPatrol CLI — terminal interface for threat detection",
 "author": "Umar Ahmed <umar@chainpatrol.io>",
-"version": "0.
+"version": "0.5.0",
 "license": "UNLICENSED",
 "homepage": "https://chainpatrol.com/docs/cli",
 "keywords": [