@bilalimamoglu/sift 0.2.3 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,10 +2,17 @@
 
  <img src="assets/brand/sift-logo-badge-monochrome.svg" alt="sift logo" width="88" />
 
- `sift` is a small CLI that runs a noisy shell command, keeps the useful signal, and returns a much smaller answer.
+ `sift` turns a long terminal wall of text into a short answer you can act on.
 
- It is a good fit when you want an agent or CI job to understand:
- - test results
+ Think of it like this:
+ - `standard` = map
+ - `focused` or `rerun --remaining` = zoom
+ - raw traceback = last resort
+
+ It is a good fit when a human, agent, or CI job needs the answer faster than it needs the whole log.
+
+ Common uses:
+ - test failures
  - typecheck failures
  - lint failures
  - build logs
@@ -13,10 +20,10 @@ It is a good fit when you want an agent or CI job to understand:
  - `npm audit`
  - `terraform plan`
 
- It is not a good fit when you need:
- - the exact raw log as the main output
- - interactive or TUI commands
- - shell behavior that depends on raw command output
+ Do not use it when:
+ - the exact raw log is the main thing you need
+ - the command is interactive or TUI-based
+ - shell behavior depends on exact raw command output
 
  ## Install
 
@@ -57,43 +64,130 @@ Then check it:
  sift doctor
  ```
 
- ## Quick start
+ ## Start here
+
+ The default path is simple:
+ 1. run the noisy command through `sift`
+ 2. read the short `standard` answer first
+ 3. only zoom in if `standard` clearly shows that more detail is worth it
+
+ Examples:
 
  ```bash
  sift exec "what changed?" -- git diff
- sift exec --preset test-status -- npm test
+ sift exec --preset test-status -- pytest -q
+ sift rerun
+ sift rerun --remaining --detail focused
+ sift rerun --remaining --detail verbose --show-raw
+ sift watch "what changed between cycles?" < watcher-output.txt
+ sift exec --watch "what changed between cycles?" -- node watcher.js
  sift exec --preset typecheck-summary -- npm run typecheck
  sift exec --preset lint-failures -- eslint .
  sift exec --preset audit-critical -- npm audit
  sift exec --preset infra-risk -- terraform plan
+ sift agent install codex --dry-run
  ```
 
- ## The main workflow
+ ## Simple workflow
 
- `sift exec` is the default path:
+ For most repos, this is the whole story:
 
  ```bash
- sift exec "what changed?" -- git diff
- sift exec --preset test-status -- npm test
- sift exec --preset test-status --show-raw -- npm test
- sift exec --preset test-status --detail focused -- npm test
- sift exec --preset test-status --detail verbose -- npm test
+ sift exec --preset test-status -- <test command>
+ sift rerun
+ sift rerun --remaining --detail focused
  ```
 
+ Mental model (a command sketch follows the list):
+ - `sift escalate` = same cached output, deeper render
+ - `sift rerun` = rerun the cached full command at `standard` and prepend what resolved, remained, or changed
+ - `sift rerun --remaining` = rerun only the remaining failing pytest node IDs for a zoomed-in view
+ - `sift watch` / `sift exec --watch` = treat redraw-style output as cycles and summarize what changed
+ - `Decision: stop and act` = trust the current diagnosis and go read or fix code
+ - `Decision: zoom` = one deeper sift pass is justified before raw
+ - `Decision: raw only if exact traceback is required` = raw is last resort, not the next default step
+
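+ A minimal sketch of that ladder as commands (the pytest invocation is just an example runner):
+
+ ```bash
+ sift exec --preset test-status -- pytest -q   # run once; sift caches the output
+ sift escalate                                 # deeper render of the same cached output, no rerun
+ sift rerun                                    # rerun the full command; note what resolved or remained
+ sift rerun --remaining --detail focused       # rerun only the remaining failing subset
+ ```
+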
  If your project uses `pytest`, `vitest`, `jest`, `bun test`, or another test runner instead of `npm test`, use the same preset with that command.
 
- What happens:
- 1. `sift` runs the command
+ What `sift` does in `exec` mode:
+ 1. runs the child command
  2. captures `stdout` and `stderr`
- 3. trims the noise
- 4. sends a smaller input to the model
- 5. prints a short answer or JSON
- 6. preserves the child command exit code in `exec` mode
+ 3. keeps the useful signal
+ 4. returns a short answer or JSON
+ 5. preserves the child command exit code (sketched below)
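+
+ Because step 5 preserves the child exit code, a wrapped test step still fails CI exactly when the raw command would. A minimal sketch (the runner is just an example):
+
+ ```bash
+ sift exec --preset test-status -- pytest -q
+ echo "pytest exit code: $?"   # same code pytest itself returned
+ ```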
 
  Useful debug flags:
  - `--dry-run`: show the reduced input and prompt without calling the provider
  - `--show-raw`: print the captured raw input to `stderr`
 
+ ## When tests fail
+
+ Start with the map:
+
+ ```bash
+ sift exec --preset test-status -- <test command>
+ ```
+
+ If `standard` already names the main failure buckets, counts, and hints, stop there and read code.
+
+ Then use this order:
+ 1. `sift exec --preset test-status -- <test command>`
+ 2. `sift rerun`
+ 3. `sift rerun --remaining --detail focused`
+ 4. `sift rerun --remaining --detail verbose`
+ 5. `sift rerun --remaining --detail verbose --show-raw`
+ 6. raw pytest only if exact traceback lines are still needed
+
+ The normal stop budget is `standard` first, then at most one zoom step before raw.
+
+ If you want the older explicit compare shape, `sift exec --preset test-status --diff -- <test command>` still works. `sift rerun` is the shorter normal path for the same idea.
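+
+ Side by side (the test command is a placeholder):
+
+ ```bash
+ sift exec --preset test-status --diff -- <test command>   # older explicit compare
+ sift rerun                                                # shorter normal path, same idea
+ ```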
+
+ ## Diagnose JSON
+
+ Most of the time, you do not need JSON. Start with text first.
+
+ If `standard` already shows bucket-level root cause, `Anchor`, and `Fix`, do not re-verify the same bucket with raw pytest. Do at most one targeted source read before you edit.
+
+ Use diagnose JSON only when automation or machine branching really needs it:
+
+ ```bash
+ sift exec --preset test-status --goal diagnose --format json -- pytest -q
+ sift rerun --goal diagnose --format json
+ sift watch --preset test-status --goal diagnose --format json < pytest-watch.txt
+ ```
+
+ Default diagnose JSON is summary-first (a hypothetical shape follows the list):
+ - `remaining_summary` and `resolved_summary` keep the answer small
+ - `read_targets` points to the first file or line worth reading
+ - `read_targets.context_hint` can tell an agent to read only a small line window first
+ - if `context_hint` only includes `search_hint`, search for that string before reading the whole file
+ - `remaining_subset_available` tells you whether `sift rerun --remaining` can zoom safely
+
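+ A hypothetical shape, using only the fields named above (values and nesting are invented for illustration, not the exact schema):
+
+ ```json
+ {
+   "remaining_summary": "1 bucket remaining: missing test env var",
+   "resolved_summary": "import/dependency bucket resolved",
+   "read_targets": [
+     {
+       "path": "path/to/test_setup.py",
+       "context_hint": { "search_hint": "TEST_ENV_VAR" }
+     }
+   ],
+   "remaining_subset_available": true
+ }
+ ```
+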
+ If an agent truly needs every raw failing test ID, opt in:
+
+ ```bash
+ sift exec --preset test-status --goal diagnose --format json --include-test-ids -- pytest -q
+ ```
+
+ `--goal diagnose --format json` is currently supported only for `test-status`, `rerun`, and `test-status` watch flows.
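+
+ One way automation might branch on that output (a sketch; assumes `jq` is installed and that the JSON arrives on stdout, and uses only the `remaining_subset_available` field from the list above):
+
+ ```bash
+ out="$(sift rerun --goal diagnose --format json)"
+ if [ "$(printf '%s' "$out" | jq -r '.remaining_subset_available')" = "true" ]; then
+   sift rerun --remaining --detail focused   # subset zoom is safe here
+ fi
+ ```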
+
+ ## Watch mode
+
+ Use watch mode when command output redraws or repeats and you care about cycle-to-cycle change summaries more than the raw stream:
+
+ ```bash
+ sift watch "what changed between cycles?" < watcher-output.txt
+ sift exec --watch "what changed between cycles?" -- node watcher.js
+ sift exec --watch --preset test-status -- pytest -f
+ ```
+
+ `sift watch` keeps the current summary and change summary together:
+ - cycle 1 = current state
+ - later cycles = what changed, what resolved, what stayed, and the next best action
+ - for `test-status`, resolved tests drop out and remaining failures stay in focus
+
+ If the stream clearly looks like a redraw/watch session, `sift` can auto-switch to watch handling, printing a short stderr note when it does.
+
  ## `test-status` detail modes
 
  If you are running `npm test` and want `sift` to check the result, use `--preset test-status`.
@@ -116,48 +210,88 @@ Examples:
 
  ```bash
  sift exec --preset test-status -- npm test
- sift exec --preset test-status --detail focused -- npm test
- sift exec --preset test-status --detail verbose -- npm test
- sift exec --preset test-status --detail verbose --show-raw -- npm test
+ sift rerun
+ sift rerun --remaining --detail focused
+ sift rerun --remaining --detail verbose
+ sift rerun --remaining --detail verbose --show-raw
  ```
 
  If you use a different runner, swap in your command:
 
  ```bash
  sift exec --preset test-status -- pytest
- sift exec --preset test-status --detail focused -- vitest
- sift exec --preset test-status --detail verbose -- bun test
+ sift rerun
+ sift rerun --remaining --detail focused
+ sift rerun --remaining --detail verbose --show-raw
  ```
 
+ `sift rerun --remaining` currently supports only cached argv-mode `pytest ...` or `python -m pytest ...` runs. If the cached command is not subset-capable, run a narrowed pytest command manually with `sift exec --preset test-status -- <narrowed pytest command>`.
+
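+ The manual fallback named above, concretely (paths and the node ID are placeholders in pytest's `file::test` form):
+
+ ```bash
+ sift exec --preset test-status -- pytest path/to/test_a.py::test_one path/to/test_b.py
+ ```
+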
  Typical shapes:
 
  `standard`
  ```text
  - Tests did not complete.
  - 114 errors occurred during collection.
- - Most failures are import/dependency errors during test collection.
- - Missing modules include pydantic, fastapi, botocore, PIL, httpx, numpy.
+ - Import/dependency blocker: repeated collection failures are caused by missing dependencies.
+ - Anchor: path/to/failing_test.py
+ - Fix: Install the missing dependencies and rerun the affected tests.
+ - Decision: stop and act. Do not escalate unless you need exact traceback lines.
+ - Next: Fix bucket 1 first, then rerun the full suite at standard.
+ - Stop signal: diagnosis complete; raw not needed.
+ ```
+
+ `standard` can also separate more than one failure family in a single pass:
+ ```text
+ - Tests did not pass.
+ - 3 tests failed. 124 errors occurred.
+ - Shared blocker: DB-isolated tests are missing a required test env var.
+ - Anchor: search <TEST_ENV_VAR> in path/to/test_setup.py
+ - Fix: Set the required test env var and rerun the suite.
+ - Contract drift: snapshot expectations are out of sync with the current API or model state.
+ - Anchor: search <route-or-entity> in path/to/freeze_test.py
+ - Fix: Review the drift and regenerate the snapshots if the change is intentional.
+ - Decision: stop and act. Do not escalate unless you need exact traceback lines.
+ - Next: Fix bucket 1 first, then rerun the full suite at standard. Secondary buckets are already visible behind it.
+ - Stop signal: diagnosis complete; raw not needed.
  ```
 
  `focused`
  ```text
  - Tests did not complete.
  - 114 errors occurred during collection.
- - import/dependency errors during collection
- - tests/unit/test_auth_refresh.py -> missing module: botocore
- - tests/unit/test_cognito.py -> missing module: pydantic
- - and 103 more failing modules
+ - Import/dependency blocker: missing dependencies are blocking collection.
+ - Missing modules include <module-a>, <module-b>.
+ - path/to/test_a.py -> missing module: <module-a>
+ - path/to/test_b.py -> missing module: <module-b>
+ - Hint: Install the missing dependencies and rerun the affected tests.
+ - Next: Fix bucket 1 first, then rerun the full suite at standard.
+ - Stop signal: diagnosis complete; raw not needed.
  ```
 
  `verbose`
  ```text
  - Tests did not complete.
  - 114 errors occurred during collection.
- - tests/unit/test_auth_refresh.py -> missing module: botocore
- - tests/unit/test_cognito.py -> missing module: pydantic
- - tests/unit/test_dataset_use_case_facade.py -> missing module: fastapi
+ - Import/dependency blocker: missing dependencies are blocking collection.
+ - path/to/test_a.py -> missing module: <module-a>
+ - path/to/test_b.py -> missing module: <module-b>
+ - path/to/test_c.py -> missing module: <module-c>
+ - Hint: Install the missing dependencies and rerun the affected tests.
+ - Next: Fix bucket 1 first, then rerun the full suite at standard.
+ - Stop signal: diagnosis complete; raw not needed.
  ```
 
+ Recommended debugging order for tests:
+ 1. Use `standard` for the full suite first.
+ 2. Treat `standard` as the map. If it already shows bucket-level root cause, `Anchor`, and `Fix`, trust it and report or act from there directly.
+ 3. Use `sift escalate` only when you want a deeper render of the same cached output without rerunning the command.
+ 4. After fixing something, run `sift rerun` to refresh the full-suite truth at `standard`.
+ 5. Only then zoom in with `sift rerun --remaining --detail focused`.
+ 6. Then use `sift rerun --remaining --detail verbose`.
+ 7. Then use `sift rerun --remaining --detail verbose --show-raw`.
+ 8. Fall back to the raw pytest command only if you still need exact traceback lines for the remaining failing subset.
+
  ## Built-in presets
 
  - `test-status`: summarize test runs
@@ -176,6 +310,55 @@ sift presets list
  sift presets show test-status
  ```
 
+ ## Agent setup
+
+ If you want Codex or Claude Code to use `sift` by default, let `sift` install a managed instruction block for you.
+
+ Repo scope is the default because it is safer:
+
+ ```bash
+ sift agent show codex
+ sift agent show codex --raw
+ sift agent install codex --dry-run
+ sift agent install codex --dry-run --raw
+ sift agent install codex
+ sift agent install claude
+ ```
+
+ You can also install machine-wide instructions explicitly:
+
+ ```bash
+ sift agent install codex --scope global
+ sift agent install claude --scope global
+ ```
+
+ Useful commands:
+
+ ```bash
+ sift agent status
+ sift agent remove codex
+ sift agent remove claude
+ ```
+
+ `sift agent show ...` is a preview. It also tells you whether the managed block is already installed in the current scope.
+
+ What the installer does:
+ - writes to `AGENTS.md` or `CLAUDE.md` by default in the current repo
+ - uses marked managed blocks instead of rewriting the whole file
+ - preserves your surrounding notes and instructions
+ - can use global files when you explicitly choose `--scope global`
+ - keeps previews short by default
+ - shows the exact managed block or final dry-run content only with `--raw`
+
+ What the managed block tells the agent (sketched after this list):
+ - start with `sift` for long non-interactive command output so the agent spends less of its context window and token budget on raw logs
+ - for tests, begin with the normal `test-status` summary
+ - if `standard` already identifies the main buckets, stop there instead of escalating automatically
+ - use `sift escalate` only for the same cached output when more detail is needed without rerunning the command
+ - after a fix, refresh the truth with `sift rerun`
+ - only then zoom into the remaining failing pytest subset with `sift rerun --remaining --detail focused`, then `verbose`, then `--show-raw`
+ - fall back to the raw test command only when exact traceback lines are still needed
+
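+ A hypothetical sketch of the managed-block idea in `AGENTS.md` (the marker text and wording below are invented placeholders, not the real block):
+
+ ```text
+ <!-- sift:managed:start (placeholder marker, not the real one) -->
+ Prefer `sift exec` for long, noisy, non-interactive command output.
+ For tests: start with `sift exec --preset test-status -- <test command>` and stop at standard when it already names the buckets.
+ <!-- sift:managed:end (placeholder marker) -->
+ ```
+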
  ## CI-friendly usage
 
  Some commands technically succeed but should still block CI. `--fail-on` handles that for the built-in semantic presets that have stable machine-readable output:
@@ -210,6 +393,22 @@ Config precedence:
  4. machine-wide `~/.config/sift/config.yaml` or `~/.config/sift/config.yml`
  5. built-in defaults
 
+ ## Maintainer benchmark
+
+ To compare raw pytest output against the `test-status` reduction ladder on fixed fixtures, run:
+
+ ```bash
+ npm run bench:test-status-ab
+ npm run bench:test-status-live
+ ```
+
+ This uses the real `o200k_base` tokenizer and reports:
+ - command-output budget as the primary benchmark
+ - deterministic recipe-budget comparisons as supporting evidence only
+ - live-session scorecards for captured mixed full-suite agent transcripts
+
+ The benchmark is meant to show context-window and command-output reduction first. In normal debugging flows, `test-status` should usually stop at `standard`; `focused` and `verbose` are escalation tools, and raw pytest is the last resort when exact traceback evidence is still needed.
+
  If you pass `--config <path>`, that path is strict. Missing explicit config paths are errors.
 
  Minimal config example:
@@ -253,15 +452,6 @@ Known compatible env fallbacks include:
  - `TOGETHER_API_KEY`
  - `GROQ_API_KEY`
 
- ## Agent usage
-
- The simple rule is:
- - use `sift exec` for long, noisy, non-interactive command output
- - skip `sift` when exact raw output matters
-
- For Codex, put that rule in `~/.codex/AGENTS.md`.
- For Claude Code, put the same rule in `CLAUDE.md`.
-
  ## Safety and limits
 
  - redaction is optional and regex-based