@bilalimamoglu/sift 0.2.1 → 0.2.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +163 -69
- package/dist/cli.js +1321 -252
- package/dist/index.d.ts +29 -1
- package/dist/index.js +569 -55
- package/package.json +4 -2
package/README.md
CHANGED
# sift

<img src="assets/brand/sift-logo-badge-monochrome.svg" alt="sift logo" width="88" />

`sift` is a small CLI that runs a noisy shell command, keeps the useful signal, and returns a much smaller answer.

It is a good fit when you want an agent or CI job to understand:

- test results
- typecheck failures
- lint failures
- build logs
- `git diff`
- `npm audit`
- `terraform plan`

It is not a good fit when you need:

- the exact raw log as the main output
- interactive or TUI commands
- shell behavior that depends on raw command output

## Install

Requires Node.js 20 or later.

```bash
npm install -g @bilalimamoglu/sift
```

## One-time setup

The easiest setup path is:

```bash
sift config setup
```

That writes a machine-wide config to:

```text
~/.config/sift/config.yaml
```

After that, any terminal on the machine can use `sift`. A repo-local config can still override it later.

If you prefer manual setup, this is the smallest useful OpenAI setup:

```bash
export SIFT_PROVIDER=openai
export SIFT_BASE_URL=https://api.openai.com/v1
export SIFT_MODEL=gpt-5-nano
export OPENAI_API_KEY=your_openai_api_key
```

Then check it:

```bash
sift doctor
```

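Before reaching for `sift doctor`, a plain-shell sanity check of the manual-setup variables can catch typos early. This is a sketch, not part of `sift`; the variable names are the ones exported above.

```shell
# Report any of the manual-setup variables that are still unset or empty.
missing=""
for v in SIFT_PROVIDER SIFT_BASE_URL SIFT_MODEL OPENAI_API_KEY; do
  [ -n "$(printenv "$v")" ] || missing="$missing $v"
done
[ -z "$missing" ] && echo "env ok" || echo "missing:$missing"
```

`sift doctor` remains the authoritative check; this only verifies the environment is populated.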
## Quick start

```bash
sift exec "what changed?" -- git diff
sift exec --preset test-status -- npm test
sift exec --preset typecheck-summary -- npm run typecheck
sift exec --preset lint-failures -- eslint .
sift exec --preset audit-critical -- npm audit
sift exec --preset infra-risk -- terraform plan
```

## The main workflow

`sift exec` is the default path:

```bash
sift exec "what changed?" -- git diff
sift exec --preset test-status -- npm test
sift exec --preset test-status --show-raw -- npm test
sift exec --preset test-status --detail focused -- npm test
sift exec --preset test-status --detail verbose -- npm test
```

If your project uses `pytest`, `vitest`, `jest`, `bun test`, or another test runner instead of `npm test`, use the same preset with that command.

What happens:

1. `sift` runs the command
2. captures `stdout` and `stderr`
3. trims the noise
4. sends a smaller input to the model
5. prints a short answer or JSON
6. preserves the child command exit code in `exec` mode

Useful debug flags:

- `--dry-run`: show the reduced input and prompt without calling the provider
- `--show-raw`: print the captured raw input to `stderr`

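The exit-code step can be sketched in plain shell. This is a conceptual sketch of the contract, not sift's implementation, and `run_and_keep_code` is a made-up name:

```shell
# Capture combined output, print a reduced view, and still return
# the child command's exit code (the contract `sift exec` preserves).
run_and_keep_code() {
  out=$("$@" 2>&1)                   # capture stdout+stderr
  code=$?                            # remember the child's exit code
  printf '%s\n' "$out" | tail -n 1   # stand-in for the summarized answer
  return $code                       # propagate the code for CI
}

run_and_keep_code sh -c 'echo noise; echo boom; exit 3'
echo "exit=$?"   # exit=3
```

Because the child's code is propagated, a failing `npm test` still fails the surrounding CI step even though the printed output is the summary, not the raw log.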
## `test-status` detail modes

If you are running `npm test` and want `sift` to check the result, use `--preset test-status`.

`test-status` becomes test-aware because you chose the preset. It does **not** infer “this is a test command” from `pytest`, `vitest`, `npm test`, or any other runner name.

Available detail levels:

- `standard`
  - short default summary
  - no file list
- `focused`
  - groups failures by error type
  - shows a few representative failing tests or modules
- `verbose`
  - flat list of visible failing tests or modules and their normalized reason
  - useful when Codex needs to know exactly what to fix first

Examples:

```bash
sift exec --preset test-status -- npm test
sift exec --preset test-status --detail focused -- npm test
sift exec --preset test-status --detail verbose -- npm test
sift exec --preset test-status --detail verbose --show-raw -- npm test
```

If you use a different runner, swap in your command:

```bash
sift exec --preset test-status -- pytest
sift exec --preset test-status --detail focused -- vitest
sift exec --preset test-status --detail verbose -- bun test
```

Typical shapes:

`standard`

```text
- Tests did not complete.
- 114 errors occurred during collection.
- Most failures are import/dependency errors during test collection.
- Missing modules include pydantic, fastapi, botocore, PIL, httpx, numpy.
```

`focused`

```text
- Tests did not complete.
- 114 errors occurred during collection.
- import/dependency errors during collection
  - tests/unit/test_auth_refresh.py -> missing module: botocore
  - tests/unit/test_cognito.py -> missing module: pydantic
  - and 103 more failing modules
```

`verbose`

```text
- Tests did not complete.
- 114 errors occurred during collection.
- tests/unit/test_auth_refresh.py -> missing module: botocore
- tests/unit/test_cognito.py -> missing module: pydantic
- tests/unit/test_dataset_use_case_facade.py -> missing module: fastapi
```
## Built-in presets

- `test-status`: summarize test runs
- `typecheck-summary`: group blocking type errors by root cause
- `lint-failures`: group repeated lint violations and highlight the files or rules that matter
- `audit-critical`: extract only high and critical vulnerabilities

@@ -103,27 +169,32 @@ git diff 2>&1 | sift "what changed?"

- `build-failure`: explain the most likely build failure
- `log-errors`: extract the most relevant error signals

List or inspect them:

```bash
sift presets list
sift presets show test-status
```

## CI-friendly usage

Some commands succeed technically but should still block CI. `--fail-on` handles that for the built-in semantic presets that have stable machine-readable output:

```bash
sift exec --preset audit-critical --fail-on -- npm audit
sift exec --preset infra-risk --fail-on -- terraform plan
```

Supported presets for `--fail-on`:

- `audit-critical`
- `infra-risk`

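In a pipeline that could look like the following GitHub Actions fragment. This is a hypothetical sketch: the step layout and secret name are assumptions; only the `sift` commands come from this README.

```yaml
# Hypothetical CI fragment; assumes Node.js 20 and an OpenAI key in secrets.
steps:
  - run: npm install -g @bilalimamoglu/sift
  - name: Audit gate
    run: sift exec --preset audit-critical --fail-on -- npm audit
    env:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

Because `--fail-on` turns findings into a nonzero exit code, no extra output parsing is needed in the workflow.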
## Config

Useful commands:

```bash
sift config setup
sift config init
sift config show
sift config validate
sift doctor
```

`sift config show` masks secrets by default. Use `--show-secrets` only when you explicitly need raw values.

Config precedence:

1. CLI flags
2. environment variables
3. repo-local `sift.config.yaml` or `sift.config.yml`
4. machine-wide `~/.config/sift/config.yaml` or `~/.config/sift/config.yml`
5. built-in defaults

If you pass `--config <path>`, that path is strict. Missing explicit config paths are errors.

Minimal config example:

```yaml
provider:
  # …
runtime:
  # …
  rawFallback: true
```

## OpenAI vs OpenAI-compatible

Use `provider: openai` for `api.openai.com`.

Use `provider: openai-compatible` for third-party compatible gateways or self-hosted endpoints.

For OpenAI:

```bash
export OPENAI_API_KEY=your_openai_api_key
```

For third-party compatible endpoints, use either the endpoint-native env var or:

```bash
export SIFT_PROVIDER_API_KEY=your_provider_api_key
```

Known compatible env fallbacks include:

- `OPENROUTER_API_KEY`
- `TOGETHER_API_KEY`
- `GROQ_API_KEY`
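Putting that together, a hypothetical OpenRouter-style setup might look like this. The base URL and model placeholder are assumptions about the gateway, not taken from this README; only the `SIFT_*` variable names and the `OPENROUTER_API_KEY` fallback are.

```shell
export SIFT_PROVIDER=openai-compatible
export SIFT_BASE_URL=https://openrouter.ai/api/v1   # example gateway URL (assumed)
export SIFT_MODEL=your_model_id                     # placeholder
export OPENROUTER_API_KEY=your_provider_api_key     # recognized env fallback
```

After exporting, run `sift doctor` to check the wiring.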
## Agent usage

The simple rule is:

- use `sift exec` for long, noisy, non-interactive command output
- skip `sift` when exact raw output matters

For Codex, put that rule in `~/.codex/AGENTS.md`.
For Claude Code, put the same rule in `CLAUDE.md`.
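For example, one possible wording for such an entry (this phrasing is an assumption, not shipped with sift):

```markdown
<!-- ~/.codex/AGENTS.md or CLAUDE.md (hypothetical entry) -->
- For long, noisy, non-interactive commands (tests, typecheck, lint, builds),
  wrap them: `sift exec --preset test-status -- npm test`.
- When the exact raw output matters, run the command directly without `sift`.
```

Keeping the rule in the agent's instructions file means it applies to every session without per-prompt reminders.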
## Safety and limits

@@ -187,13 +278,16 @@ Release flow:

2. merge to `main`
3. run the `release` workflow manually

The workflow runs typecheck, tests, coverage, build, packaging smoke checks, npm publish, tag creation, and GitHub Release creation.

## Brand assets

Curated public logo assets live in `assets/brand/`.

Included SVG sets:

- badge/app: teal, black, monochrome
- icon-only: teal, black, monochrome
- 24px icon: teal, black, monochrome

## License