@bilalimamoglu/sift 0.2.0 → 0.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,38 +1,20 @@
  # sift
 
- `sift` is a small wrapper for agent workflows.
+ <img src="assets/brand/sift-logo-badge-monochrome.svg" alt="sift logo" width="88" />
 
- Instead of giving a model the full output of `pytest`, `git diff`, `npm audit`, or `terraform plan`, you run the command through `sift`. `sift` captures the output, trims the noise, and returns a much smaller answer.
+ `sift` is a small command-output reducer for agent workflows.
 
- That answer can be short text or structured JSON.
+ Instead of feeding a model the full output of `pytest`, `git diff`, `npm audit`, `tsc --noEmit`, `eslint .`, or `terraform plan`, you run the command through `sift`. It captures the output, trims the noise, and returns a much smaller answer.
 
- ## What it is
+ Best fit:
+ - non-interactive shell commands
+ - agents that need short answers instead of full logs
+ - CI checks where a command may succeed but still produce a blocking result
 
- - a command-output reducer for agents
- - best used with `sift exec ... -- <command>`
- - designed for non-interactive shell commands
- - compatible with OpenAI-style APIs
-
- ## What it is not
-
- - not a native Codex tool
- - not an MCP server
- - not a replacement for raw shell output when exact logs matter
- - not meant for TUI or interactive password/confirmation flows
-
- ## Why use it
-
- Large shell output is expensive and noisy.
-
- If an agent only needs to know:
- - did tests pass
- - what changed
- - are there critical vulnerabilities
- - is this infra plan risky
-
- then sending the full raw output to a large model is wasteful.
-
- `sift` keeps the shell command, but shrinks what the model has to read.
+ Not a fit:
+ - exact raw log inspection
+ - TUI tools
+ - password/confirmation prompts
 
  ## Installation
 
@@ -44,86 +26,116 @@ npm install -g @bilalimamoglu/sift
 
  ## One-time setup
 
- Set credentials once in your shell:
+ The easiest path is the guided setup:
 
  ```bash
+ sift config setup
+ ```
+
+ That writes a machine-wide config to:
+
+ ```text
+ ~/.config/sift/config.yaml
+ ```
+
+ After that, any terminal can use `sift` without per-project setup. A repo-local config can still override it later.
+
+ If you want to set things up manually, for OpenAI-hosted models:
+
+ ```bash
+ export SIFT_PROVIDER=openai
  export SIFT_BASE_URL=https://api.openai.com/v1
- export SIFT_MODEL=gpt-4.1-mini
+ export SIFT_MODEL=gpt-5-nano
  export OPENAI_API_KEY=your_openai_api_key
  ```
 
- Or write them to a config file:
+ Or write a template config file:
 
  ```bash
  sift config init
  ```
 
- For the default OpenAI-compatible setup, `OPENAI_API_KEY` works directly. If you point `SIFT_BASE_URL` at a different compatible endpoint, use that provider's native key when `sift` recognizes the endpoint, or set the generic fallback env:
+ For a manual machine-wide template:
 
  ```bash
- export SIFT_PROVIDER_API_KEY=your_provider_api_key
+ sift config init --global
  ```
 
- `SIFT_PROVIDER_API_KEY` is the generic wrapper env for custom or self-hosted compatible endpoints. Today's `openai-compatible` mode stays generic and does not imply OpenAI ownership.
+ That writes:
 
- Known native env fallbacks for recognized compatible endpoints:
+ ```text
+ ~/.config/sift/config.yaml
+ ```
 
- - `OPENAI_API_KEY` for `https://api.openai.com/v1`
- - `OPENROUTER_API_KEY` for `https://openrouter.ai/api/v1`
- - `TOGETHER_API_KEY` for `https://api.together.xyz/v1`
- - `GROQ_API_KEY` for `https://api.groq.com/openai/v1`
+ Then keep the API key in your shell profile so every terminal can use it:
+
+ ```bash
+ export OPENAI_API_KEY=your_openai_api_key
+ ```
+
+ If you use a different OpenAI-compatible endpoint, switch to `provider: openai-compatible` and use either the endpoint's native API key env var or the generic fallback:
+
+ ```bash
+ export SIFT_PROVIDER_API_KEY=your_provider_api_key
+ ```
+
+ Common compatible env fallbacks:
+ - `OPENROUTER_API_KEY`
+ - `TOGETHER_API_KEY`
+ - `GROQ_API_KEY`
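The fallback list above amounts to ordinary environment-variable defaulting. A minimal sketch, assuming a native-key-first precedence (illustrative only, not sift's actual lookup code):

```shell
#!/bin/sh
# Illustrative fallback: prefer the endpoint's native key env var,
# then the generic SIFT_PROVIDER_API_KEY. Not sift's real code.
OPENROUTER_API_KEY=""
SIFT_PROVIDER_API_KEY="generic-key"

# ${VAR:-default} falls back when VAR is unset or empty
api_key="${OPENROUTER_API_KEY:-$SIFT_PROVIDER_API_KEY}"
echo "$api_key"
```

Here the native key is empty, so the generic fallback wins and `generic-key` is printed.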
 
  ## Quick start
 
  ```bash
  sift exec "what changed?" -- git diff
  sift exec --preset test-status -- pytest
+ sift exec --preset typecheck-summary -- tsc --noEmit
+ sift exec --preset lint-failures -- eslint .
  sift exec --preset audit-critical -- npm audit
  sift exec --preset infra-risk -- terraform plan
+ sift exec --preset audit-critical --fail-on -- npm audit
+ sift exec --preset infra-risk --fail-on -- terraform plan
  ```
 
  ## Main workflow
 
- `sift exec` is the main path:
+ `sift exec` is the default path:
 
  ```bash
  sift exec "did tests pass?" -- pytest
- sift exec "what changed?" -- git diff
- sift exec --preset infra-risk -- terraform plan
  sift exec --dry-run "what changed?" -- git diff
  ```
 
- What happens:
-
- 1. `sift` runs the command.
- 2. It captures `stdout` and `stderr`.
- 3. It sanitizes, optionally redacts, and truncates the result.
- 4. It sends the reduced input to a smaller model.
- 5. It prints a short answer or JSON.
- 6. It preserves the wrapped command's exit code.
+ What it does:
+ 1. runs the command
+ 2. captures `stdout` and `stderr`
+ 3. sanitizes, optionally redacts, and truncates the output
+ 4. sends the reduced input to a smaller model
+ 5. prints a short answer or JSON
+ 6. preserves the wrapped command's exit code
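Steps 1, 2, and 6 above can be sketched in plain shell (a simplified illustration of the capture-and-mirror pattern, not sift's implementation):

```shell
#!/bin/bash
# Run a command, capture stdout+stderr, and mirror its exit code.
# Between capture and print, sift would sanitize, truncate, and send
# the text to a model; that part is omitted here.
run_and_capture() {
  local out status
  out=$("$@" 2>&1)   # merge stderr into the captured text
  status=$?
  printf '%s\n' "$out"
  return "$status"
}

run_and_capture sh -c 'echo "1 test failed" >&2; exit 1'
echo "wrapped exit code: $?"   # prints "wrapped exit code: 1"
```

The `return "$status"` line is the key detail: the wrapper stays transparent to CI, which only sees the original command's exit code.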
 
  Use `--dry-run` to inspect the reduced input and prompt without calling the provider.
 
- ## Pipe mode
+ Use `--fail-on` when a built-in semantic preset should turn a technically successful command into a CI failure. Supported presets:
+ - `infra-risk`
+ - `audit-critical`
 
- If the output already exists in a pipeline, pipe mode still works:
+ Pipe mode still works when output already exists:
 
  ```bash
  git diff 2>&1 | sift "what changed?"
  ```
 
- Use pipe mode when the command is already being produced elsewhere.
-
- ## Presets
+ ## Built-in presets
 
- Built-in presets:
-
- - `test-status`
- - `audit-critical`
- - `diff-summary`
- - `build-failure`
- - `log-errors`
- - `infra-risk`
+ - `test-status`: summarize test results
+ - `typecheck-summary`: group blocking type errors by root cause
+ - `lint-failures`: group repeated lint violations and highlight the files or rules that matter
+ - `audit-critical`: extract only high and critical vulnerabilities
+ - `infra-risk`: return a safety verdict for infra changes
+ - `diff-summary`: summarize code changes and risks
+ - `build-failure`: explain the most likely build failure
+ - `log-errors`: extract the most relevant error signals
 
  Inspect them with:
 
@@ -134,182 +146,99 @@ sift presets show audit-critical
 
  ## Output modes
 
- - `brief`: short plain-text answer
- - `bullets`: short bullet list
- - `json`: structured JSON
- - `verdict`: `{ verdict, reason, evidence }`
-
- Some built-in presets also use local heuristics before calling a model. For example, `infra-risk` can mark obvious destructive plans as `fail` without sending the whole decision to the model.
+ - `brief`
+ - `bullets`
+ - `json`
+ - `verdict`
 
- ## JSON response format
-
- When `format` resolves to JSON, `sift` can ask the provider for native JSON output.
-
- - `auto`: enable native JSON mode only for known-safe endpoints such as `https://api.openai.com/v1`
- - `on`: always send the native JSON response format request
- - `off`: never send it
-
- Example:
-
- ```bash
- sift exec --format json --json-response-format on "summarize this" -- some-command
- ```
+ Built-in JSON and verdict flows return strict error objects on provider or model failure.
 
  ## Config
 
- Generate an example config:
+ Useful commands:
 
  ```bash
+ sift config setup
  sift config init
+ sift config show
+ sift config validate
+ sift doctor
  ```
 
- `sift config show` masks secret values by default. Use `sift config show --show-secrets` only when you explicitly need the raw values.
+ `sift config show` masks secrets by default. Use `--show-secrets` only when you explicitly need raw values.
 
  Resolution order:
-
  1. CLI flags
  2. environment variables
  3. `sift.config.yaml` or `sift.config.yml`
  4. `~/.config/sift/config.yaml` or `~/.config/sift/config.yml`
  5. built-in defaults
 
- If you pass `--config <path>`, that path is treated strictly. Missing explicit config paths are errors; `sift` does not silently fall back to defaults in that case.
+ If you pass `--config <path>`, that path is strict. Missing explicit config paths are errors.
 
- Supported environment variables:
-
- - `SIFT_PROVIDER`
- - `SIFT_MODEL`
- - `SIFT_BASE_URL`
- - `SIFT_PROVIDER_API_KEY`
- - `OPENAI_API_KEY` for `https://api.openai.com/v1`
- - `OPENROUTER_API_KEY` for `https://openrouter.ai/api/v1`
- - `TOGETHER_API_KEY` for `https://api.together.xyz/v1`
- - `GROQ_API_KEY` for `https://api.groq.com/openai/v1`
- - `SIFT_MAX_CAPTURE_CHARS`
- - `SIFT_TIMEOUT_MS`
- - `SIFT_MAX_INPUT_CHARS`
-
- Example config:
+ Minimal example:
 
  ```yaml
  provider:
-   provider: openai-compatible
-   model: gpt-4.1-mini
+   provider: openai
+   model: gpt-5-nano
    baseUrl: https://api.openai.com/v1
    apiKey: YOUR_API_KEY
-   timeoutMs: 20000
-   temperature: 0.1
-   maxOutputTokens: 220
 
  input:
    stripAnsi: true
-   redact: true
-   redactStrict: false
-   maxCaptureChars: 250000
-   maxInputChars: 20000
-   headChars: 6000
-   tailChars: 6000
+   redact: false
+   maxCaptureChars: 400000
+   maxInputChars: 60000
 
  runtime:
    rawFallback: true
-   verbose: false
  ```
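The `stripAnsi` and `redact` input options can be pictured with a standalone snippet. The regexes below are illustrative stand-ins; sift's real sanitization patterns are internal and may differ:

```shell
#!/bin/sh
# Illustrative ANSI stripping and regex redaction, not sift's rules.
raw=$(printf '\033[31merror\033[0m token=sk-abc123')
esc=$(printf '\033')

# drop color escape sequences, then mask an API-key-shaped token
clean=$(printf '%s' "$raw" | sed -E "s/${esc}\[[0-9;]*m//g")
redacted=$(printf '%s' "$clean" | sed -E 's/sk-[A-Za-z0-9]+/\[REDACTED\]/g')

echo "$redacted"   # prints: error token=[REDACTED]
```

This is also why regex-based redaction needs care: anything the patterns do not match still reaches the provider, which is why the README suggests enabling redaction before sending secret-bearing output anywhere.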
 
- ## Commands
+ ## Agent usage
 
- ```bash
- sift [question]
- sift preset <name>
- sift exec [question] -- <program> [args...]
- sift exec --preset <name> -- <program> [args...]
- sift exec [question] --shell "<command string>"
- sift exec --preset <name> --shell "<command string>"
- sift config init
- sift config show
- sift config validate
- sift doctor
- sift presets list
- sift presets show <name>
- ```
+ For Claude Code, add a short rule to `CLAUDE.md`.
 
- ## Releasing
-
- `sift` uses a manual GitHub Actions release workflow with npm trusted publishing.
-
- Before the first release:
-
- 1. configure npm trusted publishing for `@bilalimamoglu/sift`
- 2. point it at `bilalimamoglu/sift`
- 3. use the workflow filename `release.yml`
- 4. set the GitHub Actions environment name to `release`
-
- For each release:
-
- 1. update `package.json` to the target version
- 2. merge the final release commit to `main`
- 3. open GitHub Actions and run the `release` workflow manually
-
- The workflow will:
-
- 1. install dependencies
- 2. typecheck, test, and build
- 3. pack and smoke-test the tarball
- 4. publish to npm
- 5. create and push the `vX.Y.Z` tag
- 6. create a GitHub Release
-
- `release.yml` uses OIDC trusted publishing, so it does not require an `NPM_TOKEN`.
+ For Codex, add the same rule to `~/.codex/AGENTS.md`.
 
- ## Using it with Codex
+ The important part is simple:
+ - prefer `sift exec` for noisy shell commands
+ - skip `sift` when exact raw output matters
+ - keep credentials in your shell env or `sift.config.yaml`, never inline in prompts or agent instructions
 
- `sift` does not install itself into Codex. The normal setup is:
-
- 1. put credentials in your shell environment or `sift.config.yaml`
- 2. add a short rule to `~/.codex/AGENTS.md`
-
- That way Codex inherits credentials safely. It should not pass API keys inline on every command.
-
- Example:
-
- ```md
- Prefer `sift exec` for non-interactive shell commands whose output will be read or summarized.
- Use pipe mode only when the output already exists from another pipeline.
- Do not use `sift` when exact raw output is required.
- Do not use `sift` for interactive or TUI workflows.
- ```
+ ## Safety and limits
 
- That gives the agent a simple habit:
+ - redaction is optional and regex-based
+ - retriable provider failures such as `429`, timeouts, and `5xx` are retried once
+ - `sift exec` detects simple prompt-like output such as `[y/N]` or `password:` and skips reduction
+ - pipe mode does not preserve upstream shell pipeline failures; use `set -o pipefail` if you need that behavior
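The pipefail caveat is standard shell behavior and easy to verify without sift:

```shell
#!/bin/bash
# By default a pipeline's status is the last command's, so the
# upstream failure below is hidden from anything reading $?.
false | cat
echo "default: $?"    # prints: default: 0

# pipefail propagates the upstream failure instead.
set -o pipefail
false | cat
echo "pipefail: $?"   # prints: pipefail: 1
```

Without `pipefail`, `upstream-cmd 2>&1 | sift "..."` would report success even when `upstream-cmd` failed, so CI pipelines using pipe mode should enable it.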
 
- - run command through `sift exec` when a summary is enough
- - skip `sift` when exact output matters
+ ## Releasing
 
- ## Safety and limits
+ This repo uses a manual GitHub Actions release workflow with npm trusted publishing.
 
- - Redaction is optional and regex-based.
- - Redaction is off by default. If command output may contain secrets, enable `--redact` or set it in config before sending output to a provider.
- - Built-in JSON and verdict flows return strict error objects on provider/model failure.
- - Retriable provider failures such as `429`, timeouts, and `5xx` responses are retried once before falling back.
- - `sift exec` detects simple prompt-like output such as `[y/N]` or `password:` and skips reduction instead of guessing.
- - Pipe mode does not preserve upstream shell pipeline failures; use `set -o pipefail` if you need that behavior.
- - `sift exec` mirrors the wrapped command's exit code.
- - `sift doctor` is a conservative local config check. For the default OpenAI-compatible path it requires `baseUrl`, `model`, and `apiKey`.
+ Release flow:
+ 1. bump `package.json`
+ 2. merge to `main`
+ 3. run the `release` workflow manually
 
- ## Current scope
+ The workflow:
+ 1. installs dependencies
+ 2. runs typecheck, tests, and build
+ 3. packs and smoke-tests the tarball
+ 4. publishes to npm
+ 5. creates and pushes the `vX.Y.Z` tag
+ 6. creates a GitHub Release
 
- `sift` is intentionally small.
+ ## Brand assets
 
- Today it supports:
- - OpenAI-compatible providers
- - agent-first `exec` mode
- - pipe mode
- - presets
- - local redaction and truncation
- - strict JSON/verdict fallbacks
+ Curated public logo assets live in `assets/brand/`.
 
- It does not try to be a full agent platform.
+ Included SVG sets:
+ - badge/app: teal, black, monochrome
+ - icon-only: teal, black, monochrome
+ - 24px icon: teal, black, monochrome
 
  ## License
 
  MIT
-
- The top-level MIT license is the licensing surface for this repo. Per-file license headers are not required unless code is copied or adapted from another source that needs separate notice or attribution.