@bilalimamoglu/sift 0.2.0 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,38 +1,18 @@
  # sift

- `sift` is a small wrapper for agent workflows.
+ `sift` is a small command-output reducer for agent workflows.

- Instead of giving a model the full output of `pytest`, `git diff`, `npm audit`, or `terraform plan`, you run the command through `sift`. `sift` captures the output, trims the noise, and returns a much smaller answer.
+ Instead of feeding a model the full output of `pytest`, `git diff`, `npm audit`, `tsc --noEmit`, `eslint .`, or `terraform plan`, you run the command through `sift`. It captures the output, trims the noise, and returns a much smaller answer.

- That answer can be short text or structured JSON.
+ Best fit:
+ - non-interactive shell commands
+ - agents that need short answers instead of full logs
+ - CI checks where a command may succeed but still produce a blocking result

- ## What it is
-
- - a command-output reducer for agents
- - best used with `sift exec ... -- <command>`
- - designed for non-interactive shell commands
- - compatible with OpenAI-style APIs
-
- ## What it is not
-
- - not a native Codex tool
- - not an MCP server
- - not a replacement for raw shell output when exact logs matter
- - not meant for TUI or interactive password/confirmation flows
-
- ## Why use it
-
- Large shell output is expensive and noisy.
-
- If an agent only needs to know:
- - did tests pass
- - what changed
- - are there critical vulnerabilities
- - is this infra plan risky
-
- then sending the full raw output to a large model is wasteful.
-
- `sift` keeps the shell command, but shrinks what the model has to read.
+ Not a fit:
+ - exact raw log inspection
+ - TUI tools
+ - password/confirmation prompts

  ## Installation

@@ -44,86 +24,84 @@ npm install -g @bilalimamoglu/sift

  ## One-time setup

- Set credentials once in your shell:
+ For OpenAI-hosted models:

  ```bash
+ export SIFT_PROVIDER=openai
  export SIFT_BASE_URL=https://api.openai.com/v1
- export SIFT_MODEL=gpt-4.1-mini
+ export SIFT_MODEL=gpt-5-nano
  export OPENAI_API_KEY=your_openai_api_key
  ```

- Or write them to a config file:
+ Or generate a config file:

  ```bash
  sift config init
  ```

- For the default OpenAI-compatible setup, `OPENAI_API_KEY` works directly. If you point `SIFT_BASE_URL` at a different compatible endpoint, use that provider's native key when `sift` recognizes the endpoint, or set the generic fallback env:
+ If you use a different OpenAI-compatible endpoint, switch to `provider: openai-compatible` and use either the endpoint's native API key env var or the generic fallback:

  ```bash
  export SIFT_PROVIDER_API_KEY=your_provider_api_key
  ```

- `SIFT_PROVIDER_API_KEY` is the generic wrapper env for custom or self-hosted compatible endpoints. Today's `openai-compatible` mode stays generic and does not imply OpenAI ownership.
-
- Known native env fallbacks for recognized compatible endpoints:
-
- - `OPENAI_API_KEY` for `https://api.openai.com/v1`
- - `OPENROUTER_API_KEY` for `https://openrouter.ai/api/v1`
- - `TOGETHER_API_KEY` for `https://api.together.xyz/v1`
- - `GROQ_API_KEY` for `https://api.groq.com/openai/v1`
+ Common compatible env fallbacks:
+ - `OPENROUTER_API_KEY`
+ - `TOGETHER_API_KEY`
+ - `GROQ_API_KEY`

  ## Quick start

  ```bash
  sift exec "what changed?" -- git diff
  sift exec --preset test-status -- pytest
+ sift exec --preset typecheck-summary -- tsc --noEmit
+ sift exec --preset lint-failures -- eslint .
  sift exec --preset audit-critical -- npm audit
  sift exec --preset infra-risk -- terraform plan
+ sift exec --preset audit-critical --fail-on -- npm audit
+ sift exec --preset infra-risk --fail-on -- terraform plan
  ```

  ## Main workflow

- `sift exec` is the main path:
+ `sift exec` is the default path:

  ```bash
  sift exec "did tests pass?" -- pytest
- sift exec "what changed?" -- git diff
- sift exec --preset infra-risk -- terraform plan
  sift exec --dry-run "what changed?" -- git diff
  ```

- What happens:
-
- 1. `sift` runs the command.
- 2. It captures `stdout` and `stderr`.
- 3. It sanitizes, optionally redacts, and truncates the result.
- 4. It sends the reduced input to a smaller model.
- 5. It prints a short answer or JSON.
- 6. It preserves the wrapped command's exit code.
+ What it does:
+ 1. runs the command
+ 2. captures `stdout` and `stderr`
+ 3. sanitizes, optionally redacts, and truncates the output
+ 4. sends the reduced input to a smaller model
+ 5. prints a short answer or JSON
+ 6. preserves the wrapped command's exit code

  Use `--dry-run` to inspect the reduced input and prompt without calling the provider.

- ## Pipe mode
+ Use `--fail-on` when a built-in semantic preset should turn a technically successful command into a CI failure. Supported presets:
+ - `infra-risk`
+ - `audit-critical`

- If the output already exists in a pipeline, pipe mode still works:
+ Pipe mode still works when output already exists:

  ```bash
  git diff 2>&1 | sift "what changed?"
  ```

- Use pipe mode when the command is already being produced elsewhere.
+ ## Built-in presets

- ## Presets
-
- Built-in presets:
-
- - `test-status`
- - `audit-critical`
- - `diff-summary`
- - `build-failure`
- - `log-errors`
- - `infra-risk`
+ - `test-status`: summarize test results
+ - `typecheck-summary`: group blocking type errors by root cause
+ - `lint-failures`: group repeated lint violations and highlight the files or rules that matter
+ - `audit-critical`: extract only high and critical vulnerabilities
+ - `infra-risk`: return a safety verdict for infra changes
+ - `diff-summary`: summarize code changes and risks
+ - `build-failure`: explain the most likely build failure
+ - `log-errors`: extract the most relevant error signals

  Inspect them with:

@@ -134,182 +112,89 @@ sift presets show audit-critical

  ## Output modes

- - `brief`: short plain-text answer
- - `bullets`: short bullet list
- - `json`: structured JSON
- - `verdict`: `{ verdict, reason, evidence }`
-
- Some built-in presets also use local heuristics before calling a model. For example, `infra-risk` can mark obvious destructive plans as `fail` without sending the whole decision to the model.
-
- ## JSON response format
-
- When `format` resolves to JSON, `sift` can ask the provider for native JSON output.
+ - `brief`
+ - `bullets`
+ - `json`
+ - `verdict`

- - `auto`: enable native JSON mode only for known-safe endpoints such as `https://api.openai.com/v1`
- - `on`: always send the native JSON response format request
- - `off`: never send it
-
- Example:
-
- ```bash
- sift exec --format json --json-response-format on "summarize this" -- some-command
- ```
+ Built-in JSON and verdict flows return strict error objects on provider or model failure.

  ## Config

- Generate an example config:
+ Useful commands:

  ```bash
  sift config init
+ sift config show
+ sift config validate
+ sift doctor
  ```

- `sift config show` masks secret values by default. Use `sift config show --show-secrets` only when you explicitly need the raw values.
+ `sift config show` masks secrets by default. Use `--show-secrets` only when you explicitly need raw values.

  Resolution order:
-
  1. CLI flags
  2. environment variables
  3. `sift.config.yaml` or `sift.config.yml`
  4. `~/.config/sift/config.yaml` or `~/.config/sift/config.yml`
  5. built-in defaults

- If you pass `--config <path>`, that path is treated strictly. Missing explicit config paths are errors; `sift` does not silently fall back to defaults in that case.
-
- Supported environment variables:
+ If you pass `--config <path>`, that path is strict. Missing explicit config paths are errors.

- - `SIFT_PROVIDER`
- - `SIFT_MODEL`
- - `SIFT_BASE_URL`
- - `SIFT_PROVIDER_API_KEY`
- - `OPENAI_API_KEY` for `https://api.openai.com/v1`
- - `OPENROUTER_API_KEY` for `https://openrouter.ai/api/v1`
- - `TOGETHER_API_KEY` for `https://api.together.xyz/v1`
- - `GROQ_API_KEY` for `https://api.groq.com/openai/v1`
- - `SIFT_MAX_CAPTURE_CHARS`
- - `SIFT_TIMEOUT_MS`
- - `SIFT_MAX_INPUT_CHARS`
-
- Example config:
+ Minimal example:

  ```yaml
  provider:
- provider: openai-compatible
- model: gpt-4.1-mini
+ provider: openai
+ model: gpt-5-nano
  baseUrl: https://api.openai.com/v1
  apiKey: YOUR_API_KEY
- timeoutMs: 20000
- temperature: 0.1
- maxOutputTokens: 220

  input:
  stripAnsi: true
- redact: true
- redactStrict: false
- maxCaptureChars: 250000
- maxInputChars: 20000
- headChars: 6000
- tailChars: 6000
+ redact: false
+ maxCaptureChars: 400000
+ maxInputChars: 60000

  runtime:
  rawFallback: true
- verbose: false
  ```

- ## Commands
-
- ```bash
- sift [question]
- sift preset <name>
- sift exec [question] -- <program> [args...]
- sift exec --preset <name> -- <program> [args...]
- sift exec [question] --shell "<command string>"
- sift exec --preset <name> --shell "<command string>"
- sift config init
- sift config show
- sift config validate
- sift doctor
- sift presets list
- sift presets show <name>
- ```
-
- ## Releasing
-
- `sift` uses a manual GitHub Actions release workflow with npm trusted publishing.
-
- Before the first release:
+ ## Agent usage

- 1. configure npm trusted publishing for `@bilalimamoglu/sift`
- 2. point it at `bilalimamoglu/sift`
- 3. use the workflow filename `release.yml`
- 4. set the GitHub Actions environment name to `release`
+ For Claude Code, add a short rule to `CLAUDE.md`.

- For each release:
+ For Codex, add the same rule to `~/.codex/AGENTS.md`.

- 1. update `package.json` to the target version
- 2. merge the final release commit to `main`
- 3. open GitHub Actions and run the `release` workflow manually
-
- The workflow will:
-
- 1. install dependencies
- 2. typecheck, test, and build
- 3. pack and smoke-test the tarball
- 4. publish to npm
- 5. create and push the `vX.Y.Z` tag
- 6. create a GitHub Release
-
- `release.yml` uses OIDC trusted publishing, so it does not require an `NPM_TOKEN`.
-
- ## Using it with Codex
-
- `sift` does not install itself into Codex. The normal setup is:
-
- 1. put credentials in your shell environment or `sift.config.yaml`
- 2. add a short rule to `~/.codex/AGENTS.md`
-
- That way Codex inherits credentials safely. It should not pass API keys inline on every command.
-
- Example:
-
- ```md
- Prefer `sift exec` for non-interactive shell commands whose output will be read or summarized.
- Use pipe mode only when the output already exists from another pipeline.
- Do not use `sift` when exact raw output is required.
- Do not use `sift` for interactive or TUI workflows.
- ```
-
- That gives the agent a simple habit:
-
- - run command through `sift exec` when a summary is enough
- - skip `sift` when exact output matters
+ The important part is simple:
+ - prefer `sift exec` for noisy shell commands
+ - skip `sift` when exact raw output matters
+ - keep credentials in your shell env or `sift.config.yaml`, never inline in prompts or agent instructions

  ## Safety and limits

- - Redaction is optional and regex-based.
- - Redaction is off by default. If command output may contain secrets, enable `--redact` or set it in config before sending output to a provider.
- - Built-in JSON and verdict flows return strict error objects on provider/model failure.
- - Retriable provider failures such as `429`, timeouts, and `5xx` responses are retried once before falling back.
- - `sift exec` detects simple prompt-like output such as `[y/N]` or `password:` and skips reduction instead of guessing.
- - Pipe mode does not preserve upstream shell pipeline failures; use `set -o pipefail` if you need that behavior.
- - `sift exec` mirrors the wrapped command's exit code.
- - `sift doctor` is a conservative local config check. For the default OpenAI-compatible path it requires `baseUrl`, `model`, and `apiKey`.
+ - redaction is optional and regex-based
+ - retriable provider failures such as `429`, timeouts, and `5xx` are retried once
+ - `sift exec` detects simple prompt-like output such as `[y/N]` or `password:` and skips reduction
+ - pipe mode does not preserve upstream shell pipeline failures; use `set -o pipefail` if you need that behavior

- ## Current scope
+ ## Releasing

- `sift` is intentionally small.
+ This repo uses a manual GitHub Actions release workflow with npm trusted publishing.

- Today it supports:
- - OpenAI-compatible providers
- - agent-first `exec` mode
- - pipe mode
- - presets
- - local redaction and truncation
- - strict JSON/verdict fallbacks
+ Release flow:
+ 1. bump `package.json`
+ 2. merge to `main`
+ 3. run the `release` workflow manually

- It does not try to be a full agent platform.
+ The workflow:
+ 1. installs dependencies
+ 2. runs typecheck, tests, and build
+ 3. packs and smoke-tests the tarball
+ 4. publishes to npm
+ 5. creates and pushes the `vX.Y.Z` tag
+ 6. creates a GitHub Release

  ## License

  MIT
-
- The top-level MIT license is the licensing surface for this repo. Per-file license headers are not required unless code is copied or adapted from another source that needs separate notice or attribution.
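The README's config resolution order (CLI flags, then env vars, then project config, then user config, then built-in defaults) is a simple first-defined-source-wins lookup. A minimal illustrative sketch — the function and variable names below are hypothetical, not `sift`'s internals:

```javascript
// Illustrative "first defined source wins" resolution, mirroring the
// README order: CLI flags > env vars > project config > user config
// > built-in defaults. All names here are hypothetical.
function resolveValue(key, sources) {
  // sources is ordered from highest to lowest precedence
  for (const source of sources) {
    if (source[key] !== undefined) {
      return source[key];
    }
  }
  return undefined;
}

const cliFlags = { model: "gpt-5-nano" };
const envVars = { model: "gpt-4.1-mini", baseUrl: "https://api.openai.com/v1" };
const projectConfig = {};
const userConfig = { timeoutMs: 20000 };
const defaults = { model: "default-model", timeoutMs: 30000 };

const ordered = [cliFlags, envVars, projectConfig, userConfig, defaults];
console.log(resolveValue("model", ordered)); // "gpt-5-nano" (CLI flag wins)
console.log(resolveValue("timeoutMs", ordered)); // 20000 (user config beats defaults)
```

This is why `--model` on the command line overrides both `SIFT_MODEL` and any `sift.config.yaml` value.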
package/dist/cli.js CHANGED
@@ -51,23 +51,23 @@ function loadRawConfig(explicitPath) {
  // src/config/defaults.ts
  var defaultConfig = {
  provider: {
- provider: "openai-compatible",
- model: "gpt-4.1-mini",
+ provider: "openai",
+ model: "gpt-5-nano",
  baseUrl: "https://api.openai.com/v1",
  apiKey: "",
  jsonResponseFormat: "auto",
  timeoutMs: 2e4,
  temperature: 0.1,
- maxOutputTokens: 220
+ maxOutputTokens: 400
  },
  input: {
  stripAnsi: true,
  redact: false,
  redactStrict: false,
- maxCaptureChars: 25e4,
- maxInputChars: 2e4,
- headChars: 6e3,
- tailChars: 6e3
+ maxCaptureChars: 4e5,
+ maxInputChars: 6e4,
+ headChars: 2e4,
+ tailChars: 2e4
  },
  runtime: {
  rawFallback: true,
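The new input defaults above raise the limits substantially (`maxInputChars` 2e4 → 6e4, `headChars`/`tailChars` 6e3 → 2e4). The truncation code itself is not part of this diff, so the following is only an assumed sketch of how head/tail limits typically combine, using the new default values:

```javascript
// ASSUMED head/tail truncation shape: keep the first headChars and the
// last tailChars of over-limit output with a marker in between. The
// real implementation is not shown in this diff; only the default
// values (maxInputChars 60000, headChars/tailChars 20000) come from it.
function truncateHeadTail(text, { maxInputChars, headChars, tailChars }) {
  if (text.length <= maxInputChars) {
    return text; // under the limit: pass through unchanged
  }
  const head = text.slice(0, headChars);
  const tail = text.slice(-tailChars);
  return `${head}\n[... truncated ...]\n${tail}`;
}

const limits = { maxInputChars: 60000, headChars: 20000, tailChars: 20000 };
const big = "x".repeat(100000);
const reduced = truncateHeadTail(big, limits);
console.log(reduced.length < big.length); // true
```

The practical effect of the bump is that roughly three times as much command output survives reduction before head/tail trimming kicks in.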
@@ -101,6 +101,16 @@ var defaultConfig = {
  format: "bullets",
  policy: "log-errors"
  },
+ "typecheck-summary": {
+ question: "Summarize the blocking typecheck failures. Group repeated errors by root cause and point to the first files or symbols to fix.",
+ format: "bullets",
+ policy: "typecheck-summary"
+ },
+ "lint-failures": {
+ question: "Summarize the blocking lint failures. Group repeated rules, highlight the top offending files, and call out only failures that matter for fixing the run.",
+ format: "bullets",
+ policy: "lint-failures"
+ },
  "infra-risk": {
  question: "Assess whether the infrastructure changes are risky and whether they look safe to apply.",
  format: "verdict",
@@ -141,18 +151,21 @@ function resolveCompatibleEnvName(baseUrl) {
  return match?.envName;
  }
  function resolveProviderApiKey(provider, baseUrl, env) {
- if (env.SIFT_PROVIDER_API_KEY) {
- return env.SIFT_PROVIDER_API_KEY;
- }
  if (provider === "openai-compatible") {
+ if (env.SIFT_PROVIDER_API_KEY) {
+ return env.SIFT_PROVIDER_API_KEY;
+ }
  const envName2 = resolveCompatibleEnvName(baseUrl);
  return envName2 ? env[envName2] : void 0;
  }
  if (!provider) {
- return void 0;
+ return env.SIFT_PROVIDER_API_KEY;
  }
  const envName = PROVIDER_API_KEY_ENV[provider];
- return envName ? env[envName] : void 0;
+ if (envName && env[envName]) {
+ return env[envName];
+ }
+ return env.SIFT_PROVIDER_API_KEY;
  }
  function getProviderApiKeyEnvNames(provider, baseUrl) {
  const envNames = ["SIFT_PROVIDER_API_KEY"];
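The hunk above reverses key precedence for named providers: the provider-native env var (e.g. `OPENAI_API_KEY`) now wins over `SIFT_PROVIDER_API_KEY`, which becomes a fallback, while `openai-compatible` still checks the generic env first. A self-contained restatement of that precedence — the env-name table is reduced to one entry, and the `baseUrl`-based endpoint lookup is elided here:

```javascript
// Precedence restated from the resolveProviderApiKey change above.
// PROVIDER_API_KEY_ENV is trimmed to one entry for illustration, and
// the baseUrl-driven endpoint lookup is elided from this sketch.
const PROVIDER_API_KEY_ENV = { openai: "OPENAI_API_KEY" };

function resolveProviderApiKey(provider, env) {
  if (provider === "openai-compatible") {
    // generic wrapper env wins for compatible endpoints
    if (env.SIFT_PROVIDER_API_KEY) return env.SIFT_PROVIDER_API_KEY;
    return undefined; // endpoint-native lookup elided in this sketch
  }
  if (!provider) {
    return env.SIFT_PROVIDER_API_KEY;
  }
  const envName = PROVIDER_API_KEY_ENV[provider];
  if (envName && env[envName]) {
    return env[envName]; // native env wins for named providers
  }
  return env.SIFT_PROVIDER_API_KEY; // generic fallback
}

const both = { OPENAI_API_KEY: "native", SIFT_PROVIDER_API_KEY: "generic" };
console.log(resolveProviderApiKey("openai", both)); // "native"
console.log(resolveProviderApiKey("openai", { SIFT_PROVIDER_API_KEY: "generic" })); // "generic"
```

Previously, a stray `SIFT_PROVIDER_API_KEY` in the environment would shadow `OPENAI_API_KEY` for every provider; after this change it only does so when no native key is set.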
@@ -168,14 +181,14 @@ function getProviderApiKeyEnvNames(provider, baseUrl) {
  }
  const envName = PROVIDER_API_KEY_ENV[provider];
  if (envName) {
- envNames.push(envName);
+ return [envName, ...envNames];
  }
  return envNames;
  }

  // src/config/schema.ts
  import { z } from "zod";
- var providerNameSchema = z.enum(["openai-compatible"]);
+ var providerNameSchema = z.enum(["openai", "openai-compatible"]);
  var outputFormatSchema = z.enum([
  "brief",
  "bullets",
@@ -190,7 +203,9 @@ var promptPolicyNameSchema = z.enum([
  "diff-summary",
  "build-failure",
  "log-errors",
- "infra-risk"
+ "infra-risk",
+ "typecheck-summary",
+ "lint-failures"
  ]);
  var providerConfigSchema = z.object({
  provider: providerNameSchema,
@@ -402,7 +417,7 @@ function runDoctor(config) {
  if (!config.provider.model) {
  problems.push("Missing provider.model");
  }
- if (config.provider.provider === "openai-compatible" && !config.provider.apiKey) {
+ if ((config.provider.provider === "openai" || config.provider.provider === "openai-compatible") && !config.provider.apiKey) {
  problems.push("Missing provider.apiKey");
  problems.push(
  `Set one of: ${getProviderApiKeyEnvNames(
@@ -444,9 +459,150 @@ import { spawn } from "child_process";
  import { constants as osConstants } from "os";
  import pc2 from "picocolors";

+ // src/core/gate.ts
+ var FAIL_ON_SUPPORTED_PRESETS = /* @__PURE__ */ new Set(["infra-risk", "audit-critical"]);
+ function parseJson(output) {
+ try {
+ return JSON.parse(output);
+ } catch {
+ return null;
+ }
+ }
+ function supportsFailOnPreset(presetName) {
+ return typeof presetName === "string" && FAIL_ON_SUPPORTED_PRESETS.has(presetName);
+ }
+ function assertSupportedFailOnPreset(presetName) {
+ if (!supportsFailOnPreset(presetName)) {
+ throw new Error(
+ "--fail-on is supported only for built-in presets: infra-risk, audit-critical."
+ );
+ }
+ }
+ function assertSupportedFailOnFormat(args) {
+ const expectedFormat = args.presetName === "infra-risk" ? "verdict" : "json";
+ if (args.format !== expectedFormat) {
+ throw new Error(
+ `--fail-on requires the default ${expectedFormat} format for preset ${args.presetName}.`
+ );
+ }
+ }
+ function evaluateGate(args) {
+ const parsed = parseJson(args.output);
+ if (!parsed || typeof parsed !== "object") {
+ return { shouldFail: false };
+ }
+ if (args.presetName === "infra-risk") {
+ return {
+ shouldFail: parsed["verdict"] === "fail"
+ };
+ }
+ if (args.presetName === "audit-critical") {
+ const status = parsed["status"];
+ const vulnerabilities = parsed["vulnerabilities"];
+ return {
+ shouldFail: status === "ok" && Array.isArray(vulnerabilities) && vulnerabilities.length > 0
+ };
+ }
+ return { shouldFail: false };
+ }
+
  // src/core/run.ts
  import pc from "picocolors";

+ // src/providers/systemInstruction.ts
+ var REDUCTION_SYSTEM_INSTRUCTION = "You reduce noisy command output into compact answers for agents and automation.";
+
+ // src/providers/openai.ts
+ function usesNativeJsonResponseFormat(mode) {
+ return mode !== "off";
+ }
+ function extractResponseText(payload) {
+ if (typeof payload?.output_text === "string") {
+ return payload.output_text.trim();
+ }
+ if (!Array.isArray(payload?.output)) {
+ return "";
+ }
+ return payload.output.flatMap((item) => Array.isArray(item?.content) ? item.content : []).map((item) => item?.type === "output_text" ? item.text : "").filter((text) => typeof text === "string" && text.trim().length > 0).join("").trim();
+ }
+ async function buildOpenAIError(response) {
+ let detail = `Provider returned HTTP ${response.status}`;
+ try {
+ const data = await response.json();
+ const message = data?.error?.message;
+ if (typeof message === "string" && message.trim().length > 0) {
+ detail = `${detail}: ${message.trim()}`;
+ }
+ } catch {
+ }
+ return new Error(detail);
+ }
+ var OpenAIProvider = class {
+ name = "openai";
+ baseUrl;
+ apiKey;
+ constructor(options) {
+ this.baseUrl = options.baseUrl.replace(/\/$/, "");
+ this.apiKey = options.apiKey;
+ }
+ async generate(input) {
+ const controller = new AbortController();
+ const timeout = setTimeout(() => controller.abort(), input.timeoutMs);
+ try {
+ const url = new URL("responses", `${this.baseUrl}/`);
+ const response = await fetch(url, {
+ method: "POST",
+ signal: controller.signal,
+ headers: {
+ "content-type": "application/json",
+ ...this.apiKey ? { authorization: `Bearer ${this.apiKey}` } : {}
+ },
+ body: JSON.stringify({
+ model: input.model,
+ instructions: REDUCTION_SYSTEM_INSTRUCTION,
+ input: input.prompt,
+ reasoning: {
+ effort: "minimal"
+ },
+ text: {
+ verbosity: "low",
+ ...input.responseMode === "json" && usesNativeJsonResponseFormat(input.jsonResponseFormat) ? {
+ format: {
+ type: "json_object"
+ }
+ } : {}
+ },
+ max_output_tokens: input.maxOutputTokens
+ })
+ });
+ if (!response.ok) {
+ throw await buildOpenAIError(response);
+ }
+ const data = await response.json();
+ const text = extractResponseText(data);
+ if (!text) {
+ throw new Error("Provider returned an empty response");
+ }
+ return {
+ text,
+ usage: data?.usage ? {
+ inputTokens: data.usage.input_tokens,
+ outputTokens: data.usage.output_tokens,
+ totalTokens: data.usage.total_tokens
+ } : void 0,
+ raw: data
+ };
+ } catch (error) {
+ if (error.name === "AbortError") {
+ throw new Error("Provider request timed out");
+ }
+ throw error;
+ } finally {
+ clearTimeout(timeout);
+ }
+ }
+ };
+
  // src/providers/openaiCompatible.ts
  function supportsNativeJsonResponseFormat(baseUrl, mode) {
  if (mode === "off") {
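The new `src/core/gate.ts` code above is what backs the `--fail-on` flag. Its decision rules, restated self-contained from that bundled output, behave like this:

```javascript
// Decision rules of the --fail-on gate, restated from the bundled
// src/core/gate.ts output shown in the hunk above.
function evaluateGate(args) {
  let parsed;
  try {
    parsed = JSON.parse(args.output);
  } catch {
    parsed = null;
  }
  if (!parsed || typeof parsed !== "object") {
    return { shouldFail: false }; // non-JSON output never gates
  }
  if (args.presetName === "infra-risk") {
    // verdict format: fail only on an explicit "fail" verdict
    return { shouldFail: parsed["verdict"] === "fail" };
  }
  if (args.presetName === "audit-critical") {
    // json format: fail when the model successfully reported findings
    const vulns = parsed["vulnerabilities"];
    return {
      shouldFail: parsed["status"] === "ok" && Array.isArray(vulns) && vulns.length > 0
    };
  }
  return { shouldFail: false };
}

// A risky terraform plan verdict fails the gate...
console.log(evaluateGate({ presetName: "infra-risk", output: '{"verdict":"fail"}' }).shouldFail); // true
// ...while unparseable output is always treated as non-blocking.
console.log(evaluateGate({ presetName: "infra-risk", output: "plain text" }).shouldFail); // false
```

Note the conservative design: malformed model output can never flip a passing command to exit code 1, which is why `runExec` also checks `exitCode === 0` before applying the gate.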
@@ -467,6 +623,18 @@ function extractMessageText(payload) {
  }
  return "";
  }
+ async function buildOpenAICompatibleError(response) {
+ let detail = `Provider returned HTTP ${response.status}`;
+ try {
+ const data = await response.json();
+ const message = data?.error?.message;
+ if (typeof message === "string" && message.trim().length > 0) {
+ detail = `${detail}: ${message.trim()}`;
+ }
+ } catch {
+ }
+ return new Error(detail);
+ }
  var OpenAICompatibleProvider = class {
  name = "openai-compatible";
  baseUrl;
@@ -495,7 +663,7 @@ var OpenAICompatibleProvider = class {
  messages: [
  {
  role: "system",
- content: "You reduce noisy command output into compact answers for agents and automation."
+ content: REDUCTION_SYSTEM_INSTRUCTION
  },
  {
  role: "user",
@@ -505,7 +673,7 @@ var OpenAICompatibleProvider = class {
  })
  });
  if (!response.ok) {
- throw new Error(`Provider returned HTTP ${response.status}`);
+ throw await buildOpenAICompatibleError(response);
  }
  const data = await response.json();
  const text = extractMessageText(data);
@@ -534,6 +702,12 @@ var OpenAICompatibleProvider = class {

  // src/providers/factory.ts
  function createProvider(config) {
+ if (config.provider.provider === "openai") {
+ return new OpenAIProvider({
+ baseUrl: config.provider.baseUrl,
+ apiKey: config.provider.apiKey
+ });
+ }
  if (config.provider.provider === "openai-compatible") {
  return new OpenAICompatibleProvider({
  baseUrl: config.provider.baseUrl,
@@ -659,6 +833,33 @@ var BUILT_IN_POLICIES = {
  `If there is no clear error signal, reply exactly with: ${INSUFFICIENT_SIGNAL_TEXT}`
  ]
  },
+ "typecheck-summary": {
+ name: "typecheck-summary",
+ responseMode: "text",
+ taskRules: [
+ "Return at most 5 short bullet points.",
+ "Determine whether the typecheck failed or passed.",
+ "Group repeated diagnostics into root-cause buckets instead of echoing many duplicate lines.",
+ "Mention the first concrete files, symbols, or error categories to fix when they are visible.",
+ "Prefer compiler or type-system errors over timing, progress, or summary noise.",
+ "If the output clearly indicates success, say that briefly and do not add extra bullets.",
+ `If you cannot tell whether the typecheck failed, reply exactly with: ${INSUFFICIENT_SIGNAL_TEXT}`
+ ]
+ },
+ "lint-failures": {
+ name: "lint-failures",
+ responseMode: "text",
+ taskRules: [
+ "Return at most 5 short bullet points.",
+ "Determine whether lint failed or whether there are no blocking lint failures.",
+ "Group repeated rule violations instead of listing the same rule many times.",
+ "Mention the top offending files and rule names when they are visible.",
+ "Distinguish blocking failures from warnings only when that distinction is clearly visible in the input.",
+ "Do not invent autofixability; only mention autofix or --fix support when the tool output explicitly says so.",
+ "If the output clearly indicates success or no blocking failures, say that briefly and stop.",
+ `If there is not enough evidence to determine the lint result, reply exactly with: ${INSUFFICIENT_SIGNAL_TEXT}`
+ ]
+ },
  "infra-risk": {
  name: "infra-risk",
  responseMode: "json",
@@ -1305,6 +1506,12 @@ async function runExec(request) {
  });
  process.stdout.write(`${output}
  `);
+ if (request.failOn && !request.dryRun && exitCode === 0 && supportsFailOnPreset(request.presetName) && evaluateGate({
+ presetName: request.presetName,
+ output
+ }).shouldFail) {
+ return 1;
+ }
  }
  return exitCode;
  }
@@ -1372,15 +1579,25 @@ function buildCliOverrides(options) {
  return overrides;
  }
  function applySharedOptions(command) {
- return command.option("--provider <provider>", "Provider: openai-compatible").option("--model <model>", "Model name").option("--base-url <url>", "Provider base URL").option(
+ return command.option("--provider <provider>", "Provider: openai | openai-compatible").option("--model <model>", "Model name").option("--base-url <url>", "Provider base URL").option(
  "--api-key <key>",
- "Provider API key (or set SIFT_PROVIDER_API_KEY; OPENAI_API_KEY also works for api.openai.com)"
+ "Provider API key (or set OPENAI_API_KEY for provider=openai; use SIFT_PROVIDER_API_KEY or endpoint-native envs for openai-compatible)"
  ).option(
  "--json-response-format <mode>",
  "JSON response format mode: auto | on | off"
- ).option("--timeout-ms <ms>", "Request timeout in milliseconds").option("--format <format>", "brief | bullets | json | verdict").option("--max-capture-chars <n>", "Maximum raw child output chars kept in memory").option("--max-input-chars <n>", "Maximum input chars sent to the model").option("--head-chars <n>", "Head chars to preserve during truncation").option("--tail-chars <n>", "Tail chars to preserve during truncation").option("--strip-ansi", "Force ANSI stripping").option("--redact", "Enable standard redaction").option("--redact-strict", "Enable strict redaction").option("--raw-fallback", "Enable raw fallback text output").option("--dry-run", "Show the reduced input and prompt without calling the provider").option("--config <path>", "Path to config file").option("--verbose", "Enable verbose stderr logging");
+ ).option("--timeout-ms <ms>", "Request timeout in milliseconds").option("--format <format>", "brief | bullets | json | verdict").option("--max-capture-chars <n>", "Maximum raw child output chars kept in memory").option("--max-input-chars <n>", "Maximum input chars sent to the model").option("--head-chars <n>", "Head chars to preserve during truncation").option("--tail-chars <n>", "Tail chars to preserve during truncation").option("--strip-ansi", "Force ANSI stripping").option("--redact", "Enable standard redaction").option("--redact-strict", "Enable strict redaction").option("--raw-fallback", "Enable raw fallback text output").option("--dry-run", "Show the reduced input and prompt without calling the provider").option(
+ "--fail-on",
+ "Fail with exit code 1 when a supported built-in preset produces a blocking result"
+ ).option("--config <path>", "Path to config file").option("--verbose", "Enable verbose stderr logging");
  }
  async function executeRun(args) {
+ if (Boolean(args.options.failOn)) {
+ assertSupportedFailOnPreset(args.presetName);
+ assertSupportedFailOnFormat({
+ presetName: args.presetName,
+ format: args.format
+ });
+ }
  const config = resolveConfig({
  configPath: args.options.config,
  env: process.env,
@@ -1393,12 +1610,19 @@ async function executeRun(args) {
  stdin,
  config,
  dryRun: Boolean(args.options.dryRun),
+ presetName: args.presetName,
  policyName: args.policyName,
  outputContract: args.outputContract,
  fallbackJson: args.fallbackJson
  });
  process.stdout.write(`${output}
  `);
+ if (Boolean(args.options.failOn) && !Boolean(args.options.dryRun) && args.presetName && evaluateGate({
+ presetName: args.presetName,
+ output
+ }).shouldFail) {
+ process.exitCode = 1;
+ }
  }
  function extractExecCommand(options) {
  const passthrough = Array.isArray(options["--"]) ? options["--"].map((value) => String(value)) : [];
@@ -1415,6 +1639,13 @@ function extractExecCommand(options) {
  };
  }
  async function executeExec(args) {
+ if (Boolean(args.options.failOn)) {
+ assertSupportedFailOnPreset(args.presetName);
+ assertSupportedFailOnFormat({
+ presetName: args.presetName,
+ format: args.format
+ });
+ }
  const config = resolveConfig({
  configPath: args.options.config,
  env: process.env,
@@ -1426,6 +1657,8 @@ async function executeExec(args) {
  format: args.format,
  config,
  dryRun: Boolean(args.options.dryRun),
+ failOn: Boolean(args.options.failOn),
+ presetName: args.presetName,
  policyName: args.policyName,
  outputContract: args.outputContract,
  fallbackJson: args.fallbackJson,
@@ -1444,6 +1677,7 @@ applySharedOptions(
  await executeRun({
  question: preset.question,
  format: options.format ?? preset.format,
+ presetName: name,
  policyName: options.format === void 0 || options.format === preset.format ? preset.policy : void 0,
  options,
  outputContract: preset.outputContract,
@@ -1472,6 +1706,7 @@ applySharedOptions(
  await executeExec({
  question: preset.question,
  format: options.format ?? preset.format,
+ presetName,
  policyName: options.format === void 0 || options.format === preset.format ? preset.policy : void 0,
  options,
  outputContract: preset.outputContract,
package/dist/index.d.ts CHANGED
@@ -1,8 +1,8 @@
- type ProviderName = "openai-compatible";
+ type ProviderName = "openai" | "openai-compatible";
  type OutputFormat = "brief" | "bullets" | "json" | "verdict";
  type ResponseMode = "text" | "json";
  type JsonResponseFormatMode = "auto" | "on" | "off";
- type PromptPolicyName = "test-status" | "audit-critical" | "diff-summary" | "build-failure" | "log-errors" | "infra-risk";
+ type PromptPolicyName = "test-status" | "audit-critical" | "diff-summary" | "build-failure" | "log-errors" | "infra-risk" | "typecheck-summary" | "lint-failures";
  interface ProviderConfig {
  provider: ProviderName;
  model: string;
@@ -70,6 +70,7 @@ interface RunRequest {
  stdin: string;
  config: SiftConfig;
  dryRun?: boolean;
+ presetName?: string;
  policyName?: PromptPolicyName;
  outputContract?: string;
  fallbackJson?: unknown;
@@ -89,6 +90,7 @@ interface PreparedInput {

  interface ExecRequest extends Omit<RunRequest, "stdin"> {
  command?: string[];
+ failOn?: boolean;
  shellCommand?: string;
  }
  declare function runExec(request: ExecRequest): Promise<number>;
package/dist/index.js CHANGED
@@ -16,9 +16,135 @@ var INSUFFICIENT_SIGNAL_TEXT = "Insufficient signal in the provided input.";
  var GENERIC_JSON_CONTRACT = '{"answer":string,"evidence":string[],"risks":string[]}';
  var CAPTURE_OMITTED_MARKER = "\n...[captured output omitted]...\n";

+ // src/core/gate.ts
+ var FAIL_ON_SUPPORTED_PRESETS = /* @__PURE__ */ new Set(["infra-risk", "audit-critical"]);
+ function parseJson(output) {
+ try {
+ return JSON.parse(output);
+ } catch {
+ return null;
+ }
+ }
+ function supportsFailOnPreset(presetName) {
+ return typeof presetName === "string" && FAIL_ON_SUPPORTED_PRESETS.has(presetName);
+ }
+ function evaluateGate(args) {
+ const parsed = parseJson(args.output);
+ if (!parsed || typeof parsed !== "object") {
+ return { shouldFail: false };
+ }
+ if (args.presetName === "infra-risk") {
+ return {
+ shouldFail: parsed["verdict"] === "fail"
+ };
+ }
+ if (args.presetName === "audit-critical") {
+ const status = parsed["status"];
+ const vulnerabilities = parsed["vulnerabilities"];
+ return {
+ shouldFail: status === "ok" && Array.isArray(vulnerabilities) && vulnerabilities.length > 0
+ };
+ }
+ return { shouldFail: false };
+ }
+
  // src/core/run.ts
  import pc from "picocolors";

+ // src/providers/systemInstruction.ts
+ var REDUCTION_SYSTEM_INSTRUCTION = "You reduce noisy command output into compact answers for agents and automation.";
+
+ // src/providers/openai.ts
+ function usesNativeJsonResponseFormat(mode) {
+ return mode !== "off";
+ }
+ function extractResponseText(payload) {
+ if (typeof payload?.output_text === "string") {
+ return payload.output_text.trim();
+ }
+ if (!Array.isArray(payload?.output)) {
+ return "";
+ }
+ return payload.output.flatMap((item) => Array.isArray(item?.content) ? item.content : []).map((item) => item?.type === "output_text" ? item.text : "").filter((text) => typeof text === "string" && text.trim().length > 0).join("").trim();
+ }
+ async function buildOpenAIError(response) {
+ let detail = `Provider returned HTTP ${response.status}`;
+ try {
+ const data = await response.json();
+ const message = data?.error?.message;
+ if (typeof message === "string" && message.trim().length > 0) {
+ detail = `${detail}: ${message.trim()}`;
+ }
+ } catch {
+ }
+ return new Error(detail);
+ }
+ var OpenAIProvider = class {
+ name = "openai";
+ baseUrl;
+ apiKey;
+ constructor(options) {
+ this.baseUrl = options.baseUrl.replace(/\/$/, "");
+ this.apiKey = options.apiKey;
+ }
+ async generate(input) {
+ const controller = new AbortController();
+ const timeout = setTimeout(() => controller.abort(), input.timeoutMs);
+ try {
+ const url = new URL("responses", `${this.baseUrl}/`);
+ const response = await fetch(url, {
+ method: "POST",
+ signal: controller.signal,
+ headers: {
+ "content-type": "application/json",
+ ...this.apiKey ? { authorization: `Bearer ${this.apiKey}` } : {}
+ },
+ body: JSON.stringify({
+ model: input.model,
+ instructions: REDUCTION_SYSTEM_INSTRUCTION,
+ input: input.prompt,
+ reasoning: {
+ effort: "minimal"
+ },
+ text: {
+ verbosity: "low",
+ ...input.responseMode === "json" && usesNativeJsonResponseFormat(input.jsonResponseFormat) ? {
+ format: {
+ type: "json_object"
+ }
+ } : {}
+ },
+ max_output_tokens: input.maxOutputTokens
+ })
+ });
+ if (!response.ok) {
+ throw await buildOpenAIError(response);
+ }
+ const data = await response.json();
+ const text = extractResponseText(data);
+ if (!text) {
+ throw new Error("Provider returned an empty response");
+ }
+ return {
+ text,
+ usage: data?.usage ? {
+ inputTokens: data.usage.input_tokens,
+ outputTokens: data.usage.output_tokens,
+ totalTokens: data.usage.total_tokens
+ } : void 0,
+ raw: data
+ };
+ } catch (error) {
+ if (error.name === "AbortError") {
+ throw new Error("Provider request timed out");
+ }
+ throw error;
+ } finally {
+ clearTimeout(timeout);
+ }
+ }
+ };
+
  // src/providers/openaiCompatible.ts
  function supportsNativeJsonResponseFormat(baseUrl, mode) {
  if (mode === "off") {
@@ -39,6 +165,18 @@ function extractMessageText(payload) {
  }
  return "";
  }
+ async function buildOpenAICompatibleError(response) {
+ let detail = `Provider returned HTTP ${response.status}`;
+ try {
+ const data = await response.json();
+ const message = data?.error?.message;
+ if (typeof message === "string" && message.trim().length > 0) {
+ detail = `${detail}: ${message.trim()}`;
+ }
+ } catch {
+ }
+ return new Error(detail);
+ }
  var OpenAICompatibleProvider = class {
  name = "openai-compatible";
  baseUrl;
@@ -67,7 +205,7 @@ var OpenAICompatibleProvider = class {
  messages: [
  {
  role: "system",
- content: "You reduce noisy command output into compact answers for agents and automation."
+ content: REDUCTION_SYSTEM_INSTRUCTION
  },
  {
  role: "user",
@@ -77,7 +215,7 @@ var OpenAICompatibleProvider = class {
  })
  });
  if (!response.ok) {
- throw new Error(`Provider returned HTTP ${response.status}`);
+ throw await buildOpenAICompatibleError(response);
  }
  const data = await response.json();
  const text = extractMessageText(data);
@@ -106,6 +244,12 @@ var OpenAICompatibleProvider = class {

  // src/providers/factory.ts
  function createProvider(config) {
+ if (config.provider.provider === "openai") {
+ return new OpenAIProvider({
+ baseUrl: config.provider.baseUrl,
+ apiKey: config.provider.apiKey
+ });
+ }
  if (config.provider.provider === "openai-compatible") {
  return new OpenAICompatibleProvider({
  baseUrl: config.provider.baseUrl,
@@ -231,6 +375,33 @@ var BUILT_IN_POLICIES = {
  `If there is no clear error signal, reply exactly with: ${INSUFFICIENT_SIGNAL_TEXT}`
  ]
  },
+ "typecheck-summary": {
+ name: "typecheck-summary",
+ responseMode: "text",
+ taskRules: [
+ "Return at most 5 short bullet points.",
+ "Determine whether the typecheck failed or passed.",
+ "Group repeated diagnostics into root-cause buckets instead of echoing many duplicate lines.",
+ "Mention the first concrete files, symbols, or error categories to fix when they are visible.",
+ "Prefer compiler or type-system errors over timing, progress, or summary noise.",
+ "If the output clearly indicates success, say that briefly and do not add extra bullets.",
+ `If you cannot tell whether the typecheck failed, reply exactly with: ${INSUFFICIENT_SIGNAL_TEXT}`
+ ]
+ },
+ "lint-failures": {
+ name: "lint-failures",
+ responseMode: "text",
+ taskRules: [
+ "Return at most 5 short bullet points.",
+ "Determine whether lint failed or whether there are no blocking lint failures.",
+ "Group repeated rule violations instead of listing the same rule many times.",
+ "Mention the top offending files and rule names when they are visible.",
+ "Distinguish blocking failures from warnings only when that distinction is clearly visible in the input.",
+ "Do not invent autofixability; only mention autofix or --fix support when the tool output explicitly says so.",
+ "If the output clearly indicates success or no blocking failures, say that briefly and stop.",
+ `If there is not enough evidence to determine the lint result, reply exactly with: ${INSUFFICIENT_SIGNAL_TEXT}`
+ ]
+ },
  "infra-risk": {
  name: "infra-risk",
  responseMode: "json",
@@ -877,6 +1048,12 @@ async function runExec(request) {
  });
  process.stdout.write(`${output}
  `);
+ if (request.failOn && !request.dryRun && exitCode === 0 && supportsFailOnPreset(request.presetName) && evaluateGate({
+ presetName: request.presetName,
+ output
+ }).shouldFail) {
+ return 1;
+ }
  }
  return exitCode;
  }
@@ -884,23 +1061,23 @@ async function runExec(request) {
  // src/config/defaults.ts
  var defaultConfig = {
  provider: {
- provider: "openai-compatible",
- model: "gpt-4.1-mini",
+ provider: "openai",
+ model: "gpt-5-nano",
  baseUrl: "https://api.openai.com/v1",
  apiKey: "",
  jsonResponseFormat: "auto",
  timeoutMs: 2e4,
  temperature: 0.1,
- maxOutputTokens: 220
+ maxOutputTokens: 400
  },
  input: {
  stripAnsi: true,
  redact: false,
  redactStrict: false,
- maxCaptureChars: 25e4,
- maxInputChars: 2e4,
- headChars: 6e3,
- tailChars: 6e3
+ maxCaptureChars: 4e5,
+ maxInputChars: 6e4,
+ headChars: 2e4,
+ tailChars: 2e4
  },
  runtime: {
  rawFallback: true,
@@ -934,6 +1111,16 @@ var defaultConfig = {
  format: "bullets",
  policy: "log-errors"
  },
+ "typecheck-summary": {
+ question: "Summarize the blocking typecheck failures. Group repeated errors by root cause and point to the first files or symbols to fix.",
+ format: "bullets",
+ policy: "typecheck-summary"
+ },
+ "lint-failures": {
+ question: "Summarize the blocking lint failures. Group repeated rules, highlight the top offending files, and call out only failures that matter for fixing the run.",
+ format: "bullets",
+ policy: "lint-failures"
+ },
  "infra-risk": {
  question: "Assess whether the infrastructure changes are risky and whether they look safe to apply.",
  format: "verdict",
@@ -1002,23 +1189,26 @@ function resolveCompatibleEnvName(baseUrl) {
  return match?.envName;
  }
  function resolveProviderApiKey(provider, baseUrl, env) {
- if (env.SIFT_PROVIDER_API_KEY) {
- return env.SIFT_PROVIDER_API_KEY;
- }
  if (provider === "openai-compatible") {
+ if (env.SIFT_PROVIDER_API_KEY) {
+ return env.SIFT_PROVIDER_API_KEY;
+ }
  const envName2 = resolveCompatibleEnvName(baseUrl);
  return envName2 ? env[envName2] : void 0;
  }
  if (!provider) {
- return void 0;
+ return env.SIFT_PROVIDER_API_KEY;
  }
  const envName = PROVIDER_API_KEY_ENV[provider];
- return envName ? env[envName] : void 0;
+ if (envName && env[envName]) {
+ return env[envName];
+ }
+ return env.SIFT_PROVIDER_API_KEY;
  }

  // src/config/schema.ts
  import { z } from "zod";
- var providerNameSchema = z.enum(["openai-compatible"]);
+ var providerNameSchema = z.enum(["openai", "openai-compatible"]);
  var outputFormatSchema = z.enum([
  "brief",
  "bullets",
@@ -1033,7 +1223,9 @@ var promptPolicyNameSchema = z.enum([
  "diff-summary",
  "build-failure",
  "log-errors",
- "infra-risk"
+ "infra-risk",
+ "typecheck-summary",
+ "lint-failures"
  ]);
  var providerConfigSchema = z.object({
  provider: providerNameSchema,
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@bilalimamoglu/sift",
- "version": "0.2.0",
+ "version": "0.2.1",
  "description": "Agent-first command-output reduction layer for agents, CI, and automation.",
  "type": "module",
  "bin": {