delimit-cli 4.1.48 → 4.1.50

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,44 @@
  # Changelog
 
+ ## [4.1.50] - 2026-04-09
+
+ ### Fixed (CRITICAL — CLAUDE.md in-prose marker clobber)
+ - **`upsertDelimitSection` regex was unanchored** — `bin/delimit-setup.js` used `/<!-- delimit:start[^>]*-->/` (no line anchors) to detect the managed-section markers. If a user *quoted* the markers inside backticks in a documentation bullet (e.g. `Use managed-section markers (\`<!-- delimit:start -->\` / \`<!-- delimit:end -->\`)`), the regex matched the prose mention. On the next `delimit setup` run the upsert sliced everything between the prose start and prose end markers and replaced it with a fresh stock template — the exact "never clobber user-customized files" failure mode 4.1.48 / 4.1.49 were written to prevent. Reproduced on `/root/CLAUDE.md` 2026-04-09.
+ - Both markers are now anchored to start-of-line with the multiline flag, allow optional leading horizontal whitespace (`/^[ \t]*<!-- delimit:start[^>]*-->[ \t]*$/m`), and the file is BOM-stripped before matching. Result: documentation prose that quotes the markers inside backticks, blockquotes (`> `), or bullets (`- `, `* `) is never matched, while genuine markers — flush-left, indented, or BOM-prefixed — still are. The same fix is applied to the test mirror in `tests/setup-onboarding.test.js`.
+
+ ### Added
+ - **Regression tests** in `tests/setup-onboarding.test.js` covering five failure modes the v4.1.49 regex would have matched incorrectly:
+   - Markers quoted in a bullet via inline backticks (the exact /root/CLAUDE.md 2026-04-09 incident)
+   - Markers with CRLF (`\r\n`) line endings
+   - A file starting with a UTF-8 BOM
+   - Tab- and space-indented real markers (must still be recognized)
+   - Bullet- and blockquote-prefixed markers (must NOT be recognized)
+   Each test asserts both the first-run behavior (`appended` vs `updated`) and that user content survives a subsequent version-bump upgrade verbatim.
+
+ ### Scope
+ - Single-purpose patch: CLAUDE.md preservation only. Unrelated gateway fixes (e.g. `loop_engine` LED-814) are deferred to 4.1.51 per multi-model deliberation, since 4.1.48 and 4.1.49 both shipped with the regression bug undetected and 4.1.50 must stay laser-focused.
+
+ ### Tests
+ - 134/134 npm CLI tests passing (was 129). The new regression suite (`does not match markers quoted in prose`, CRLF, BOM, indented, bullet/blockquote-prefixed) covers every edge case the multi-model deliberation surfaced.
+
+ ## [4.1.49] - 2026-04-09
+
+ ### Fixed (full preservation audit follow-up to 4.1.48)
+ - **Project `.claude/settings.json` hooks clobber** — `installClaudeHooks` was replacing the project-level `.claude/settings.json` hooks object with the merged-with-global config, propagating global hooks into every project file and wiping any project-local hooks the user had set. It now merges only Delimit-owned hook groups (entries whose command contains `delimit`) into existing project hooks; project-specific user hooks survive.
+ - **Gemini `general.defaultApprovalMode` clobber** — `delimit-cli setup` was force-setting Gemini's `defaultApprovalMode` to `auto_edit` on every run, overwriting whatever the user had chosen (e.g. `manual`). It is now set only when missing.
+ - **`~/.claude.json` MCP hooks replacement** — `lib/hooks-installer.js` (opt-in via `delimit-cli hooks install`) replaced the `preCommand` / `postCommand` / `authentication` / `audit` keys on every install. It now only fills in missing keys, preserving any user-chosen MCP hook commands.
+
+ ### Added
+ - **`tests/setup-no-clobber.test.js`** — a dedicated regression suite that runs the setup helpers against synthetic fresh-user HOME directories with pre-populated user customizations (project hooks, Gemini approval mode, custom MCP hook commands) and asserts that none get clobbered. 5 tests, all passing.
+
+ ### Audit results
+ - Audited every `fs.writeFileSync` in `bin/delimit-setup.js`, `lib/cross-model-hooks.js`, `lib/hooks-installer.js`, `adapters/cursor-rules.js`, and `scripts/postinstall.js`.
+ - All remaining writes are either delimit-owned (shims, hook scripts, the generated `delimit.md`), guarded by `!fs.existsSync` (models.json, social_target_config.json, the empty codex file), or surgical merges that preserve user content (`.mcp.json` mcpServers, `.claude/settings.json` allowList, the `.codex/config.toml` mcp_servers.delimit block, `.cursor/mcp.json` mcpServers, the rc-file PATH append).
+ - The full preservation contract is now: `delimit-cli setup` may safely run on any user machine, including via the shim auto-update flow, without destroying user state. New installs and upgrades are equivalent for everything except delimit-owned files.
+
+ ### Tests
+ - 129/129 passing (was 124).
+
  ## [4.1.48] - 2026-04-09
 
  ### Fixed
@@ -430,9 +430,12 @@ async function main() {
      cwd: path.join(DELIMIT_HOME, 'server'),
      env: { PYTHONPATH: path.join(DELIMIT_HOME, 'server') }
    };
-   // Auto-approve all tools — users should not be prompted for every Delimit call
+   // Auto-approve all tools — users should not be prompted for every Delimit call.
+   // Only set if missing — never clobber the user's chosen approval mode on upgrade.
    if (!geminiConfig.general) geminiConfig.general = {};
-   geminiConfig.general.defaultApprovalMode = 'auto_edit';
+   if (!geminiConfig.general.defaultApprovalMode) {
+     geminiConfig.general.defaultApprovalMode = 'auto_edit';
+   }
    fs.writeFileSync(GEMINI_CONFIG, JSON.stringify(geminiConfig, null, 2));
    if (geminiExisted) {
      await logp(` ${green('✓')} Updated Delimit paths in Gemini CLI config`);
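The "set only when missing" pattern in the hunk above can be sketched in Python; `set_default_approval_mode` is a hypothetical name for illustration, not part of the package:

```python
def set_default_approval_mode(config: dict, mode: str = "auto_edit") -> dict:
    """Illustrative sketch of the non-clobbering default: create the
    `general` section if absent, but never overwrite an approval mode
    the user already chose."""
    config.setdefault("general", {})
    config["general"].setdefault("defaultApprovalMode", mode)
    return config

fresh = set_default_approval_mode({})
user = set_default_approval_mode({"general": {"defaultApprovalMode": "manual"}})
print(fresh["general"]["defaultApprovalMode"])  # → auto_edit
print(user["general"]["defaultApprovalMode"])   # → manual (preserved)
```

`setdefault` makes both the "fresh install" and "upgrade" paths a single idempotent call, which is exactly the property the 4.1.49 fix needed.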
@@ -1359,24 +1362,38 @@ function upsertDelimitSection(filePath) {
    return { action: 'created' };
  }
 
- const existing = fs.readFileSync(filePath, 'utf-8');
-
- // Check if managed markers already exist
- const startMarkerRe = /<!-- delimit:start[^>]*-->/;
- const endMarker = '<!-- delimit:end -->';
- const hasStart = startMarkerRe.test(existing);
- const hasEnd = existing.includes(endMarker);
+ const rawExisting = fs.readFileSync(filePath, 'utf-8');
+ // Strip a UTF-8 BOM if present so the start-of-line anchor still matches
+ // the very first line of the file. We write back the stripped form to keep
+ // serialization deterministic.
+ const existing = rawExisting.replace(/^\uFEFF/, '');
+
+ // Check if managed markers already exist.
+ // Markers MUST be on their own line — anchored with the multiline flag — so
+ // that documentation prose that quotes the markers (e.g. inside backticks,
+ // bullets, or blockquotes) does NOT get mistaken for a real managed section.
+ // The v4.1.49 unanchored regex caused exactly this clobber on /root/CLAUDE.md.
+ // We allow optional leading horizontal whitespace ([ \t]*) so genuinely
+ // indented markers still match, but NOT a leading "- ", "> ", "`", "*", etc.
+ const startMarkerRe = /^[ \t]*<!-- delimit:start[^>]*-->[ \t]*$/m;
+ const endMarkerRe = /^[ \t]*<!-- delimit:end -->[ \t]*$/m;
+ const startMatch = existing.match(startMarkerRe);
+ const endMatch = existing.match(endMarkerRe);
+ const hasStart = !!startMatch;
+ const hasEnd = !!endMatch;
 
  if (hasStart && hasEnd) {
-   // Extract current version from the marker
-   const versionMatch = existing.match(/<!-- delimit:start v([^ ]+) -->/);
+   // Extract current version from the marker (also anchored, allows indent)
+   const versionMatch = existing.match(/^[ \t]*<!-- delimit:start v([^ ]+) -->[ \t]*$/m);
    const currentVersion = versionMatch ? versionMatch[1] : '';
    if (currentVersion === version) {
      return { action: 'unchanged' };
    }
    // Replace only the managed region — preserve content above/below
-   const before = existing.substring(0, existing.search(startMarkerRe));
-   const after = existing.substring(existing.indexOf(endMarker) + endMarker.length);
+   const startIdx = startMatch.index;
+   const endIdx = endMatch.index + endMatch[0].length;
+   const before = existing.substring(0, startIdx);
+   const after = existing.substring(endIdx);
    fs.writeFileSync(filePath, before + newSection + after);
    return { action: 'updated' };
  }
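The anchoring behavior above is easy to verify in isolation. This is a minimal Python sketch of the same technique (a `re.MULTILINE` line-anchored marker regex), not the shipped JavaScript; the sample strings are illustrative:

```python
import re

# Line-anchored marker check, mirroring the 4.1.50 fix: the marker must
# occupy its own line, with only optional leading spaces/tabs allowed.
START_RE = re.compile(r"^[ \t]*<!-- delimit:start[^>]*-->[ \t]*$", re.MULTILINE)

# A documentation bullet that merely *quotes* the marker in backticks.
prose = "- Use managed-section markers (`<!-- delimit:start -->` / `<!-- delimit:end -->`)\n"
# A genuine (indented) managed-section marker on its own line.
real = "Intro text\n  <!-- delimit:start v4.1.50 -->\nmanaged body\n"

print(bool(START_RE.search(prose)))  # → False (quoted in a bullet: not matched)
print(bool(START_RE.search(real)))   # → True  (indented real marker: matched)
```

The bullet's `- ` prefix defeats the `^[ \t]*` anchor, which is precisely why the in-prose mention no longer triggers the upsert.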
@@ -23,11 +23,10 @@ if str(GATEWAY_ROOT) not in sys.path:
 
 
  def _load_specs(spec_path: str) -> Dict[str, Any]:
-     """Load an API spec (OpenAPI or JSON Schema) from a file path.
+     """Load an OpenAPI spec from a file path.
 
      Performs a non-fatal version compatibility check (LED-290) so that
      unknown OpenAPI versions log a warning instead of silently parsing.
-     JSON Schema documents skip the OpenAPI version assert.
      """
      import yaml
 
@@ -42,146 +41,15 @@ def _load_specs(spec_path: str) -> Dict[str, Any]:
          spec = json.loads(content)
 
      # LED-290: warn (non-fatal) if version is outside the validated set.
-     # Only applies to OpenAPI/Swagger documents — bare JSON Schema files
-     # have no "openapi"/"swagger" key and would otherwise trip the assert.
      try:
-         if isinstance(spec, dict) and ("openapi" in spec or "swagger" in spec):
-             from core.openapi_version import assert_supported
-             assert_supported(spec, strict=False)
+         from core.openapi_version import assert_supported
+         assert_supported(spec, strict=False)
      except Exception as exc:  # pragma: no cover -- defensive only
          logger.debug("openapi version check skipped: %s", exc)
 
      return spec
 
 
- # ---------------------------------------------------------------------------
- # LED-713: JSON Schema spec-type dispatch helpers
- # ---------------------------------------------------------------------------
-
-
- def _spec_type(doc: Any) -> str:
-     """Classify a loaded spec doc. 'openapi' or 'json_schema'."""
-     from core.spec_detector import detect_spec_type
-     t = detect_spec_type(doc)
-     # Fallback to openapi for unknown so we never break existing flows.
-     return "json_schema" if t == "json_schema" else "openapi"
-
-
- def _json_schema_changes_to_dicts(changes: List[Any]) -> List[Dict[str, Any]]:
-     return [
-         {
-             "type": c.type.value,
-             "path": c.path,
-             "message": c.message,
-             "is_breaking": c.is_breaking,
-             "details": c.details,
-         }
-         for c in changes
-     ]
-
-
- def _json_schema_semver(changes: List[Any]) -> Dict[str, Any]:
-     """Build an OpenAPI-compatible semver result from JSON Schema changes.
-
-     Mirrors core.semver_classifier.classify_detailed shape so downstream
-     consumers (PR comment, CI formatter, ledger) don't need to branch.
-     """
-     breaking = [c for c in changes if c.is_breaking]
-     non_breaking = [c for c in changes if not c.is_breaking]
-     if breaking:
-         bump = "major"
-     elif non_breaking:
-         bump = "minor"
-     else:
-         bump = "none"
-     return {
-         "bump": bump,
-         "is_breaking": bool(breaking),
-         "counts": {
-             "breaking": len(breaking),
-             "non_breaking": len(non_breaking),
-             "total": len(changes),
-         },
-     }
-
-
- def _bump_semver_version(current: str, bump: str) -> Optional[str]:
-     """Minimal semver bump for JSON Schema path (core.semver_classifier
-     only understands OpenAPI ChangeType enums)."""
-     if not current:
-         return None
-     try:
-         parts = current.lstrip("v").split(".")
-         major, minor, patch = (int(parts[0]), int(parts[1]), int(parts[2]))
-     except Exception:
-         return None
-     if bump == "major":
-         return f"{major + 1}.0.0"
-     if bump == "minor":
-         return f"{major}.{minor + 1}.0"
-     if bump == "patch":
-         return f"{major}.{minor}.{patch + 1}"
-     return current
-
-
- def _run_json_schema_lint(
-     old_doc: Dict[str, Any],
-     new_doc: Dict[str, Any],
-     current_version: Optional[str] = None,
-     api_name: Optional[str] = None,
- ) -> Dict[str, Any]:
-     """Build an evaluate_with_policy-compatible result for JSON Schema.
-
-     Policy rules in Delimit are defined against OpenAPI ChangeType values,
-     so they do not apply here. We return zero violations and rely on the
-     breaking-change count + semver bump to drive the governance gate.
-     """
-     from core.json_schema_diff import JSONSchemaDiffEngine
-
-     engine = JSONSchemaDiffEngine()
-     changes = engine.compare(old_doc, new_doc)
-     semver = _json_schema_semver(changes)
-
-     if current_version:
-         semver["current_version"] = current_version
-         semver["next_version"] = _bump_semver_version(current_version, semver["bump"])
-
-     breaking_count = semver["counts"]["breaking"]
-     total = semver["counts"]["total"]
-
-     decision = "pass"
-     exit_code = 0
-     # No policy rules apply to JSON Schema, but breaking changes still
-     # flag MAJOR semver and the downstream gate uses that to block.
-     # Mirror the shape of evaluate_with_policy so the action/CLI renderers
-     # need no JSON Schema-specific branch.
-     result: Dict[str, Any] = {
-         "spec_type": "json_schema",
-         "api_name": api_name or new_doc.get("title") or old_doc.get("title") or "JSON Schema",
-         "decision": decision,
-         "exit_code": exit_code,
-         "violations": [],
-         "summary": {
-             "total_changes": total,
-             "breaking_changes": breaking_count,
-             "violations": 0,
-             "errors": 0,
-             "warnings": 0,
-         },
-         "all_changes": [
-             {
-                 "type": c.type.value,
-                 "path": c.path,
-                 "message": c.message,
-                 "is_breaking": c.is_breaking,
-             }
-             for c in changes
-         ],
-         "semver": semver,
-     }
-     return result
-
-
  def _read_jsonl(path: Path) -> List[Dict[str, Any]]:
      """Read JSONL entries from a file, skipping malformed lines."""
      items: List[Dict[str, Any]] = []
@@ -247,51 +115,29 @@ def run_lint(old_spec: str, new_spec: str, policy_file: Optional[str] = None) ->
      """Run the full lint pipeline: diff + policy evaluation.
 
      This is the Tier 1 primary tool — combines diff detection with
-     policy enforcement into a single pass/fail decision. Auto-detects
-     spec type (OpenAPI vs JSON Schema, LED-713) and dispatches to the
-     matching engine.
+     policy enforcement into a single pass/fail decision.
      """
      from core.policy_engine import evaluate_with_policy
 
      old = _load_specs(old_spec)
      new = _load_specs(new_spec)
 
-     # LED-713: JSON Schema dispatch. Policy rules are OpenAPI-specific,
-     # so JSON Schema takes the no-policy (breaking-count + semver) path.
-     if _spec_type(new) == "json_schema" or _spec_type(old) == "json_schema":
-         return _run_json_schema_lint(old, new)
-
      return evaluate_with_policy(old, new, policy_file)
 
 
  def run_diff(old_spec: str, new_spec: str) -> Dict[str, Any]:
-     """Run diff engine only — no policy evaluation.
+     """Run diff engine only — no policy evaluation."""
+     from core.diff_engine_v2 import OpenAPIDiffEngine
 
-     Auto-detects OpenAPI vs JSON Schema and dispatches (LED-713).
-     """
      old = _load_specs(old_spec)
      new = _load_specs(new_spec)
 
-     if _spec_type(new) == "json_schema" or _spec_type(old) == "json_schema":
-         from core.json_schema_diff import JSONSchemaDiffEngine
-         engine = JSONSchemaDiffEngine()
-         changes = engine.compare(old, new)
-         breaking = [c for c in changes if c.is_breaking]
-         return {
-             "spec_type": "json_schema",
-             "total_changes": len(changes),
-             "breaking_changes": len(breaking),
-             "changes": _json_schema_changes_to_dicts(changes),
-         }
-
-     from core.diff_engine_v2 import OpenAPIDiffEngine
      engine = OpenAPIDiffEngine()
      changes = engine.compare(old, new)
 
      breaking = [c for c in changes if c.is_breaking]
 
      return {
-         "spec_type": "openapi",
          "total_changes": len(changes),
          "breaking_changes": len(breaking),
          "changes": [
@@ -318,20 +164,13 @@ def run_changelog(
      Uses the diff engine to detect changes, then formats them into
      a human-readable changelog grouped by category.
      """
+     from core.diff_engine_v2 import OpenAPIDiffEngine
      from datetime import datetime, timezone
 
      old = _load_specs(old_spec)
      new = _load_specs(new_spec)
 
-     # LED-713: dispatch on spec type. JSONSchemaChange / Change share the
-     # (.type.value, .path, .message, .is_breaking) duck type.
-     if _spec_type(new) == "json_schema" or _spec_type(old) == "json_schema":
-         from core.json_schema_diff import JSONSchemaDiffEngine
-         engine = JSONSchemaDiffEngine()
-     else:
-         from core.diff_engine_v2 import OpenAPIDiffEngine
-         engine = OpenAPIDiffEngine()
-
+     engine = OpenAPIDiffEngine()
      changes = engine.compare(old, new)
 
      # Categorize changes
@@ -969,26 +808,14 @@ def run_semver(
      """Classify the semver bump for a spec change.
 
      Returns detailed breakdown: bump level, per-category counts,
-     and optionally the bumped version string. Auto-detects OpenAPI vs
-     JSON Schema (LED-713).
+     and optionally the bumped version string.
      """
-     old = _load_specs(old_spec)
-     new = _load_specs(new_spec)
-
-     # LED-713: JSON Schema path
-     if _spec_type(new) == "json_schema" or _spec_type(old) == "json_schema":
-         from core.json_schema_diff import JSONSchemaDiffEngine
-         engine = JSONSchemaDiffEngine()
-         changes = engine.compare(old, new)
-         result = _json_schema_semver(changes)
-         if current_version:
-             result["current_version"] = current_version
-             result["next_version"] = _bump_semver_version(current_version, result["bump"])
-         return result
-
      from core.diff_engine_v2 import OpenAPIDiffEngine
      from core.semver_classifier import classify_detailed, bump_version, classify
 
+     old = _load_specs(old_spec)
+     new = _load_specs(new_spec)
+
      engine = OpenAPIDiffEngine()
      changes = engine.compare(old, new)
      result = classify_detailed(changes)
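The removed `_bump_semver_version` helper above implements a minimal bump rule: `major` resets minor and patch, `minor` resets patch, and unparseable input yields `None`. A self-contained sketch of that logic (the standalone `bump_semver` name is illustrative):

```python
from typing import Optional

def bump_semver(current: str, bump: str) -> Optional[str]:
    """Minimal semver bump mirroring the removed _bump_semver_version:
    tolerates a leading 'v', returns None for empty/unparseable input."""
    if not current:
        return None
    try:
        major, minor, patch = (int(p) for p in current.lstrip("v").split(".")[:3])
    except (ValueError, IndexError):
        return None
    if bump == "major":
        return f"{major + 1}.0.0"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    if bump == "patch":
        return f"{major}.{minor}.{patch + 1}"
    return current  # "none" and unknown bump levels are no-ops

print(bump_semver("v1.4.2", "major"))  # → 2.0.0
print(bump_semver("1.4.2", "minor"))   # → 1.5.0
```

Note the reset semantics: a major bump discards the current minor/patch rather than carrying them forward, matching standard semver.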
@@ -1119,6 +946,7 @@ def run_diff_report(
      """
      from datetime import datetime, timezone
 
+     from core.diff_engine_v2 import OpenAPIDiffEngine
      from core.policy_engine import PolicyEngine
      from core.semver_classifier import classify_detailed, classify
      from core.spec_health import score_spec
@@ -1127,43 +955,6 @@ def run_diff_report(
      old = _load_specs(old_spec)
      new = _load_specs(new_spec)
 
-     # LED-713: JSON Schema dispatch — short-circuit to a minimal report
-     # shape compatible with the JSON renderer (HTML renderer remains
-     # OpenAPI-only; JSON Schema callers should use fmt="json").
-     if _spec_type(new) == "json_schema" or _spec_type(old) == "json_schema":
-         from core.json_schema_diff import JSONSchemaDiffEngine
-         js_engine = JSONSchemaDiffEngine()
-         js_changes = js_engine.compare(old, new)
-         js_breaking = [c for c in js_changes if c.is_breaking]
-         js_semver = _json_schema_semver(js_changes)
-         now_js = datetime.now(timezone.utc)
-         return {
-             "format": fmt,
-             "spec_type": "json_schema",
-             "generated_at": now_js.isoformat(),
-             "old_spec": old_spec,
-             "new_spec": new_spec,
-             "old_title": old.get("title", "") if isinstance(old, dict) else "",
-             "new_title": new.get("title", "") if isinstance(new, dict) else "",
-             "semver": js_semver,
-             "changes": _json_schema_changes_to_dicts(js_changes),
-             "breaking_count": len(js_breaking),
-             "non_breaking_count": len(js_changes) - len(js_breaking),
-             "total_changes": len(js_changes),
-             "policy": {
-                 "decision": "pass",
-                 "violations": [],
-                 "errors": 0,
-                 "warnings": 0,
-             },
-             "health": None,
-             "migration": "",
-             "output_file": output_file,
-             "note": "JSON Schema report (policy rules and HTML report are OpenAPI-only in v1)",
-         }
-
-     from core.diff_engine_v2 import OpenAPIDiffEngine
-
      # -- Diff --
      engine = OpenAPIDiffEngine()
      changes = engine.compare(old, new)
@@ -158,80 +158,21 @@ def config_audit(target: str = ".", options: Optional[Dict] = None) -> Dict[str,
  # ─── EvidencePack ───────────────────────────────────────────────────────
 
  def evidence_collect(target: str = ".", options: Optional[Dict] = None) -> Dict[str, Any]:
-     """Collect project evidence: git log, test files, configs, governance data.
-
-     Accepts either a local filesystem path (repo directory) or a remote
-     reference (GitHub URL, owner/repo#N, or any non-filesystem string).
-     Remote targets skip the filesystem walk and store reference metadata.
-     """
-     import re
-     import subprocess
-     import time as _time
-
-     opts = options or {}
-     evidence_type = opts.get("evidence_type", "")
-
-     # Detect non-filesystem targets: URLs, owner/repo#N, bare issue refs, etc.
-     is_remote = (
-         "://" in target
-         or target.startswith("http")
-         or re.match(r"^[\w.-]+/[\w.-]+#\d+$", target) is not None
-         or "#" in target
-     )
-
-     evidence: Dict[str, Any] = {"collected_at": _time.time(), "target": target}
-     if evidence_type:
-         evidence["evidence_type"] = evidence_type
-
-     if is_remote:
-         # Remote/reference target — no filesystem walk, just record metadata.
-         evidence["target_type"] = "remote"
+     """Collect project evidence: git log, test files, configs, governance data."""
+     import subprocess, time as _time
+     root = Path(target).resolve()
+     evidence: Dict[str, Any] = {"collected_at": _time.time(), "target": str(root)}
+     # Git log
+     try:
+         r = subprocess.run(["git", "-C", str(root), "log", "--oneline", "-10"], capture_output=True, text=True, timeout=10)
+         evidence["git_log"] = r.stdout.strip().splitlines() if r.returncode == 0 else []
+     except Exception:
          evidence["git_log"] = []
-         evidence["test_directories"] = []
-         evidence["configs"] = []
-         m = re.match(r"^([\w.-]+)/([\w.-]+)#(\d+)$", target)
-         if m:
-             evidence["repo"] = f"{m.group(1)}/{m.group(2)}"
-             evidence["issue_number"] = int(m.group(3))
-     else:
-         root = Path(target).resolve()
-         evidence["target"] = str(root)
-         evidence["target_type"] = "local"
-
-         if not root.exists():
-             return {
-                 "tool": "evidence.collect",
-                 "status": "error",
-                 "error": "target_not_found",
-                 "message": f"Path {root} does not exist. For remote targets, pass a URL or owner/repo#N.",
-                 "target": target,
-             }
-
-         # Git log (safe for non-git dirs)
-         try:
-             r = subprocess.run(
-                 ["git", "-C", str(root), "log", "--oneline", "-10"],
-                 capture_output=True, text=True, timeout=10,
-             )
-             evidence["git_log"] = r.stdout.strip().splitlines() if r.returncode == 0 else []
-         except Exception:
-             evidence["git_log"] = []
-
-         # Test dirs + configs (only if target is a directory)
-         if root.is_dir():
-             test_dirs = [d for d in ["tests", "test", "__tests__", "spec"] if (root / d).exists()]
-             evidence["test_directories"] = test_dirs
-             try:
-                 evidence["configs"] = [
-                     f.name for f in root.iterdir()
-                     if f.is_file() and (f.suffix in [".json", ".yaml", ".yml", ".toml"] or f.name.startswith("."))
-                 ]
-             except (PermissionError, OSError):
-                 evidence["configs"] = []
-         else:
-             evidence["test_directories"] = []
-             evidence["configs"] = []
-
+     # Test files
+     test_dirs = [d for d in ["tests", "test", "__tests__", "spec"] if (root / d).exists()]
+     evidence["test_directories"] = test_dirs
+     # Configs
+     evidence["configs"] = [f.name for f in root.iterdir() if f.is_file() and (f.suffix in [".json", ".yaml", ".yml", ".toml"] or f.name.startswith("."))]
      # Save bundle
      ev_dir = Path(os.environ.get("DELIMIT_HOME", str(Path.home() / ".delimit"))) / "evidence"
      ev_dir.mkdir(parents=True, exist_ok=True)
@@ -239,13 +180,8 @@ def evidence_collect(target: str = ".", options: Optional[Dict] = None) -> Dict[
      bundle_path = ev_dir / f"{bundle_id}.json"
      evidence["bundle_id"] = bundle_id
      bundle_path.write_text(json.dumps(evidence, indent=2))
-     return {
-         "tool": "evidence.collect",
-         "status": "ok",
-         "bundle_id": bundle_id,
-         "bundle_path": str(bundle_path),
-         "summary": {k: len(v) if isinstance(v, list) else v for k, v in evidence.items()},
-     }
+     return {"tool": "evidence.collect", "status": "ok", "bundle_id": bundle_id,
+             "bundle_path": str(bundle_path), "summary": {k: len(v) if isinstance(v, list) else v for k, v in evidence.items()}}
 
 
  def evidence_verify(bundle_id: Optional[str] = None, bundle_path: Optional[str] = None, options: Optional[Dict] = None) -> Dict[str, Any]:
@@ -3,7 +3,7 @@ Automatic OpenAPI specification detector for zero-config installation.
  """
 
  import os
- from typing import Any, List, Optional, Tuple
+ from typing import List, Optional, Tuple
  from pathlib import Path
  import yaml
 
@@ -77,7 +77,7 @@ class SpecDetector:
          """Check if file is a valid OpenAPI specification."""
          if not file_path.is_file():
              return False
-
+
          try:
              with open(file_path, 'r') as f:
                  data = yaml.safe_load(f)
@@ -86,66 +86,26 @@ class SpecDetector:
              return 'openapi' in data or 'swagger' in data
          except:
              return False
-
+
          return False
-
+
      def get_default_specs(self) -> Tuple[Optional[str], Optional[str]]:
          """
          Get default old_spec and new_spec for auto-detection.
-
+
          Returns:
              (old_spec, new_spec): Paths or None if not found
          """
          specs, _ = self.detect_specs()
-
+
          if len(specs) == 0:
              return None, None
-
+
          # Use the first found spec as both old and new (baseline mode)
          default_spec = specs[0]
          return default_spec, default_spec
 
 
- def detect_spec_type(doc: Any) -> str:
-     """Classify a parsed spec document for engine dispatch (LED-713).
-
-     Returns:
-         "openapi" — OpenAPI 3.x / Swagger 2.x (route to OpenAPIDiffEngine)
-         "json_schema" — bare JSON Schema Draft 4+ (route to JSONSchemaDiffEngine)
-         "unknown" — no recognized markers
-     """
-     if not isinstance(doc, dict):
-         return "unknown"
-     if "openapi" in doc or "swagger" in doc or "paths" in doc:
-         return "openapi"
-     # JSON Schema markers: $schema URL, top-level definitions, or ref-shim root
-     schema_url = doc.get("$schema")
-     if isinstance(schema_url, str) and "json-schema.org" in schema_url:
-         return "json_schema"
-     if isinstance(doc.get("definitions"), dict):
-         return "json_schema"
-     ref = doc.get("$ref")
-     if isinstance(ref, str) and ref.startswith("#/definitions/"):
-         return "json_schema"
-     return "unknown"
-
-
- def get_diff_engine(doc: Any):
-     """Factory: return the right diff engine instance for a parsed doc.
-
-     Callers: action.yml inline Python, policy_engine, npm-delimit api-engine.
-     The returned engine exposes .compare(old, new) -> List[Change].
-     """
-     spec_type = detect_spec_type(doc)
-     if spec_type == "json_schema":
-         from .json_schema_diff import JSONSchemaDiffEngine
-         return JSONSchemaDiffEngine()
-     # Default to OpenAPI for "openapi" and "unknown" (back-compat: existing
-     # specs without explicit markers still hit the OpenAPI engine)
-     from .diff_engine_v2 import OpenAPIDiffEngine
-     return OpenAPIDiffEngine()
-
-
  def auto_detect_specs(root_path: str = ".") -> dict:
      """
      Main entry point for spec auto-detection.
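The `detect_spec_type` classifier removed in the hunk above is small enough to reproduce standalone. This sketch mirrors its marker-based dispatch rules (the `$ref` shim check is omitted for brevity); it is an illustration of the deleted logic, not the package's current API:

```python
from typing import Any

def detect_spec_type(doc: Any) -> str:
    """Marker-based classifier mirroring the removed LED-713 helper:
    OpenAPI/Swagger keys win first, then JSON Schema markers, else unknown."""
    if not isinstance(doc, dict):
        return "unknown"
    if "openapi" in doc or "swagger" in doc or "paths" in doc:
        return "openapi"
    schema_url = doc.get("$schema")
    if isinstance(schema_url, str) and "json-schema.org" in schema_url:
        return "json_schema"
    if isinstance(doc.get("definitions"), dict):
        return "json_schema"
    return "unknown"

print(detect_spec_type({"openapi": "3.1.0"}))  # → openapi
print(detect_spec_type({"$schema": "https://json-schema.org/draft-07/schema#"}))  # → json_schema
```

Checking OpenAPI markers before JSON Schema ones keeps back-compat: an OpenAPI doc that happens to carry a `$schema` key still routes to the OpenAPI engine.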