eyeling 1.23.0 → 1.23.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/HANDBOOK.md +7 -2
- package/dist/browser/eyeling.browser.js +165 -93
- package/eyeling.js +166 -101
- package/lib/cli.js +3 -75
- package/lib/multisource.js +133 -14
- package/lib/rules.js +28 -4
- package/package.json +1 -1
- package/test/api.test.js +24 -0
package/HANDBOOK.md
CHANGED
|
@@ -358,7 +358,9 @@ That distinction matters because quoted formulas still play **two different role
|
|
|
358
358
|
|
|
359
359
|
The practical rule is:
|
|
360
360
|
|
|
361
|
-
> **Eyeling lifts blanks inside quoted formulas only when the quoted formula appears directly in
|
|
361
|
+
> **Eyeling lifts blanks inside quoted formulas only when the quoted formula appears directly in an ordinary premise triple position.**
|
|
362
|
+
>
|
|
363
|
+
> For `log:includes` and `log:notIncludes`, quoted formula operands keep their own blank-node scope. The builtin may treat blanks in the goal formula existentially while proving it, but blanks in an explicit scope graph remain formula-local blanks and may be returned as blank nodes rather than synthetic variables such as `?_b1`.
|
|
362
364
|
|
|
363
365
|
This keeps `log:conjunction` and formula printing honest, while still allowing direct quoted-formula premise patterns such as `{ _:X :B :C } a :Statement.` to match interoperably.
|
|
364
366
|
|
|
@@ -1526,6 +1528,7 @@ Eyeling has **two modes**:
|
|
|
1526
1528
|
1. **Explicit scope graph**: if `Scope` is a formula `{...}`
|
|
1527
1529
|
- Eyeling reasons _only inside that formula_ (its triples are the fact store).
|
|
1528
1530
|
- External rules are not used.
|
|
1531
|
+
- Blank nodes inside the explicit scope graph are preserved as graph-local blanks; if a goal variable matches one, the binding is a blank node, not a lifted rule variable.
|
|
1529
1532
|
|
|
1530
1533
|
2. **Priority-gated global scope**: otherwise
|
|
1531
1534
|
- Eyeling uses a _frozen snapshot_ of the current global closure.
|
|
@@ -1871,7 +1874,7 @@ echo '@prefix : <http://example.org/> .
|
|
|
1871
1874
|
{ ?x a :Man } => { ?x a :Mortal } .' | npx eyeling
|
|
1872
1875
|
```
|
|
1873
1876
|
|
|
1874
|
-
You can also pass one or more file paths/URLs, or `-` to read explicitly from stdin. When multiple inputs are given, Eyeling parses each source separately, merges the parsed ASTs, and then runs one reasoning pass over the combined facts and rules. This avoids constructing one giant N3 source string.
|
|
1877
|
+
You can also pass one or more file paths/URLs, or `-` to read explicitly from stdin. When multiple inputs are given, Eyeling parses each source separately, merges the parsed ASTs, and then runs one reasoning pass over the combined facts and rules. This avoids constructing one giant N3 source string. On this merge path, parse-only artifacts such as the source text and token arrays are discarded once the AST has been built; they are retained only while formatting syntax errors, or when an internal caller explicitly asks to keep them for debugging.
|
|
1875
1878
|
|
|
1876
1879
|
Show the available options:
|
|
1877
1880
|
|
|
@@ -2054,6 +2057,8 @@ console.log(out);
|
|
|
2054
2057
|
|
|
2055
2058
|
In a source list, each source is parsed with its own blank-node scope and optional base IRI. That means the same explicit blank label, such as `_:x`, in two different sources does not accidentally become the same blank node after merging. Prefix declarations are merged mainly for readable output; IRI expansion has already happened while each source was parsed.
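The per-source blank-node scoping can be sketched in isolation. This is a toy model, not eyeling's internals: the term shape and the `s0_`/`s1_` label scheme here are assumptions for illustration (the real code derives labels via `sourceBlankPrefix(sourceIndex)` and rewrites full ASTs, not flat triples).

```javascript
// Toy model of per-source blank-node scoping: the same explicit label `x`
// in two different sources must stay distinct after merging.
function scopeBlanks(triples, sourceIndex) {
  const prefix = `s${sourceIndex}_`;
  const rename = (t) =>
    t && t.blank ? { blank: true, label: prefix + t.label } : t;
  return triples.map((t) => ({ s: rename(t.s), p: t.p, o: rename(t.o) }));
}

// Both sources use the explicit blank label `x`.
const docA = [{ s: { blank: true, label: 'x' }, p: ':p', o: ':a' }];
const docB = [{ s: { blank: true, label: 'x' }, p: ':p', o: ':b' }];
const merged = [...scopeBlanks(docA, 0), ...scopeBlanks(docB, 1)];
// merged[0].s and merged[1].s now carry different labels.
```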
|
|
2056
2059
|
|
|
2060
|
+
For large inputs, the source-list path also has the better memory profile: each source is lexed and parsed independently, and Eyeling then keeps only the merged facts/rules rather than retaining the original source strings and token arrays. The parser still needs one JavaScript string per source, so splitting sources does not give you a streaming parser, but it avoids both a single giant concatenated N3 string and long-lived token arrays after parsing.
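That memory shape can be sketched with a stand-in tokenizer and parser (eyeling's real parser produces triples and rules; this toy only models which fields survive, and the `keepSourceArtifacts` option name mirrors the one added in this release):

```javascript
// Stand-in parser: models which fields a parsed document retains.
function parseOne(text, { keepSourceArtifacts = true } = {}) {
  const tokens = text.split(/\s+/).filter(Boolean); // toy "lexer"
  const doc = { triples: tokens.length };          // toy "AST"
  if (keepSourceArtifacts) {
    // Parse-only artifacts are kept only on request (e.g. for debugging).
    doc.tokens = tokens;
    doc.text = text;
  }
  return doc;
}

const lean = parseOne(':a :b :c .', { keepSourceArtifacts: false });
const debug = parseOne(':a :b :c .', { keepSourceArtifacts: true });
// `lean` carries no source text or token array after parsing.
```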
|
|
2061
|
+
|
|
2057
2062
|
For RDF/JS facts, the graph must be the default graph. Named-graph quads are rejected.
|
|
2058
2063
|
|
|
2059
2064
|
If you already have rules in structured form, Eyeling rule objects can be passed directly in the API:
|
|
@@ -5132,6 +5132,8 @@
|
|
|
5132
5132
|
parseN3Text(text, {
|
|
5133
5133
|
baseIri: __sourceLabelToBaseIri(sourceLabel),
|
|
5134
5134
|
label: sourceLabel,
|
|
5135
|
+
collectUsedPrefixes: true,
|
|
5136
|
+
keepSourceArtifacts: false,
|
|
5135
5137
|
}),
|
|
5136
5138
|
);
|
|
5137
5139
|
} catch (e) {
|
|
@@ -5149,8 +5151,6 @@
|
|
|
5149
5151
|
const frules = mergedDocument.frules;
|
|
5150
5152
|
const brules = mergedDocument.brules;
|
|
5151
5153
|
const qrules = mergedDocument.logQueryRules;
|
|
5152
|
-
const tokenSets = parsedSources.map((source) => ({ tokens: source.tokens, prefixes: source.prefixes }));
|
|
5153
|
-
|
|
5154
5154
|
if (showAst) {
|
|
5155
5155
|
function astReplacer(unusedJsonKey, value) {
|
|
5156
5156
|
if (value instanceof Set) return Array.from(value);
|
|
@@ -5208,75 +5208,6 @@
|
|
|
5208
5208
|
// In --stream mode we print prefixes *before* any derivations happen.
|
|
5209
5209
|
// To keep the header small and stable, emit only prefixes that are actually
|
|
5210
5210
|
// used (as QNames) in the *input* N3 program.
|
|
5211
|
-
function prefixesUsedInInputTokens(toks2, prefEnv) {
|
|
5212
|
-
const used = new Set();
|
|
5213
|
-
|
|
5214
|
-
function maybeAddFromQName(name) {
|
|
5215
|
-
if (typeof name !== 'string') return;
|
|
5216
|
-
if (!name.includes(':')) return;
|
|
5217
|
-
if (name.startsWith('_:')) return; // blank node
|
|
5218
|
-
|
|
5219
|
-
// Split only on the first ':'
|
|
5220
|
-
const idx = name.indexOf(':');
|
|
5221
|
-
const p = name.slice(0, idx); // may be '' for ":foo"
|
|
5222
|
-
|
|
5223
|
-
// Ignore things like "http://..." unless that prefix is actually defined.
|
|
5224
|
-
if (!Object.prototype.hasOwnProperty.call(prefEnv.map, p)) return;
|
|
5225
|
-
|
|
5226
|
-
used.add(p);
|
|
5227
|
-
}
|
|
5228
|
-
|
|
5229
|
-
for (let i = 0; i < toks2.length; i++) {
|
|
5230
|
-
const t = toks2[i];
|
|
5231
|
-
|
|
5232
|
-
// Skip @prefix ... .
|
|
5233
|
-
if (t.typ === 'AtPrefix') {
|
|
5234
|
-
while (i < toks2.length && toks2[i].typ !== 'Dot' && toks2[i].typ !== 'EOF') i++;
|
|
5235
|
-
continue;
|
|
5236
|
-
}
|
|
5237
|
-
// Skip @base ... .
|
|
5238
|
-
if (t.typ === 'AtBase') {
|
|
5239
|
-
while (i < toks2.length && toks2[i].typ !== 'Dot' && toks2[i].typ !== 'EOF') i++;
|
|
5240
|
-
continue;
|
|
5241
|
-
}
|
|
5242
|
-
|
|
5243
|
-
// Skip SPARQL/Turtle PREFIX pfx: <iri>
|
|
5244
|
-
if (
|
|
5245
|
-
t.typ === 'Ident' &&
|
|
5246
|
-
typeof t.value === 'string' &&
|
|
5247
|
-
t.value.toLowerCase() === 'prefix' &&
|
|
5248
|
-
toks2[i + 1] &&
|
|
5249
|
-
toks2[i + 1].typ === 'Ident' &&
|
|
5250
|
-
typeof toks2[i + 1].value === 'string' &&
|
|
5251
|
-
toks2[i + 1].value.endsWith(':') &&
|
|
5252
|
-
toks2[i + 2] &&
|
|
5253
|
-
(toks2[i + 2].typ === 'IriRef' || toks2[i + 2].typ === 'Ident')
|
|
5254
|
-
) {
|
|
5255
|
-
i += 2;
|
|
5256
|
-
continue;
|
|
5257
|
-
}
|
|
5258
|
-
|
|
5259
|
-
// Skip SPARQL BASE <iri>
|
|
5260
|
-
if (
|
|
5261
|
-
t.typ === 'Ident' &&
|
|
5262
|
-
typeof t.value === 'string' &&
|
|
5263
|
-
t.value.toLowerCase() === 'base' &&
|
|
5264
|
-
toks2[i + 1] &&
|
|
5265
|
-
toks2[i + 1].typ === 'IriRef'
|
|
5266
|
-
) {
|
|
5267
|
-
i += 1;
|
|
5268
|
-
continue;
|
|
5269
|
-
}
|
|
5270
|
-
|
|
5271
|
-
// Count QNames in identifiers (including datatypes like xsd:integer).
|
|
5272
|
-
if (t.typ === 'Ident') {
|
|
5273
|
-
maybeAddFromQName(t.value);
|
|
5274
|
-
}
|
|
5275
|
-
}
|
|
5276
|
-
|
|
5277
|
-
return used;
|
|
5278
|
-
}
|
|
5279
|
-
|
|
5280
5211
|
function restrictPrefixEnv(prefEnv, usedSet) {
|
|
5281
5212
|
const m = {};
|
|
5282
5213
|
for (const p of usedSet) {
|
|
@@ -5294,10 +5225,8 @@
|
|
|
5294
5225
|
const mayAutoRenderOutputStrings = programMayProduceOutputStrings(triples, frules, qrules);
|
|
5295
5226
|
|
|
5296
5227
|
if (streamMode && !hasQueries && !mayAutoRenderOutputStrings) {
|
|
5297
|
-
const usedInInput =
|
|
5298
|
-
|
|
5299
|
-
for (const pfx of prefixesUsedInInputTokens(source.tokens, source.prefixes)) usedInInput.add(pfx);
|
|
5300
|
-
}
|
|
5228
|
+
const usedInInput =
|
|
5229
|
+
mergedDocument.usedPrefixes instanceof Set ? new Set(mergedDocument.usedPrefixes) : new Set();
|
|
5301
5230
|
const outPrefixes = restrictPrefixEnv(prefixes, usedInInput);
|
|
5302
5231
|
|
|
5303
5232
|
// Ensure log:trace uses the same compact prefix set as the output.
|
|
@@ -10614,13 +10543,101 @@ ${triples.map((tr) => ` ${tripleToN3(tr, prefixes)}`).join('\n')}
|
|
|
10614
10543
|
};
|
|
10615
10544
|
}
|
|
10616
10545
|
|
|
10546
|
+
function prefixesUsedInTokens(tokens, prefEnv) {
|
|
10547
|
+
const used = new Set();
|
|
10548
|
+
const toks = Array.isArray(tokens) ? tokens : [];
|
|
10549
|
+
const prefixes = prefEnv && prefEnv.map ? prefEnv.map : {};
|
|
10550
|
+
|
|
10551
|
+
function maybeAddFromQName(name) {
|
|
10552
|
+
if (typeof name !== 'string') return;
|
|
10553
|
+
if (!name.includes(':')) return;
|
|
10554
|
+
if (name.startsWith('_:')) return; // blank node
|
|
10555
|
+
|
|
10556
|
+
// Split only on the first ':'; the empty prefix is valid for ":foo".
|
|
10557
|
+
const idx = name.indexOf(':');
|
|
10558
|
+
const p = name.slice(0, idx);
|
|
10559
|
+
|
|
10560
|
+
// Ignore strings like "http://..." unless that prefix is actually defined.
|
|
10561
|
+
if (!Object.prototype.hasOwnProperty.call(prefixes, p)) return;
|
|
10562
|
+
|
|
10563
|
+
used.add(p);
|
|
10564
|
+
}
|
|
10565
|
+
|
|
10566
|
+
for (let i = 0; i < toks.length; i++) {
|
|
10567
|
+
const t = toks[i];
|
|
10568
|
+
if (!t) continue;
|
|
10569
|
+
|
|
10570
|
+
// Skip @prefix ... .
|
|
10571
|
+
if (t.typ === 'AtPrefix') {
|
|
10572
|
+
while (i < toks.length && toks[i].typ !== 'Dot' && toks[i].typ !== 'EOF') i++;
|
|
10573
|
+
continue;
|
|
10574
|
+
}
|
|
10575
|
+
|
|
10576
|
+
// Skip @base ... .
|
|
10577
|
+
if (t.typ === 'AtBase') {
|
|
10578
|
+
while (i < toks.length && toks[i].typ !== 'Dot' && toks[i].typ !== 'EOF') i++;
|
|
10579
|
+
continue;
|
|
10580
|
+
}
|
|
10581
|
+
|
|
10582
|
+
// Skip SPARQL/Turtle PREFIX pfx: <iri>
|
|
10583
|
+
if (
|
|
10584
|
+
t.typ === 'Ident' &&
|
|
10585
|
+
typeof t.value === 'string' &&
|
|
10586
|
+
t.value.toLowerCase() === 'prefix' &&
|
|
10587
|
+
toks[i + 1] &&
|
|
10588
|
+
toks[i + 1].typ === 'Ident' &&
|
|
10589
|
+
typeof toks[i + 1].value === 'string' &&
|
|
10590
|
+
toks[i + 1].value.endsWith(':') &&
|
|
10591
|
+
toks[i + 2] &&
|
|
10592
|
+
(toks[i + 2].typ === 'IriRef' || toks[i + 2].typ === 'Ident')
|
|
10593
|
+
) {
|
|
10594
|
+
i += 2;
|
|
10595
|
+
continue;
|
|
10596
|
+
}
|
|
10597
|
+
|
|
10598
|
+
// Skip SPARQL BASE <iri>
|
|
10599
|
+
if (
|
|
10600
|
+
t.typ === 'Ident' &&
|
|
10601
|
+
typeof t.value === 'string' &&
|
|
10602
|
+
t.value.toLowerCase() === 'base' &&
|
|
10603
|
+
toks[i + 1] &&
|
|
10604
|
+
toks[i + 1].typ === 'IriRef'
|
|
10605
|
+
) {
|
|
10606
|
+
i += 1;
|
|
10607
|
+
continue;
|
|
10608
|
+
}
|
|
10609
|
+
|
|
10610
|
+
// Count QNames in identifiers, including datatypes like xsd:integer.
|
|
10611
|
+
if (t.typ === 'Ident') maybeAddFromQName(t.value);
|
|
10612
|
+
}
|
|
10613
|
+
|
|
10614
|
+
return used;
|
|
10615
|
+
}
|
|
10616
|
+
|
|
10617
10617
|
function parseN3Text(text, opts = {}) {
|
|
10618
|
-
const { baseIri = '', label = '<input>' } = opts || {};
|
|
10618
|
+
const { baseIri = '', label = '<input>', keepSourceArtifacts = true, collectUsedPrefixes = false } = opts || {};
|
|
10619
10619
|
const tokens = lex(text);
|
|
10620
10620
|
const parser = new Parser(tokens);
|
|
10621
10621
|
if (baseIri) parser.prefixes.setBase(baseIri);
|
|
10622
10622
|
const [prefixes, triples, frules, brules, logQueryRules] = parser.parseDocument();
|
|
10623
|
-
|
|
10623
|
+
|
|
10624
|
+
const doc = { prefixes, triples, frules, brules, logQueryRules, label };
|
|
10625
|
+
|
|
10626
|
+
if (collectUsedPrefixes) {
|
|
10627
|
+
Object.defineProperty(doc, 'usedPrefixes', {
|
|
10628
|
+
value: prefixesUsedInTokens(tokens, prefixes),
|
|
10629
|
+
enumerable: false,
|
|
10630
|
+
writable: false,
|
|
10631
|
+
configurable: true,
|
|
10632
|
+
});
|
|
10633
|
+
}
|
|
10634
|
+
|
|
10635
|
+
if (keepSourceArtifacts) {
|
|
10636
|
+
doc.tokens = tokens;
|
|
10637
|
+
doc.text = text;
|
|
10638
|
+
}
|
|
10639
|
+
|
|
10640
|
+
return doc;
|
|
10624
10641
|
}
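The non-enumerable `usedPrefixes` attachment above keeps the set out of serialized output such as the `--ast` JSON while leaving it reachable to callers. A self-contained demonstration of that property behavior:

```javascript
// JSON.stringify (and for...in) only see enumerable properties, so a
// non-enumerable property stays out of serialized output but remains
// directly accessible.
const doc = { triples: [], label: '<input>' };
Object.defineProperty(doc, 'usedPrefixes', {
  value: new Set(['ex', 'xsd']),
  enumerable: false, // hidden from JSON.stringify
  writable: false,
  configurable: true,
});

const visible = doc.usedPrefixes.has('ex'); // still reachable directly
const json = JSON.stringify(doc);           // does not mention usedPrefixes
```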
|
|
10625
10642
|
|
|
10626
10643
|
function sourceBlankPrefix(sourceIndex) {
|
|
@@ -10679,16 +10696,27 @@ ${triples.map((tr) => ` ${tripleToN3(tr, prefixes)}`).join('\n')}
|
|
|
10679
10696
|
return out;
|
|
10680
10697
|
}
|
|
10681
10698
|
|
|
10682
|
-
|
|
10699
|
+
const out = {
|
|
10683
10700
|
prefixes: doc.prefixes,
|
|
10684
10701
|
triples: (doc.triples || []).map(cloneTriple),
|
|
10685
10702
|
frules: (doc.frules || []).map(cloneRule),
|
|
10686
10703
|
brules: (doc.brules || []).map(cloneRule),
|
|
10687
10704
|
logQueryRules: (doc.logQueryRules || []).map(cloneRule),
|
|
10688
|
-
tokens: doc.tokens,
|
|
10689
|
-
text: doc.text,
|
|
10690
10705
|
label: doc.label,
|
|
10691
10706
|
};
|
|
10707
|
+
|
|
10708
|
+
if (doc.usedPrefixes instanceof Set) {
|
|
10709
|
+
Object.defineProperty(out, 'usedPrefixes', {
|
|
10710
|
+
value: new Set(doc.usedPrefixes),
|
|
10711
|
+
enumerable: false,
|
|
10712
|
+
writable: false,
|
|
10713
|
+
configurable: true,
|
|
10714
|
+
});
|
|
10715
|
+
}
|
|
10716
|
+
if (Object.prototype.hasOwnProperty.call(doc, 'tokens')) out.tokens = doc.tokens;
|
|
10717
|
+
if (Object.prototype.hasOwnProperty.call(doc, 'text')) out.text = doc.text;
|
|
10718
|
+
|
|
10719
|
+
return out;
|
|
10692
10720
|
}
|
|
10693
10721
|
|
|
10694
10722
|
function mergePrefixEnvs(target, source) {
|
|
@@ -10707,9 +10735,10 @@ ${triples.map((tr) => ` ${tripleToN3(tr, prefixes)}`).join('\n')}
|
|
|
10707
10735
|
function mergeParsedDocuments(docs, opts = {}) {
|
|
10708
10736
|
const documents = Array.isArray(docs) ? docs : [];
|
|
10709
10737
|
const scopeBlankNodes = typeof opts.scopeBlankNodes === 'boolean' ? opts.scopeBlankNodes : documents.length > 1;
|
|
10738
|
+
const keepSources = !!opts.keepSources || !!opts.keepSourceArtifacts;
|
|
10710
10739
|
|
|
10711
10740
|
const merged = emptyParsedDocument();
|
|
10712
|
-
const mergedSources = [];
|
|
10741
|
+
const mergedSources = keepSources ? [] : null;
|
|
10713
10742
|
|
|
10714
10743
|
for (let i = 0; i < documents.length; i++) {
|
|
10715
10744
|
const originalDoc = documents[i] || emptyParsedDocument();
|
|
@@ -10720,15 +10749,30 @@ ${triples.map((tr) => ` ${tripleToN3(tr, prefixes)}`).join('\n')}
|
|
|
10720
10749
|
merged.frules.push(...(doc.frules || []));
|
|
10721
10750
|
merged.brules.push(...(doc.brules || []));
|
|
10722
10751
|
merged.logQueryRules.push(...(doc.logQueryRules || []));
|
|
10723
|
-
|
|
10752
|
+
|
|
10753
|
+
if (doc.usedPrefixes instanceof Set) {
|
|
10754
|
+
if (!(merged.usedPrefixes instanceof Set)) {
|
|
10755
|
+
Object.defineProperty(merged, 'usedPrefixes', {
|
|
10756
|
+
value: new Set(),
|
|
10757
|
+
enumerable: false,
|
|
10758
|
+
writable: false,
|
|
10759
|
+
configurable: true,
|
|
10760
|
+
});
|
|
10761
|
+
}
|
|
10762
|
+
for (const pfx of doc.usedPrefixes) merged.usedPrefixes.add(pfx);
|
|
10763
|
+
}
|
|
10764
|
+
|
|
10765
|
+
if (keepSources) mergedSources.push(doc);
|
|
10724
10766
|
}
|
|
10725
10767
|
|
|
10726
|
-
|
|
10727
|
-
|
|
10728
|
-
|
|
10729
|
-
|
|
10730
|
-
|
|
10731
|
-
|
|
10768
|
+
if (keepSources) {
|
|
10769
|
+
Object.defineProperty(merged, 'sources', {
|
|
10770
|
+
value: mergedSources,
|
|
10771
|
+
enumerable: false,
|
|
10772
|
+
writable: false,
|
|
10773
|
+
configurable: true,
|
|
10774
|
+
});
|
|
10775
|
+
}
|
|
10732
10776
|
|
|
10733
10777
|
return merged;
|
|
10734
10778
|
}
|
|
@@ -10760,14 +10804,17 @@ ${triples.map((tr) => ` ${tripleToN3(tr, prefixes)}`).join('\n')}
|
|
|
10760
10804
|
if (!isN3SourceListInput(input)) return null;
|
|
10761
10805
|
const sources = input.sources.map(normalizeN3SourceItem);
|
|
10762
10806
|
const defaultBaseIri = typeof opts.baseIri === 'string' ? opts.baseIri : '';
|
|
10763
|
-
const parsed = sources.map((source
|
|
10807
|
+
const parsed = sources.map((source) =>
|
|
10764
10808
|
parseN3Text(source.text, {
|
|
10765
10809
|
label: source.label,
|
|
10766
10810
|
baseIri: source.baseIri || (sources.length === 1 ? defaultBaseIri : ''),
|
|
10811
|
+
collectUsedPrefixes: true,
|
|
10812
|
+
keepSourceArtifacts: !!opts.keepSourceArtifacts,
|
|
10767
10813
|
}),
|
|
10768
10814
|
);
|
|
10769
10815
|
return mergeParsedDocuments(parsed, {
|
|
10770
10816
|
scopeBlankNodes: typeof input.scopeBlankNodes === 'boolean' ? input.scopeBlankNodes : parsed.length > 1,
|
|
10817
|
+
keepSources: !!opts.keepSourceArtifacts,
|
|
10771
10818
|
});
|
|
10772
10819
|
}
|
|
10773
10820
|
|
|
@@ -10776,6 +10823,7 @@ ${triples.map((tr) => ` ${tripleToN3(tr, prefixes)}`).join('\n')}
|
|
|
10776
10823
|
parseN3Text,
|
|
10777
10824
|
mergeParsedDocuments,
|
|
10778
10825
|
scopeBlankNodesInDocument,
|
|
10826
|
+
prefixesUsedInTokens,
|
|
10779
10827
|
isN3SourceListInput,
|
|
10780
10828
|
parseN3SourceList,
|
|
10781
10829
|
};
|
|
@@ -13272,9 +13320,23 @@ ${triples.map((tr) => ` ${tripleToN3(tr, prefixes)}`).join('\n')}
|
|
|
13272
13320
|
|
|
13273
13321
|
'use strict';
|
|
13274
13322
|
|
|
13275
|
-
const {
|
|
13323
|
+
const {
|
|
13324
|
+
LOG_NS,
|
|
13325
|
+
Iri,
|
|
13326
|
+
Var,
|
|
13327
|
+
Blank,
|
|
13328
|
+
ListTerm,
|
|
13329
|
+
OpenListTerm,
|
|
13330
|
+
GraphTerm,
|
|
13331
|
+
Triple,
|
|
13332
|
+
copyQuotedGraphMetadata,
|
|
13333
|
+
} = require('./prelude');
|
|
13276
13334
|
|
|
13277
13335
|
function liftBlankRuleVars(premise, conclusion) {
|
|
13336
|
+
function isLogIncludesLikePredicate(p) {
|
|
13337
|
+
return p instanceof Iri && (p.value === LOG_NS + 'includes' || p.value === LOG_NS + 'notIncludes');
|
|
13338
|
+
}
|
|
13339
|
+
|
|
13278
13340
|
// Map blank labels to stable rule-local variable names.
|
|
13279
13341
|
// This runs at rule construction time; keep it simple and allocation-light.
|
|
13280
13342
|
const mapping = Object.create(null);
|
|
@@ -13335,9 +13397,19 @@ ${triples.map((tr) => ` ${tripleToN3(tr, prefixes)}`).join('\n')}
|
|
|
13335
13397
|
return t;
|
|
13336
13398
|
}
|
|
13337
13399
|
|
|
13338
|
-
const newPremise = premise.map(
|
|
13339
|
-
|
|
13340
|
-
|
|
13400
|
+
const newPremise = premise.map((tr) => {
|
|
13401
|
+
// In log:includes / log:notIncludes, quoted formula operands are formulas
|
|
13402
|
+
// consumed by the builtin rather than ordinary triple patterns. Keep their
|
|
13403
|
+
// local blank nodes as Blank terms so the builtin can treat them as local
|
|
13404
|
+
// existentials, and bindings returned from an explicit scope are blank nodes
|
|
13405
|
+
// instead of synthetic rule variables such as ?_b1.
|
|
13406
|
+
const keepFormulaBlanks = isLogIncludesLikePredicate(tr.p);
|
|
13407
|
+
return new Triple(
|
|
13408
|
+
keepFormulaBlanks && tr.s instanceof GraphTerm ? copyQuotedTerm(tr.s) : convertTerm(tr.s, true),
|
|
13409
|
+
convertTerm(tr.p, true),
|
|
13410
|
+
keepFormulaBlanks && tr.o instanceof GraphTerm ? copyQuotedTerm(tr.o) : convertTerm(tr.o, true),
|
|
13411
|
+
);
|
|
13412
|
+
});
|
|
13341
13413
|
return [newPremise, conclusion];
|
|
13342
13414
|
}
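The skip logic for `log:includes` / `log:notIncludes` operands can be modeled compactly. This is a simplified sketch with a toy term representation; the real `liftBlankRuleVars` also recurses into lists and nested graphs, which this sketch omits:

```javascript
// Blanks in ordinary premise positions become rule variables (?_b1, ...),
// but quoted-formula operands of log:includes / log:notIncludes keep
// their blanks formula-local.
const LOG_NS = 'http://www.w3.org/2000/10/swap/log#';

function liftBlanks(premise) {
  const mapping = Object.create(null);
  let n = 0;
  const lift = (t) =>
    t.kind === 'blank'
      ? { kind: 'var', name: (mapping[t.label] ??= `?_b${++n}`) }
      : t;
  const isIncludesLike = (p) =>
    p.kind === 'iri' &&
    (p.value === LOG_NS + 'includes' || p.value === LOG_NS + 'notIncludes');
  return premise.map((tr) => ({
    s: isIncludesLike(tr.p) && tr.s.kind === 'graph' ? tr.s : lift(tr.s),
    p: tr.p,
    o: isIncludesLike(tr.p) && tr.o.kind === 'graph' ? tr.o : lift(tr.o),
  }));
}

const b = { kind: 'blank', label: 'x' };
const g = { kind: 'graph', triples: [] };
const inc = { kind: 'iri', value: LOG_NS + 'includes' };
const out = liftBlanks([
  { s: b, p: { kind: 'iri', value: 'ex:p' }, o: b },
  { s: g, p: inc, o: g },
]);
// out[0] uses ?_b1 in both positions; out[1] keeps its graph operands.
```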
|
|
13343
13415
|
|
package/eyeling.js
CHANGED
|
@@ -5116,6 +5116,8 @@ function main() {
|
|
|
5116
5116
|
parseN3Text(text, {
|
|
5117
5117
|
baseIri: __sourceLabelToBaseIri(sourceLabel),
|
|
5118
5118
|
label: sourceLabel,
|
|
5119
|
+
collectUsedPrefixes: true,
|
|
5120
|
+
keepSourceArtifacts: false,
|
|
5119
5121
|
}),
|
|
5120
5122
|
);
|
|
5121
5123
|
} catch (e) {
|
|
@@ -5133,8 +5135,6 @@ function main() {
|
|
|
5133
5135
|
const frules = mergedDocument.frules;
|
|
5134
5136
|
const brules = mergedDocument.brules;
|
|
5135
5137
|
const qrules = mergedDocument.logQueryRules;
|
|
5136
|
-
const tokenSets = parsedSources.map((source) => ({ tokens: source.tokens, prefixes: source.prefixes }));
|
|
5137
|
-
|
|
5138
5138
|
if (showAst) {
|
|
5139
5139
|
function astReplacer(unusedJsonKey, value) {
|
|
5140
5140
|
if (value instanceof Set) return Array.from(value);
|
|
@@ -5190,75 +5190,6 @@ function main() {
|
|
|
5190
5190
|
// In --stream mode we print prefixes *before* any derivations happen.
|
|
5191
5191
|
// To keep the header small and stable, emit only prefixes that are actually
|
|
5192
5192
|
// used (as QNames) in the *input* N3 program.
|
|
5193
|
-
function prefixesUsedInInputTokens(toks2, prefEnv) {
|
|
5194
|
-
const used = new Set();
|
|
5195
|
-
|
|
5196
|
-
function maybeAddFromQName(name) {
|
|
5197
|
-
if (typeof name !== 'string') return;
|
|
5198
|
-
if (!name.includes(':')) return;
|
|
5199
|
-
if (name.startsWith('_:')) return; // blank node
|
|
5200
|
-
|
|
5201
|
-
// Split only on the first ':'
|
|
5202
|
-
const idx = name.indexOf(':');
|
|
5203
|
-
const p = name.slice(0, idx); // may be '' for ":foo"
|
|
5204
|
-
|
|
5205
|
-
// Ignore things like "http://..." unless that prefix is actually defined.
|
|
5206
|
-
if (!Object.prototype.hasOwnProperty.call(prefEnv.map, p)) return;
|
|
5207
|
-
|
|
5208
|
-
used.add(p);
|
|
5209
|
-
}
|
|
5210
|
-
|
|
5211
|
-
for (let i = 0; i < toks2.length; i++) {
|
|
5212
|
-
const t = toks2[i];
|
|
5213
|
-
|
|
5214
|
-
// Skip @prefix ... .
|
|
5215
|
-
if (t.typ === 'AtPrefix') {
|
|
5216
|
-
while (i < toks2.length && toks2[i].typ !== 'Dot' && toks2[i].typ !== 'EOF') i++;
|
|
5217
|
-
continue;
|
|
5218
|
-
}
|
|
5219
|
-
// Skip @base ... .
|
|
5220
|
-
if (t.typ === 'AtBase') {
|
|
5221
|
-
while (i < toks2.length && toks2[i].typ !== 'Dot' && toks2[i].typ !== 'EOF') i++;
|
|
5222
|
-
continue;
|
|
5223
|
-
}
|
|
5224
|
-
|
|
5225
|
-
// Skip SPARQL/Turtle PREFIX pfx: <iri>
|
|
5226
|
-
if (
|
|
5227
|
-
t.typ === 'Ident' &&
|
|
5228
|
-
typeof t.value === 'string' &&
|
|
5229
|
-
t.value.toLowerCase() === 'prefix' &&
|
|
5230
|
-
toks2[i + 1] &&
|
|
5231
|
-
toks2[i + 1].typ === 'Ident' &&
|
|
5232
|
-
typeof toks2[i + 1].value === 'string' &&
|
|
5233
|
-
toks2[i + 1].value.endsWith(':') &&
|
|
5234
|
-
toks2[i + 2] &&
|
|
5235
|
-
(toks2[i + 2].typ === 'IriRef' || toks2[i + 2].typ === 'Ident')
|
|
5236
|
-
) {
|
|
5237
|
-
i += 2;
|
|
5238
|
-
continue;
|
|
5239
|
-
}
|
|
5240
|
-
|
|
5241
|
-
// Skip SPARQL BASE <iri>
|
|
5242
|
-
if (
|
|
5243
|
-
t.typ === 'Ident' &&
|
|
5244
|
-
typeof t.value === 'string' &&
|
|
5245
|
-
t.value.toLowerCase() === 'base' &&
|
|
5246
|
-
toks2[i + 1] &&
|
|
5247
|
-
toks2[i + 1].typ === 'IriRef'
|
|
5248
|
-
) {
|
|
5249
|
-
i += 1;
|
|
5250
|
-
continue;
|
|
5251
|
-
}
|
|
5252
|
-
|
|
5253
|
-
// Count QNames in identifiers (including datatypes like xsd:integer).
|
|
5254
|
-
if (t.typ === 'Ident') {
|
|
5255
|
-
maybeAddFromQName(t.value);
|
|
5256
|
-
}
|
|
5257
|
-
}
|
|
5258
|
-
|
|
5259
|
-
return used;
|
|
5260
|
-
}
|
|
5261
|
-
|
|
5262
5193
|
function restrictPrefixEnv(prefEnv, usedSet) {
|
|
5263
5194
|
const m = {};
|
|
5264
5195
|
for (const p of usedSet) {
|
|
@@ -5276,10 +5207,7 @@ function main() {
|
|
|
5276
5207
|
const mayAutoRenderOutputStrings = programMayProduceOutputStrings(triples, frules, qrules);
|
|
5277
5208
|
|
|
5278
5209
|
if (streamMode && !hasQueries && !mayAutoRenderOutputStrings) {
|
|
5279
|
-
const usedInInput = new Set();
|
|
5280
|
-
for (const source of tokenSets) {
|
|
5281
|
-
for (const pfx of prefixesUsedInInputTokens(source.tokens, source.prefixes)) usedInInput.add(pfx);
|
|
5282
|
-
}
|
|
5210
|
+
const usedInInput = mergedDocument.usedPrefixes instanceof Set ? new Set(mergedDocument.usedPrefixes) : new Set();
|
|
5283
5211
|
const outPrefixes = restrictPrefixEnv(prefixes, usedInInput);
|
|
5284
5212
|
|
|
5285
5213
|
// Ensure log:trace uses the same compact prefix set as the output.
|
|
@@ -10581,13 +10509,101 @@ function emptyParsedDocument() {
|
|
|
10581
10509
|
};
|
|
10582
10510
|
}
|
|
10583
10511
|
|
|
10512
|
+
function prefixesUsedInTokens(tokens, prefEnv) {
|
|
10513
|
+
const used = new Set();
|
|
10514
|
+
const toks = Array.isArray(tokens) ? tokens : [];
|
|
10515
|
+
const prefixes = prefEnv && prefEnv.map ? prefEnv.map : {};
|
|
10516
|
+
|
|
10517
|
+
function maybeAddFromQName(name) {
|
|
10518
|
+
if (typeof name !== 'string') return;
|
|
10519
|
+
if (!name.includes(':')) return;
|
|
10520
|
+
if (name.startsWith('_:')) return; // blank node
|
|
10521
|
+
|
|
10522
|
+
// Split only on the first ':'; the empty prefix is valid for ":foo".
|
|
10523
|
+
const idx = name.indexOf(':');
|
|
10524
|
+
const p = name.slice(0, idx);
|
|
10525
|
+
|
|
10526
|
+
// Ignore strings like "http://..." unless that prefix is actually defined.
|
|
10527
|
+
if (!Object.prototype.hasOwnProperty.call(prefixes, p)) return;
|
|
10528
|
+
|
|
10529
|
+
used.add(p);
|
|
10530
|
+
}
|
|
10531
|
+
|
|
10532
|
+
for (let i = 0; i < toks.length; i++) {
|
|
10533
|
+
const t = toks[i];
|
|
10534
|
+
if (!t) continue;
|
|
10535
|
+
|
|
10536
|
+
// Skip @prefix ... .
|
|
10537
|
+
if (t.typ === 'AtPrefix') {
|
|
10538
|
+
while (i < toks.length && toks[i].typ !== 'Dot' && toks[i].typ !== 'EOF') i++;
|
|
10539
|
+
continue;
|
|
10540
|
+
}
|
|
10541
|
+
|
|
10542
|
+
// Skip @base ... .
|
|
10543
|
+
if (t.typ === 'AtBase') {
|
|
10544
|
+
while (i < toks.length && toks[i].typ !== 'Dot' && toks[i].typ !== 'EOF') i++;
|
|
10545
|
+
continue;
|
|
10546
|
+
}
|
|
10547
|
+
|
|
10548
|
+
// Skip SPARQL/Turtle PREFIX pfx: <iri>
|
|
10549
|
+
if (
|
|
10550
|
+
t.typ === 'Ident' &&
|
|
10551
|
+
typeof t.value === 'string' &&
|
|
10552
|
+
t.value.toLowerCase() === 'prefix' &&
|
|
10553
|
+
toks[i + 1] &&
|
|
10554
|
+
toks[i + 1].typ === 'Ident' &&
|
|
10555
|
+
typeof toks[i + 1].value === 'string' &&
|
|
10556
|
+
toks[i + 1].value.endsWith(':') &&
|
|
10557
|
+
toks[i + 2] &&
|
|
10558
|
+
(toks[i + 2].typ === 'IriRef' || toks[i + 2].typ === 'Ident')
|
|
10559
|
+
) {
|
|
10560
|
+
i += 2;
|
|
10561
|
+
continue;
|
|
10562
|
+
}
|
|
10563
|
+
|
|
10564
|
+
// Skip SPARQL BASE <iri>
|
|
10565
|
+
if (
|
|
10566
|
+
t.typ === 'Ident' &&
|
|
10567
|
+
typeof t.value === 'string' &&
|
|
10568
|
+
t.value.toLowerCase() === 'base' &&
|
|
10569
|
+
toks[i + 1] &&
|
|
10570
|
+
toks[i + 1].typ === 'IriRef'
|
|
10571
|
+
) {
|
|
10572
|
+
i += 1;
|
|
10573
|
+
continue;
|
|
10574
|
+
}
|
|
10575
|
+
|
|
10576
|
+
// Count QNames in identifiers, including datatypes like xsd:integer.
|
|
10577
|
+
if (t.typ === 'Ident') maybeAddFromQName(t.value);
|
|
10578
|
+
}
|
|
10579
|
+
|
|
10580
|
+
return used;
|
|
10581
|
+
}
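The scan above can be exercised in isolation. Here is a simplified reimplementation over a toy token array; it handles only `@prefix` skipping and `Ident` QNames, whereas the full function also skips `@base` and the SPARQL `PREFIX`/`BASE` forms:

```javascript
// Collect QName prefixes that occur in identifier tokens, skipping
// @prefix declarations and anything not declared in the prefix map.
function usedPrefixes(toks, prefixMap) {
  const used = new Set();
  for (let i = 0; i < toks.length; i++) {
    const t = toks[i];
    if (t.typ === 'AtPrefix') {
      // Skip the whole "@prefix pfx: <iri> ." declaration.
      while (i < toks.length && toks[i].typ !== 'Dot') i++;
      continue;
    }
    if (t.typ !== 'Ident' || typeof t.value !== 'string') continue;
    if (t.value.startsWith('_:')) continue; // blank node, not a QName
    const idx = t.value.indexOf(':');
    if (idx < 0) continue;
    const p = t.value.slice(0, idx); // '' is valid for ":foo"
    if (Object.prototype.hasOwnProperty.call(prefixMap, p)) used.add(p);
  }
  return used;
}

const toks = [
  { typ: 'AtPrefix' },
  { typ: 'Ident', value: 'ex:' },
  { typ: 'IriRef', value: 'http://example.org/' },
  { typ: 'Dot' },
  { typ: 'Ident', value: 'ex:a' },
  { typ: 'Ident', value: 'xsd:integer' },
  { typ: 'Ident', value: 'undeclared:x' },
];
const used = usedPrefixes(toks, {
  ex: 'http://example.org/',
  xsd: 'http://www.w3.org/2001/XMLSchema#',
});
// Only declared prefixes that appear outside declarations are collected.
```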
|
|
10582
|
+
|
|
10584
10583
|
function parseN3Text(text, opts = {}) {
|
|
10585
|
-
const { baseIri = '', label = '<input>' } = opts || {};
|
|
10584
|
+
const { baseIri = '', label = '<input>', keepSourceArtifacts = true, collectUsedPrefixes = false } = opts || {};
|
|
10586
10585
|
const tokens = lex(text);
|
|
10587
10586
|
const parser = new Parser(tokens);
|
|
10588
10587
|
if (baseIri) parser.prefixes.setBase(baseIri);
|
|
10589
10588
|
const [prefixes, triples, frules, brules, logQueryRules] = parser.parseDocument();
|
|
10590
|
-
|
|
10589
|
+
|
|
10590
|
+
const doc = { prefixes, triples, frules, brules, logQueryRules, label };
|
|
10591
|
+
|
|
10592
|
+
if (collectUsedPrefixes) {
|
|
10593
|
+
Object.defineProperty(doc, 'usedPrefixes', {
|
|
10594
|
+
value: prefixesUsedInTokens(tokens, prefixes),
|
|
10595
|
+
enumerable: false,
|
|
10596
|
+
writable: false,
|
|
10597
|
+
configurable: true,
|
|
10598
|
+
});
|
|
10599
|
+
}
|
|
10600
|
+
|
|
10601
|
+
if (keepSourceArtifacts) {
|
|
10602
|
+
doc.tokens = tokens;
|
|
10603
|
+
doc.text = text;
|
|
10604
|
+
}
|
|
10605
|
+
|
|
10606
|
+
return doc;
|
|
10591
10607
|
}
|
|
10592
10608
|
|
|
10593
10609
|
function sourceBlankPrefix(sourceIndex) {
|
|
@@ -10646,16 +10662,27 @@ function scopeBlankNodesInDocument(doc, sourceIndex) {
|
|
|
10646
10662
|
return out;
|
|
10647
10663
|
}
|
|
10648
10664
|
|
|
10649
|
-
|
|
10665
|
+
const out = {
|
|
10650
10666
|
prefixes: doc.prefixes,
|
|
10651
10667
|
triples: (doc.triples || []).map(cloneTriple),
|
|
10652
10668
|
frules: (doc.frules || []).map(cloneRule),
|
|
10653
10669
|
brules: (doc.brules || []).map(cloneRule),
|
|
10654
10670
|
logQueryRules: (doc.logQueryRules || []).map(cloneRule),
|
|
10655
|
-
tokens: doc.tokens,
|
|
10656
|
-
text: doc.text,
|
|
10657
10671
|
label: doc.label,
|
|
10658
10672
|
};
|
|
10673
|
+
|
|
10674
|
+
if (doc.usedPrefixes instanceof Set) {
|
|
10675
|
+
Object.defineProperty(out, 'usedPrefixes', {
|
|
10676
|
+
value: new Set(doc.usedPrefixes),
|
|
10677
|
+
enumerable: false,
|
|
10678
|
+
writable: false,
|
|
10679
|
+
configurable: true,
|
|
10680
|
+
});
|
|
10681
|
+
}
|
|
10682
|
+
if (Object.prototype.hasOwnProperty.call(doc, 'tokens')) out.tokens = doc.tokens;
|
|
10683
|
+
if (Object.prototype.hasOwnProperty.call(doc, 'text')) out.text = doc.text;
|
|
10684
|
+
|
|
10685
|
+
return out;
|
|
10659
10686
|
}
|
|
10660
10687
|
|
|
10661
10688
|
function mergePrefixEnvs(target, source) {
|
|
@@ -10673,11 +10700,11 @@ function mergePrefixEnvs(target, source) {
|
|
|
10673
10700
|
|
|
10674
10701
|
function mergeParsedDocuments(docs, opts = {}) {
|
|
10675
10702
|
const documents = Array.isArray(docs) ? docs : [];
|
|
10676
|
-
const scopeBlankNodes =
|
|
10677
|
-
|
|
10703
|
+
const scopeBlankNodes = typeof opts.scopeBlankNodes === 'boolean' ? opts.scopeBlankNodes : documents.length > 1;
|
|
10704
|
+
+  const keepSources = !!opts.keepSources || !!opts.keepSourceArtifacts;

   const merged = emptyParsedDocument();
-  const mergedSources = [];
+  const mergedSources = keepSources ? [] : null;

   for (let i = 0; i < documents.length; i++) {
     const originalDoc = documents[i] || emptyParsedDocument();
@@ -10688,26 +10715,36 @@ function mergeParsedDocuments(docs, opts = {}) {
     merged.frules.push(...(doc.frules || []));
     merged.brules.push(...(doc.brules || []));
     merged.logQueryRules.push(...(doc.logQueryRules || []));
-
+
+    if (doc.usedPrefixes instanceof Set) {
+      if (!(merged.usedPrefixes instanceof Set)) {
+        Object.defineProperty(merged, 'usedPrefixes', {
+          value: new Set(),
+          enumerable: false,
+          writable: false,
+          configurable: true,
+        });
+      }
+      for (const pfx of doc.usedPrefixes) merged.usedPrefixes.add(pfx);
+    }
+
+    if (keepSources) mergedSources.push(doc);
   }

-
-
-
-
-
-
+  if (keepSources) {
+    Object.defineProperty(merged, 'sources', {
+      value: mergedSources,
+      enumerable: false,
+      writable: false,
+      configurable: true,
+    });
+  }

   return merged;
 }

 function isN3SourceListInput(input) {
-  return !!(
-    input &&
-    typeof input === 'object' &&
-    !Array.isArray(input) &&
-    Array.isArray(input.sources)
-  );
+  return !!(input && typeof input === 'object' && !Array.isArray(input) && Array.isArray(input.sources));
 }

 function normalizeN3SourceItem(source, index) {
@@ -10733,14 +10770,17 @@ function parseN3SourceList(input, opts = {}) {
   if (!isN3SourceListInput(input)) return null;
   const sources = input.sources.map(normalizeN3SourceItem);
   const defaultBaseIri = typeof opts.baseIri === 'string' ? opts.baseIri : '';
-  const parsed = sources.map(
+  const parsed = sources.map(source =>
     parseN3Text(source.text, {
       label: source.label,
       baseIri: source.baseIri || (sources.length === 1 ? defaultBaseIri : ''),
+      collectUsedPrefixes: true,
+      keepSourceArtifacts: !!opts.keepSourceArtifacts,
     }),
   );
   return mergeParsedDocuments(parsed, {
     scopeBlankNodes: typeof input.scopeBlankNodes === 'boolean' ? input.scopeBlankNodes : parsed.length > 1,
+    keepSources: !!opts.keepSourceArtifacts,
   });
 }

@@ -10749,6 +10789,7 @@ module.exports = {
   parseN3Text,
   mergeParsedDocuments,
   scopeBlankNodesInDocument,
+  prefixesUsedInTokens,
   isN3SourceListInput,
   parseN3SourceList,
 };

@@ -13237,9 +13278,23 @@ module.exports = {

 'use strict';

-const {
+const {
+  LOG_NS,
+  Iri,
+  Var,
+  Blank,
+  ListTerm,
+  OpenListTerm,
+  GraphTerm,
+  Triple,
+  copyQuotedGraphMetadata,
+} = require('./prelude');

 function liftBlankRuleVars(premise, conclusion) {
+  function isLogIncludesLikePredicate(p) {
+    return p instanceof Iri && (p.value === LOG_NS + 'includes' || p.value === LOG_NS + 'notIncludes');
+  }
+
   // Map blank labels to stable rule-local variable names.
   // This runs at rule construction time; keep it simple and allocation-light.
   const mapping = Object.create(null);
@@ -13296,9 +13351,19 @@ function liftBlankRuleVars(premise, conclusion) {
     return t;
   }

-  const newPremise = premise.map(
-
-
+  const newPremise = premise.map((tr) => {
+    // In log:includes / log:notIncludes, quoted formula operands are formulas
+    // consumed by the builtin rather than ordinary triple patterns. Keep their
+    // local blank nodes as Blank terms so the builtin can treat them as local
+    // existentials, and bindings returned from an explicit scope are blank nodes
+    // instead of synthetic rule variables such as ?_b1.
+    const keepFormulaBlanks = isLogIncludesLikePredicate(tr.p);
+    return new Triple(
+      keepFormulaBlanks && tr.s instanceof GraphTerm ? copyQuotedTerm(tr.s) : convertTerm(tr.s, true),
+      convertTerm(tr.p, true),
+      keepFormulaBlanks && tr.o instanceof GraphTerm ? copyQuotedTerm(tr.o) : convertTerm(tr.o, true),
+    );
+  });
   return [newPremise, conclusion];
 }

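The `liftBlankRuleVars` change above can be summarized with a minimal standalone model. This sketch is illustrative only: plain objects with a `kind` field stand in for eyeling's real term classes (`Blank`, `GraphTerm`, `Iri`, `Var`), and `liftTriple` is a hypothetical simplification of what `premise.map((tr) => ...)` does — blanks become rule variables, except that quoted-formula operands of `log:includes` / `log:notIncludes` are left alone so their blanks stay formula-local.

```javascript
// Illustrative model of the new lifting rule; not eyeling's actual API.
const LOG_NS = 'http://www.w3.org/2000/10/swap/log#';

function isLogIncludesLikePredicate(p) {
  return p.kind === 'iri' && (p.value === LOG_NS + 'includes' || p.value === LOG_NS + 'notIncludes');
}

// Lift blank nodes to rule variables, except inside quoted-formula ("graph")
// operands of log:includes / log:notIncludes, which keep their own blanks.
function liftTriple(triple) {
  const lift = (t) => (t.kind === 'blank' ? { kind: 'var', name: '_' + t.label } : t);
  const keepFormulaBlanks = isLogIncludesLikePredicate(triple.p);
  const side = (t) => (keepFormulaBlanks && t.kind === 'graph' ? t : lift(t));
  return { s: side(triple.s), p: triple.p, o: side(triple.o) };
}
```

Note that only graph-term operands are preserved: a bare blank node in subject or object position is still lifted even when the predicate is `log:includes`, matching the `tr.s instanceof GraphTerm` guard in the diff.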
package/lib/cli.js
CHANGED
@@ -208,6 +208,8 @@ function main() {
       parseN3Text(text, {
         baseIri: __sourceLabelToBaseIri(sourceLabel),
         label: sourceLabel,
+        collectUsedPrefixes: true,
+        keepSourceArtifacts: false,
       }),
     );
   } catch (e) {
@@ -225,8 +227,6 @@ function main() {
   const frules = mergedDocument.frules;
   const brules = mergedDocument.brules;
   const qrules = mergedDocument.logQueryRules;
-  const tokenSets = parsedSources.map((source) => ({ tokens: source.tokens, prefixes: source.prefixes }));
-
   if (showAst) {
     function astReplacer(unusedJsonKey, value) {
       if (value instanceof Set) return Array.from(value);
@@ -282,75 +282,6 @@ function main() {
   // In --stream mode we print prefixes *before* any derivations happen.
   // To keep the header small and stable, emit only prefixes that are actually
   // used (as QNames) in the *input* N3 program.
-  function prefixesUsedInInputTokens(toks2, prefEnv) {
-    const used = new Set();
-
-    function maybeAddFromQName(name) {
-      if (typeof name !== 'string') return;
-      if (!name.includes(':')) return;
-      if (name.startsWith('_:')) return; // blank node
-
-      // Split only on the first ':'
-      const idx = name.indexOf(':');
-      const p = name.slice(0, idx); // may be '' for ":foo"
-
-      // Ignore things like "http://..." unless that prefix is actually defined.
-      if (!Object.prototype.hasOwnProperty.call(prefEnv.map, p)) return;
-
-      used.add(p);
-    }
-
-    for (let i = 0; i < toks2.length; i++) {
-      const t = toks2[i];
-
-      // Skip @prefix ... .
-      if (t.typ === 'AtPrefix') {
-        while (i < toks2.length && toks2[i].typ !== 'Dot' && toks2[i].typ !== 'EOF') i++;
-        continue;
-      }
-      // Skip @base ... .
-      if (t.typ === 'AtBase') {
-        while (i < toks2.length && toks2[i].typ !== 'Dot' && toks2[i].typ !== 'EOF') i++;
-        continue;
-      }
-
-      // Skip SPARQL/Turtle PREFIX pfx: <iri>
-      if (
-        t.typ === 'Ident' &&
-        typeof t.value === 'string' &&
-        t.value.toLowerCase() === 'prefix' &&
-        toks2[i + 1] &&
-        toks2[i + 1].typ === 'Ident' &&
-        typeof toks2[i + 1].value === 'string' &&
-        toks2[i + 1].value.endsWith(':') &&
-        toks2[i + 2] &&
-        (toks2[i + 2].typ === 'IriRef' || toks2[i + 2].typ === 'Ident')
-      ) {
-        i += 2;
-        continue;
-      }
-
-      // Skip SPARQL BASE <iri>
-      if (
-        t.typ === 'Ident' &&
-        typeof t.value === 'string' &&
-        t.value.toLowerCase() === 'base' &&
-        toks2[i + 1] &&
-        toks2[i + 1].typ === 'IriRef'
-      ) {
-        i += 1;
-        continue;
-      }
-
-      // Count QNames in identifiers (including datatypes like xsd:integer).
-      if (t.typ === 'Ident') {
-        maybeAddFromQName(t.value);
-      }
-    }
-
-    return used;
-  }
-
   function restrictPrefixEnv(prefEnv, usedSet) {
     const m = {};
     for (const p of usedSet) {
@@ -368,10 +299,7 @@ function main() {
   const mayAutoRenderOutputStrings = programMayProduceOutputStrings(triples, frules, qrules);

   if (streamMode && !hasQueries && !mayAutoRenderOutputStrings) {
-    const usedInInput = new Set();
-    for (const source of tokenSets) {
-      for (const pfx of prefixesUsedInInputTokens(source.tokens, source.prefixes)) usedInInput.add(pfx);
-    }
+    const usedInInput = mergedDocument.usedPrefixes instanceof Set ? new Set(mergedDocument.usedPrefixes) : new Set();
     const outPrefixes = restrictPrefixEnv(prefixes, usedInInput);

     // Ensure log:trace uses the same compact prefix set as the output.
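The cli.js change above replaces a per-source token rescan with the `usedPrefixes` set that the parser now collects, then restricts the emitted `@prefix` header to those prefixes. The restriction step can be sketched on plain maps; `restrictToUsed` here is a hypothetical stand-in for cli.js's `restrictPrefixEnv`, which operates on eyeling's prefix-environment objects rather than plain objects.

```javascript
// Sketch of the --stream header restriction: keep only prefixes that are
// both defined and actually used as QNames in the input.
function restrictToUsed(prefixMap, usedPrefixes) {
  const out = {};
  for (const p of usedPrefixes) {
    // A prefix may be reported as used but undefined (e.g. a bare "http:"); skip it.
    if (Object.prototype.hasOwnProperty.call(prefixMap, p)) out[p] = prefixMap[p];
  }
  return out;
}
```

Because `mergedDocument.usedPrefixes` is the union over all input sources, a prefix declared in one file and used in another still survives the restriction.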
package/lib/multisource.js
CHANGED
@@ -31,13 +31,101 @@ function emptyParsedDocument() {
   };
 }

+function prefixesUsedInTokens(tokens, prefEnv) {
+  const used = new Set();
+  const toks = Array.isArray(tokens) ? tokens : [];
+  const prefixes = prefEnv && prefEnv.map ? prefEnv.map : {};
+
+  function maybeAddFromQName(name) {
+    if (typeof name !== 'string') return;
+    if (!name.includes(':')) return;
+    if (name.startsWith('_:')) return; // blank node
+
+    // Split only on the first ':'; the empty prefix is valid for ":foo".
+    const idx = name.indexOf(':');
+    const p = name.slice(0, idx);
+
+    // Ignore strings like "http://..." unless that prefix is actually defined.
+    if (!Object.prototype.hasOwnProperty.call(prefixes, p)) return;
+
+    used.add(p);
+  }
+
+  for (let i = 0; i < toks.length; i++) {
+    const t = toks[i];
+    if (!t) continue;
+
+    // Skip @prefix ... .
+    if (t.typ === 'AtPrefix') {
+      while (i < toks.length && toks[i].typ !== 'Dot' && toks[i].typ !== 'EOF') i++;
+      continue;
+    }
+
+    // Skip @base ... .
+    if (t.typ === 'AtBase') {
+      while (i < toks.length && toks[i].typ !== 'Dot' && toks[i].typ !== 'EOF') i++;
+      continue;
+    }
+
+    // Skip SPARQL/Turtle PREFIX pfx: <iri>
+    if (
+      t.typ === 'Ident' &&
+      typeof t.value === 'string' &&
+      t.value.toLowerCase() === 'prefix' &&
+      toks[i + 1] &&
+      toks[i + 1].typ === 'Ident' &&
+      typeof toks[i + 1].value === 'string' &&
+      toks[i + 1].value.endsWith(':') &&
+      toks[i + 2] &&
+      (toks[i + 2].typ === 'IriRef' || toks[i + 2].typ === 'Ident')
+    ) {
+      i += 2;
+      continue;
+    }
+
+    // Skip SPARQL BASE <iri>
+    if (
+      t.typ === 'Ident' &&
+      typeof t.value === 'string' &&
+      t.value.toLowerCase() === 'base' &&
+      toks[i + 1] &&
+      toks[i + 1].typ === 'IriRef'
+    ) {
+      i += 1;
+      continue;
+    }
+
+    // Count QNames in identifiers, including datatypes like xsd:integer.
+    if (t.typ === 'Ident') maybeAddFromQName(t.value);
+  }
+
+  return used;
+}
+
 function parseN3Text(text, opts = {}) {
-  const { baseIri = '', label = '<input>' } = opts || {};
+  const { baseIri = '', label = '<input>', keepSourceArtifacts = true, collectUsedPrefixes = false } = opts || {};
   const tokens = lex(text);
   const parser = new Parser(tokens);
   if (baseIri) parser.prefixes.setBase(baseIri);
   const [prefixes, triples, frules, brules, logQueryRules] = parser.parseDocument();
-
+
+  const doc = { prefixes, triples, frules, brules, logQueryRules, label };
+
+  if (collectUsedPrefixes) {
+    Object.defineProperty(doc, 'usedPrefixes', {
+      value: prefixesUsedInTokens(tokens, prefixes),
+      enumerable: false,
+      writable: false,
+      configurable: true,
+    });
+  }
+
+  if (keepSourceArtifacts) {
+    doc.tokens = tokens;
+    doc.text = text;
+  }
+
+  return doc;
 }

 function sourceBlankPrefix(sourceIndex) {
@@ -96,16 +184,27 @@ function scopeBlankNodesInDocument(doc, sourceIndex) {
     return out;
   }

-
+  const out = {
     prefixes: doc.prefixes,
     triples: (doc.triples || []).map(cloneTriple),
     frules: (doc.frules || []).map(cloneRule),
     brules: (doc.brules || []).map(cloneRule),
     logQueryRules: (doc.logQueryRules || []).map(cloneRule),
-    tokens: doc.tokens,
-    text: doc.text,
     label: doc.label,
   };
+
+  if (doc.usedPrefixes instanceof Set) {
+    Object.defineProperty(out, 'usedPrefixes', {
+      value: new Set(doc.usedPrefixes),
+      enumerable: false,
+      writable: false,
+      configurable: true,
+    });
+  }
+  if (Object.prototype.hasOwnProperty.call(doc, 'tokens')) out.tokens = doc.tokens;
+  if (Object.prototype.hasOwnProperty.call(doc, 'text')) out.text = doc.text;
+
+  return out;
 }

 function mergePrefixEnvs(target, source) {
@@ -124,9 +223,10 @@ function mergePrefixEnvs(target, source) {
 function mergeParsedDocuments(docs, opts = {}) {
   const documents = Array.isArray(docs) ? docs : [];
   const scopeBlankNodes = typeof opts.scopeBlankNodes === 'boolean' ? opts.scopeBlankNodes : documents.length > 1;
+  const keepSources = !!opts.keepSources || !!opts.keepSourceArtifacts;

   const merged = emptyParsedDocument();
-  const mergedSources = [];
+  const mergedSources = keepSources ? [] : null;

   for (let i = 0; i < documents.length; i++) {
     const originalDoc = documents[i] || emptyParsedDocument();
@@ -137,15 +237,30 @@ function mergeParsedDocuments(docs, opts = {}) {
     merged.frules.push(...(doc.frules || []));
     merged.brules.push(...(doc.brules || []));
     merged.logQueryRules.push(...(doc.logQueryRules || []));
-
+
+    if (doc.usedPrefixes instanceof Set) {
+      if (!(merged.usedPrefixes instanceof Set)) {
+        Object.defineProperty(merged, 'usedPrefixes', {
+          value: new Set(),
+          enumerable: false,
+          writable: false,
+          configurable: true,
+        });
+      }
+      for (const pfx of doc.usedPrefixes) merged.usedPrefixes.add(pfx);
+    }
+
+    if (keepSources) mergedSources.push(doc);
   }

-
-
-
-
-
-
+  if (keepSources) {
+    Object.defineProperty(merged, 'sources', {
+      value: mergedSources,
+      enumerable: false,
+      writable: false,
+      configurable: true,
+    });
+  }

   return merged;
 }
@@ -177,14 +292,17 @@ function parseN3SourceList(input, opts = {}) {
   if (!isN3SourceListInput(input)) return null;
   const sources = input.sources.map(normalizeN3SourceItem);
   const defaultBaseIri = typeof opts.baseIri === 'string' ? opts.baseIri : '';
-  const parsed = sources.map((source
+  const parsed = sources.map((source) =>
     parseN3Text(source.text, {
       label: source.label,
       baseIri: source.baseIri || (sources.length === 1 ? defaultBaseIri : ''),
+      collectUsedPrefixes: true,
+      keepSourceArtifacts: !!opts.keepSourceArtifacts,
    }),
  );
  return mergeParsedDocuments(parsed, {
    scopeBlankNodes: typeof input.scopeBlankNodes === 'boolean' ? input.scopeBlankNodes : parsed.length > 1,
+    keepSources: !!opts.keepSourceArtifacts,
  });
 }

@@ -193,6 +311,7 @@ module.exports = {
   parseN3Text,
   mergeParsedDocuments,
   scopeBlankNodesInDocument,
+  prefixesUsedInTokens,
   isN3SourceListInput,
   parseN3SourceList,
 };
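A detail worth noting in the multisource.js changes above: the new bookkeeping (`usedPrefixes`, `sources`) is attached with `Object.defineProperty` and `enumerable: false`, so it travels with the parsed document without appearing in `Object.keys` or `JSON.stringify` output (which matters for serializations like the `--ast` dump in cli.js). The sketch below demonstrates that standard JavaScript behavior on a stand-in document object; the field names are taken from the diff, the values are made up.

```javascript
// Non-enumerable properties stay out of key enumeration and JSON output,
// but remain reachable through normal property access.
const doc = { triples: [], label: '<input>' };

Object.defineProperty(doc, 'usedPrefixes', {
  value: new Set(['log', 'xsd']),
  enumerable: false,
  writable: false,
  configurable: true,
});

console.log(Object.keys(doc));            // → [ 'triples', 'label' ]
console.log(JSON.stringify(doc));         // → {"triples":[],"label":"<input>"}
console.log(doc.usedPrefixes.has('log')); // → true
```

`configurable: true` leaves room for a later `mergeParsedDocuments` pass to redefine or delete the property, while `writable: false` prevents accidental reassignment.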
package/lib/rules.js
CHANGED
@@ -7,9 +7,23 @@

 'use strict';

-const {
+const {
+  LOG_NS,
+  Iri,
+  Var,
+  Blank,
+  ListTerm,
+  OpenListTerm,
+  GraphTerm,
+  Triple,
+  copyQuotedGraphMetadata,
+} = require('./prelude');

 function liftBlankRuleVars(premise, conclusion) {
+  function isLogIncludesLikePredicate(p) {
+    return p instanceof Iri && (p.value === LOG_NS + 'includes' || p.value === LOG_NS + 'notIncludes');
+  }
+
   // Map blank labels to stable rule-local variable names.
   // This runs at rule construction time; keep it simple and allocation-light.
   const mapping = Object.create(null);
@@ -66,9 +80,19 @@ function liftBlankRuleVars(premise, conclusion) {
     return t;
   }

-  const newPremise = premise.map(
-
-
+  const newPremise = premise.map((tr) => {
+    // In log:includes / log:notIncludes, quoted formula operands are formulas
+    // consumed by the builtin rather than ordinary triple patterns. Keep their
+    // local blank nodes as Blank terms so the builtin can treat them as local
+    // existentials, and bindings returned from an explicit scope are blank nodes
+    // instead of synthetic rule variables such as ?_b1.
+    const keepFormulaBlanks = isLogIncludesLikePredicate(tr.p);
+    return new Triple(
+      keepFormulaBlanks && tr.s instanceof GraphTerm ? copyQuotedTerm(tr.s) : convertTerm(tr.s, true),
+      convertTerm(tr.p, true),
+      keepFormulaBlanks && tr.o instanceof GraphTerm ? copyQuotedTerm(tr.o) : convertTerm(tr.o, true),
+    );
+  });
   return [newPremise, conclusion];
 }

package/package.json
CHANGED
package/test/api.test.js
CHANGED
@@ -1643,6 +1643,30 @@ _:x :hates { _:foo :making :mess }.
     expect: [/:(?:test)\s+:(?:contains)\s+:(?:success-literal-3)\s*\./, /:(?:test)\s+:(?:is)\s+true\s*\./],
   },

+  {
+    name: '60a regression: log:includes explicit-scope blank node is returned as blank, not synthetic variable',
+    opt: { proofComments: false },
+    input: `@prefix : <http://example.org/ns#> .
+@prefix log: <http://www.w3.org/2000/10/swap/log#>.
+
+{
+  {
+    _:b1 a :Mortal .
+  } log:includes {
+    ?CS ?CP ?CO .
+  } .
+}
+=>
+{
+  _:b2 :conclusion {
+    ?CS ?CP ?CO .
+  } .
+} .
+`,
+    expect: [/_:sk_\d+\s+:(?:conclusion)\s+\{\s*_:b\d+\s+a\s+:(?:Mortal)\s*\.\s*\}\s*\./s],
+    notExpect: [/\?_b\d+\s+a\s+:(?:Mortal)/],
+  },
+
   {
     name: '61 RDF/JS input + rule objects: reason() accepts quads with rules',
     run() {
|