eyeling 1.13.2 → 1.13.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/HANDBOOK.md +2 -0
- package/examples/allen-interval-calculus.n3 +180 -0
- package/examples/deck/odrl-dpv-risk-ranked.md +251 -0
- package/examples/dining-philosophers.n3 +383 -0
- package/examples/input/annotation.ttl +3 -1
- package/examples/input/reifies.ttl +2 -0
- package/examples/input/triple-term.ttl +3 -2
- package/examples/kaprekar.n3 +205 -0
- package/examples/odrl-dpv-ehds-risk-ranked.n3 +473 -0
- package/examples/odrl-dpv-healthcare-risk-ranked.n3 +575 -0
- package/examples/odrl-dpv-risk-ranked.n3 +30 -31
- package/examples/output/allen-interval-calculus.n3 +157 -0
- package/examples/output/dining-philosophers.n3 +808 -0
- package/examples/output/kaprekar.n3 +9992 -0
- package/examples/output/odrl-dpv-ehds-risk-ranked.n3 +144 -0
- package/examples/output/odrl-dpv-healthcare-risk-ranked.n3 +117 -0
- package/examples/output/odrl-dpv-risk-ranked.n3 +70 -6
- package/examples/output/wind-turbine.n3 +6 -0
- package/examples/reifies.n3 +1 -2
- package/examples/triple-term.n3 +3 -3
- package/examples/wind-turbine.n3 +63 -0
- package/eyeling.js +7 -2
- package/lib/cli.js +4 -1
- package/lib/engine.js +3 -1
- package/package.json +1 -1
- package/test/api.test.js +11 -0
- package/tools/n3gen.js +36 -7
package/HANDBOOK.md
CHANGED

````diff
@@ -614,6 +614,8 @@ repeat
 until not changed
 ```
 
+Top-level input triples are kept as parsed (including non-ground triples such as ?X :p :o.). Groundness is enforced when adding derived facts during forward chaining, and when selecting printed/query output triples.
+
 ### 9.2 Strict-ground head optimization
 
 There is a nice micro-compiler optimization in `runFixpoint()`:
````
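The groundness rule described in the added HANDBOOK paragraph can be sketched in plain JavaScript. This is a minimal, hypothetical model — not Eyeling's actual internals — assuming variables are represented as objects with a `varName` property:

```javascript
// Hypothetical term shapes: a variable is { varName: "X" }, an IRI or
// literal is { value: "..." }, a list is { items: [...] }, and a triple
// is a 3-element array [subject, predicate, object].
function isGroundTerm(term) {
  if (term.varName !== undefined) return false;   // variables are non-ground
  if (Array.isArray(term.items)) return term.items.every(isGroundTerm);
  return true;                                    // IRI or literal
}

function isGroundTriple(triple) {
  return triple.every(isGroundTerm);
}

// Input triples are kept as parsed (even non-ground ones); derived facts,
// by contrast, are only asserted when they are fully ground.
function addDerived(store, triple) {
  if (!isGroundTriple(triple)) return false;      // enforce groundness here
  store.push(triple);
  return true;
}
```

With this split, a non-ground top-level triple like `?X :p :o.` survives parsing untouched, while forward chaining can never inject an unbound variable into the derived-fact store.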
package/examples/allen-interval-calculus.n3
ADDED

```n3
# ======================================================================
# Allen Interval Calculus
#
# This file illustrates the full set of 13 base relations of Allen's
# interval algebra, encoded as forward-chaining N3 rules.
#
# Data model:
# - Any resource with :start and :end denotes a closed-open interval
#   [start, end) over time.
# - :start and :end are xsd:dateTime typed literals.
# - Optionally, an interval may have :duration (xsd:duration).
#
# Built-ins (EYE / Eyeling style):
# - math:lessThan and math:equalTo can compare xsd:dateTime (via epoch
#   seconds) and xsd:duration (treated as seconds).
# - Timestamp arithmetic uses:
#     (dateTime duration)   math:sum        dateTime
#     (dateTime1 dateTime2) math:difference duration
#     (dateTime duration)   math:difference dateTime
#   where xsd:duration uses a simplified seconds model.
# ======================================================================

@prefix : <http://example.org/allen#>.
@prefix math: <http://www.w3.org/2000/10/swap/math#>.
@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.

# -------------------------------------------------------------
# Optional interval completion rules (derive missing endpoints)
# -------------------------------------------------------------

# If start + duration given, derive end.
{
  ?I :start ?s; :duration ?d.
  ( ?s ?d ) math:sum ?e.
} => { ?I :end ?e. }.

# If end + duration given, derive start.
{
  ?I :end ?e; :duration ?d.
  ( ?e ?d ) math:difference ?s.
} => { ?I :start ?s. }.

# If start + end given, derive duration (seconds-only lexical output).
{
  ?I :start ?s; :end ?e.
  ( ?e ?s ) math:difference ?d.
} => { ?I :duration ?d. }.

# ----------------------------------------------------
# 7 "forward" base relations (the rest are converses)
# ----------------------------------------------------

# before(I,J) <=> end(I) < start(J)
{
  ?I :end ?eI.
  ?J :start ?sJ.
  ?eI math:lessThan ?sJ.
} => { ?I :before ?J. }.

# meets(I,J) <=> end(I) = start(J)
{
  ?I :end ?eI.
  ?J :start ?sJ.
  ?eI math:equalTo ?sJ.
} => { ?I :meets ?J. }.

# overlaps(I,J) <=> start(I) < start(J) < end(I) < end(J)
{
  ?I :start ?sI; :end ?eI.
  ?J :start ?sJ; :end ?eJ.
  ?sI math:lessThan ?sJ.
  ?sJ math:lessThan ?eI.
  ?eI math:lessThan ?eJ.
} => { ?I :overlaps ?J. }.

# starts(I,J) <=> start(I)=start(J) AND end(I) < end(J)
{
  ?I :start ?s; :end ?eI.
  ?J :start ?s; :end ?eJ.
  ?eI math:lessThan ?eJ.
} => { ?I :starts ?J. }.

# during(I,J) <=> start(J) < start(I) AND end(I) < end(J)
{
  ?I :start ?sI; :end ?eI.
  ?J :start ?sJ; :end ?eJ.
  ?sJ math:lessThan ?sI.
  ?eI math:lessThan ?eJ.
} => { ?I :during ?J. }.

# finishes(I,J) <=> end(I)=end(J) AND start(J) < start(I)
{
  ?I :start ?sI; :end ?e.
  ?J :start ?sJ; :end ?e.
  ?sJ math:lessThan ?sI.
} => { ?I :finishes ?J. }.

# equals(I,J) <=> start(I)=start(J) AND end(I)=end(J)
{
  ?I :start ?s; :end ?e.
  ?J :start ?s; :end ?e.
} => { ?I :equals ?J. }.

# ------------------------------------------------------
# 6 converse relations to complete the 13 base relations
# ------------------------------------------------------

{ ?I :before ?J. }   => { ?J :after ?I. }.
{ ?I :meets ?J. }    => { ?J :metBy ?I. }.
{ ?I :overlaps ?J. } => { ?J :overlappedBy ?I. }.
{ ?I :starts ?J. }   => { ?J :startedBy ?I. }.
{ ?I :during ?J. }   => { ?J :contains ?I. }.
{ ?I :finishes ?J. } => { ?J :finishedBy ?I. }.

# -------------------------------------------------------
# Sanity check: flag invalid intervals where start >= end
# -------------------------------------------------------

{
  ?I :start ?s; :end ?e.
  ?e math:lessThan ?s.
} => { ?I :invalidInterval true. }.

{
  ?I :start ?s; :end ?e.
  ?s math:equalTo ?e.
} => { ?I :invalidInterval true. }.

# ------------------------------------------
# Example intervals (xsd:dateTime endpoints)
# ------------------------------------------

# A canonical set to demonstrate multiple relations
:A :start "2026-02-18T10:00:00Z"^^xsd:dateTime;
   :end   "2026-02-18T12:00:00Z"^^xsd:dateTime.

:B :start "2026-02-18T13:00:00Z"^^xsd:dateTime;
   :end   "2026-02-18T15:00:00Z"^^xsd:dateTime.

# :A meets :C (A ends at 12:00; C starts at 12:00)
:C :start "2026-02-18T12:00:00Z"^^xsd:dateTime;
   :end   "2026-02-18T14:00:00Z"^^xsd:dateTime.

# :A overlaps :D (10:00 < 11:00 < 12:00 < 13:00)
# and :D meets :B (D ends at 13:00; B starts at 13:00)
:D :start "2026-02-18T11:00:00Z"^^xsd:dateTime;
   :end   "2026-02-18T13:00:00Z"^^xsd:dateTime.

# :A equals :E
:E :start "2026-02-18T10:00:00Z"^^xsd:dateTime;
   :end   "2026-02-18T12:00:00Z"^^xsd:dateTime.

# :F starts :A
:F :start "2026-02-18T10:00:00Z"^^xsd:dateTime;
   :end   "2026-02-18T11:00:00Z"^^xsd:dateTime.

# :G finishes :A
:G :start "2026-02-18T11:00:00Z"^^xsd:dateTime;
   :end   "2026-02-18T12:00:00Z"^^xsd:dateTime.

# :A during :H (and thus :H contains :A)
:H :start "2026-02-18T09:00:00Z"^^xsd:dateTime;
   :end   "2026-02-18T16:00:00Z"^^xsd:dateTime.

# -----------------------------------------------
# Examples using xsd:duration + derived endpoints
# -----------------------------------------------

# :I is given as start + duration; :end will be derived via math:sum
:I :start "2026-02-18T16:00:00Z"^^xsd:dateTime;
   :duration "PT2H"^^xsd:duration.

# :J meets :I once :I's end is derived (J ends at 16:00; I starts at 16:00)
:J :start "2026-02-18T15:00:00Z"^^xsd:dateTime;
   :end   "2026-02-18T16:00:00Z"^^xsd:dateTime.

# :K is given as start + duration; it will derive end=14:00 and thus finishes :C
:K :start "2026-02-18T13:30:00Z"^^xsd:dateTime;
   :duration "PT30M"^^xsd:duration.
```
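For readers who want to sanity-check the rules above outside a reasoner, the 13 base relations can be sketched over numeric endpoints (hours stand in for the xsd:dateTime comparisons; the function name is illustrative and not part of the package):

```javascript
// Classify the Allen relation between two closed-open intervals
// [i.start, i.end) and [j.start, j.end), mirroring the N3 rules:
// 7 "forward" relations plus their 6 converses.
function allenRelation(i, j) {
  if (i.end < j.start) return "before";
  if (j.end < i.start) return "after";
  if (i.end === j.start) return "meets";
  if (j.end === i.start) return "metBy";
  if (i.start === j.start && i.end === j.end) return "equals";
  if (i.start === j.start) return i.end < j.end ? "starts" : "startedBy";
  if (i.end === j.end) return j.start < i.start ? "finishes" : "finishedBy";
  if (j.start < i.start && i.end < j.end) return "during";
  if (i.start < j.start && j.end < i.end) return "contains";
  // Only proper overlaps remain: sI < sJ < eI < eJ, or its converse.
  return i.start < j.start ? "overlaps" : "overlappedBy";
}
```

Feeding in the example intervals from the file (:A = [10,12), :B = [13,15), :C = [12,14), :D = [11,13), :H = [9,16), ...) reproduces the comments in the data section: A before B, A meets C, A overlaps D, A during H.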
package/examples/deck/odrl-dpv-risk-ranked.md
ADDED

# ODRL + DPV Risk Assessment

## Ranked, explainable output from machine-readable “Terms of Service”

This deck explains how an agreement is modeled in **ODRL**, how risks are expressed in **DPV**, and how **N3 rules** connect the two into a ranked report. ([Playground][1])

---

## The idea

We want ToS / policy clauses that are:

- **Readable by humans** (the actual clause text)
- **Processable by machines** (permissions, prohibitions, duties, constraints)
- **Auditable** (why a risk was flagged)
- **Actionable** (what mitigations to add)
- **Prioritized** (ranked by score)

This example does that by combining **ODRL** (policy structure) + **DPV** (risk vocabulary) + **N3 rules** (logic). ([Playground][1])

---

## Why ODRL matters here

ODRL is used to encode the _normative_ structure of agreements:

- **Permission**: something is allowed
- **Prohibition**: something is disallowed
- **Duty**: something must be done (e.g., inform)
- **Constraint**: conditions like “noticeDays ≥ 14”

This turns ToS clauses into a structured “policy graph” you can reason over. ([ODRL Vocabulary & Expression 2.2][2])

---

## Why DPV matters here

DPV provides shared terms to describe privacy-related concepts, including:

- **dpv:Risk**
- consequences / impacts
- severity & risk level (via the DPV Risk extension)

So the output isn’t just “something seems bad”, but _typed, interoperable risks_ that other systems can understand. ([DPV Risk & Impact Assessment][3])

---

## What the file contains (5 parts)

1. **Consumer profile** (needs + importance weights)
2. **Agreement** as an **ODRL policy graph** + linked clause text
3. **Risk rules**: patterns over ODRL → create DPV risks + mitigations
4. **Score + severity/level** classification
5. **Ranked explainable output** strings

All in one Notation3 (N3) program. ([Playground][1])

---

## Part 1 — Consumer profile (what the user cares about)

The example profile defines four “needs”, each with an importance weight:

| Need                       | Meaning                                | Importance |
| -------------------------- | -------------------------------------- | ---------: |
| Data cannot be removed     | provider shouldn’t remove account/data |         20 |
| Changes need notice        | must notify ≥ 14 days                  |         15 |
| No sharing without consent | explicit consent required              |         12 |
| Data portability           | must allow export                      |         10 |

These weights later boost the risk score when a need is violated. ([Playground][1])

---

## Part 2 — Agreement modeled as ODRL

Inside a quoted graph (`:policyGraph { ... }`) the policy defines:

- **C1** Permission to remove account/data
- **C2** Permission to change terms with an **inform duty** and **noticeDays ≥ 3**
- **C3** Permission to share user data (no consent safeguard)
- **C4** Prohibition to export data (blocks portability)

Each ODRL rule links to a `:Clause` resource that stores the human text. ([Playground][1])

---

## ODRL clause pattern (how to read it)

A typical ODRL rule here looks like:

- **assigner**: provider
- **assignee**: consumer
- **action**: (removeAccount / shareData / changeTerms / exportData)
- **target**: (UserAccount / UserData / AgreementText)
- optional **duty** (e.g., inform)
- optional **constraint** (e.g., noticeDays threshold)

That structure is what the logic rules match on. ([Playground][1])

---

## Part 3 — The logic bridge: N3 rules

Each risk rule follows the same recipe:

1. **Match** a clause in the policy graph (`log:includes`)
2. **Detect missing safeguards** (`log:notIncludes`) or insufficient safeguards (comparisons)
3. **Create** a DPV risk instance (`dpv:Risk`) + risk source + explanation text
4. **Attach mitigations** as `dpv:RiskMitigationMeasure`
5. **Store** a numeric score seed (`:scoreRaw`)

Key N3 tools you’ll see:

- `log:includes` / `log:notIncludes` for scoped graph checks ([Notation3 Language][4])
- `log:skolem` to mint stable identifiers for risks/measures ([Playground][1])
- `string:format`, `math:sum`, `math:difference`, comparisons, etc. ([Playground][1])

---

## Deep dive: Rule R3 (share data without consent)

**Natural language translation:**

> If the agreement permits sharing user data, and the consumer requires “no sharing without explicit consent”, and the policy graph does **not** contain a consent constraint for that sharing permission, then generate a DPV risk “unwanted disclosure”, explain it, score it, and suggest adding a consent constraint.

This is exactly what `log:includes` + `log:notIncludes` is doing. ([Playground][1])

---

## R3 “missing safeguard” pattern (conceptual)

```n3
?G log:includes { :PermShareData odrl:action tosl:shareData . } .
?G log:notIncludes { :PermShareData odrl:constraint [
    odrl:leftOperand tosl:consent ;
    odrl:operator odrl:eq ;
    odrl:rightOperand true
] . } .
```

Result: create `dpv:Risk` + add mitigation “Add explicit consent constraint before data sharing.” ([Playground][1])

---
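The presence/absence check behind R3 can be sketched over a plain list of triples. This is a deliberately flattened model (the real N3 pattern uses a blank-node constraint with `odrl:leftOperand` / `odrl:operator` / `odrl:rightOperand`; the triple encoding and helper names here are illustrative):

```javascript
// A policy graph as an array of [subject, predicate, object] strings.
// log:includes corresponds to "some triple matches";
// log:notIncludes corresponds to "no triple matches".
function graphIncludes(graph, s, p, o) {
  return graph.some(([gs, gp, go]) => gs === s && gp === p && go === o);
}

// Rule R3, conceptually: sharing is permitted, but no consent
// constraint is attached to that permission. (The single
// ":ConsentConstraint" triple stands in for the full blank-node pattern.)
function sharingWithoutConsent(graph) {
  return (
    graphIncludes(graph, ":PermShareData", "odrl:action", "tosl:shareData") &&
    !graphIncludes(graph, ":PermShareData", "odrl:constraint", ":ConsentConstraint")
  );
}
```

When `sharingWithoutConsent` fires, the N3 version would then mint a `dpv:Risk` and attach the consent-constraint mitigation.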

## Part 4 — Scoring and DPV risk levels

### Scoring (simple and explainable)

Each rule computes:

- `:scoreRaw = base + needImportance`
- then caps at **100**

### Mapping score → severity/level

- **80–100** → High severity / High risk
- **50–79** → Moderate
- **0–49** → Low

This gives a consistent DPV-style classification (`dpv:hasSeverity`, `dpv:hasRiskLevel`). ([Playground][1])

---
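The score and level mapping above is easy to reproduce outside the reasoner; a minimal sketch (function names are illustrative, not from the package):

```javascript
// scoreRaw = base + needImportance, capped at 100.
function riskScore(base, needImportance) {
  return Math.min(base + needImportance, 100);
}

// Map a 0-100 score to the severity/level bands used in the deck.
function riskLevel(score) {
  if (score >= 80) return "High";
  if (score >= 50) return "Moderate";
  return "Low";
}
```

Plugging in the file's constants gives the ranking shown in the next section: C1 → 100 (High), C3 → 97 (High), C2 → 85 (High), C4 → 70 (Moderate).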

## What the example will rank (with the given numbers)

From the file’s constants + importance weights:

1. **C1 account removal w/o notice + inform** base 90 + 20 = 110 → capped **100** (High)
2. **C3 sharing w/o consent** base 85 + 12 = **97** (High)
3. **C2 terms change notice too short (3 < 14)** base 70 + 15 = **85** (High)
4. **C4 export prohibited (no portability)** base 60 + 10 = **70** (Moderate)

So the “worst” risks appear first. ([Playground][1])

---

## Part 5 — Ranked, explainable output

Instead of “printing during reasoning”, the program emits facts like:

- `log:outputString "..."`

Then Eyeling’s `--strings` / `-r` mode collects and sorts them deterministically. ([Handbook Inside Eyeling][5])

To force ranking, it uses an **inverse score key**:

- `inv = 1000 - score`
- smaller `inv` → higher score → printed first

That’s why high-risk items appear at the top. ([Playground][1])

---
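The inverse-key trick works because a zero-padded numeric prefix makes lexicographic order agree with numeric order, so a plain deterministic string sort puts the highest scores first. A sketch under that assumption (helper names are illustrative):

```javascript
// Build an output line prefixed with the zero-padded inverse score.
// Smaller inv = higher score, so ascending lexicographic order
// yields descending score.
function rankedLine(score, text) {
  const inv = 1000 - score;
  return String(inv).padStart(4, "0") + " " + text;
}

// Collect and sort lines, as a --strings-style pass would.
function rankReport(items) {
  return items
    .map(({ score, text }) => rankedLine(score, text))
    .sort(); // deterministic lexicographic sort
}
```

With the deck's four risks (scores 100, 97, 85, 70), the report comes out C1, C3, C2, C4 — worst first.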

## What makes it “explainable”

Every risk carries:

- **Which clause** it came from (`:aboutClause`, clauseId + text)
- **Which need** it violated (`:violatesNeed`)
- A human explanation string (`dct:description`, built with `string:format`)
- Suggested **mitigations**, each with a description and even a “patch-like” triple snippet (`:suggestAdd { ... }`)

So you can show a ranked report _and_ justify every item. ([Playground][1])

---

## Why this ODRL + DPV combo is powerful

- **ODRL** gives you the “contract logic” backbone (may/must/must-not + conditions)
- **DPV** gives you the “privacy/risk language” that tools can share
- **N3** glues them with rules that are:
  - easy to audit
  - easy to extend
  - deterministic to run

This is a practical path from “legal-ish text” → “structured policy” → “ranked risk insights”.

---

## How you’d extend this in real life

1. **Add more needs** (e.g., retention limits, security measures, breach notice)
2. **Model more clause types** in ODRL (more actions, constraints, duties)
3. **Write additional risk rules**, each with:
   - pattern match
   - missing/weak safeguard test
   - DPV risk type + mitigation
4. Tune scoring:
   - different bases per risk category
   - incorporate likelihood, data sensitivity, etc.

This stays explainable because it remains rule-based. ([Playground][1])

---

## Closing takeaway

This file is a compact demo of:

- ODRL as **machine-readable agreement structure**
- DPV as **machine-readable privacy risk output**
- N3 reasoning as the **transparent logic** connecting them
- A ranked report that’s **deterministic** and **explainable**

[1]: https://eyereasoner.github.io/eyeling/demo?url=https://raw.githubusercontent.com/eyereasoner/eyeling/refs/heads/main/examples/odrl-dpv-risk-ranked.n3 'Playground'
[2]: https://www.w3.org/TR/odrl-vocab/ 'ODRL Vocabulary & Expression 2.2'
[3]: https://dev.dpvcg.org/dpv/modules/risk 'Risk and Impact Assessment'
[4]: https://w3c.github.io/N3/spec/ 'Notation3 Language'
[5]: https://eyereasoner.github.io/eyeling/HANDBOOK 'Handbook Inside Eyeling'