mustrd 0.1.8__tar.gz → 0.2.1__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {mustrd-0.1.8 → mustrd-0.2.1}/LICENSE +21 -21
- {mustrd-0.1.8 → mustrd-0.2.1}/PKG-INFO +4 -2
- {mustrd-0.1.8 → mustrd-0.2.1}/README.adoc +58 -58
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/README.adoc +210 -201
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/TestResult.py +136 -136
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/logger_setup.py +48 -48
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/model/catalog-v001.xml +5 -5
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/model/mustrdShapes.ttl +253 -253
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/model/mustrdTestShapes.ttl +24 -24
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/model/ontology.ttl +494 -494
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/model/test-resources/resources.ttl +60 -60
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/model/triplestoreOntology.ttl +174 -174
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/model/triplestoreshapes.ttl +41 -41
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/mustrd.py +787 -788
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/mustrdAnzo.py +236 -236
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/mustrdGraphDb.py +125 -125
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/mustrdRdfLib.py +56 -56
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/mustrdTestPlugin.py +327 -328
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/namespace.py +125 -125
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/run.py +106 -106
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/spec_component.py +690 -682
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/steprunner.py +166 -166
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/templates/md_ResultList_leaf_template.jinja +18 -18
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/templates/md_ResultList_template.jinja +8 -8
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/templates/md_stats_template.jinja +2 -2
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/test/test_mustrd.py +4 -4
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/utils.py +38 -38
- {mustrd-0.1.8 → mustrd-0.2.1}/pyproject.toml +55 -54
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/__init__.py +0 -0
- {mustrd-0.1.8 → mustrd-0.2.1}/mustrd/model/mustrdTestOntology.ttl +0 -0
{mustrd-0.1.8 → mustrd-0.2.1}/LICENSE
@@ -1,21 +1,21 @@
All 21 lines of the MIT license are removed and re-added with identical text:

MIT License

Copyright (c) 2023 Semantic Partners Ltd

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
{mustrd-0.1.8 → mustrd-0.2.1}/PKG-INFO
@@ -1,17 +1,18 @@
 Metadata-Version: 2.1
 Name: mustrd
-Version: 0.1.8
+Version: 0.2.1
 Summary: A Spec By Example framework for RDF and SPARQL, Inspired by Cucumber.
 Home-page: https://github.com/Semantic-partners/mustrd
 License: MIT
 Author: John Placek
 Author-email: john.placek@semanticpartners.com
-Requires-Python:
+Requires-Python: >=3.11.7,<4.0.0
 Classifier: Framework :: Pytest
 Classifier: License :: OSI Approved :: MIT License
 Classifier: Natural Language :: English
 Classifier: Programming Language :: Python
 Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.12
 Classifier: Topic :: Software Development :: Quality Assurance
 Classifier: Topic :: Software Development :: Testing
 Classifier: Topic :: Utilities
@@ -22,6 +23,7 @@ Requires-Dist: colorlog (>=6.7.0,<7.0.0)
 Requires-Dist: coverage (==7.4.3)
 Requires-Dist: flake8 (==7.0.0)
 Requires-Dist: multimethods-py (>=0.5.3,<0.6.0)
+Requires-Dist: numpy (>=1.26.0,<2.0.0)
 Requires-Dist: openpyxl (>=3.1.2,<4.0.0)
 Requires-Dist: pandas (>=1.5.2,<2.0.0)
 Requires-Dist: pyanzo (>=3.3.10,<4.0.0)
{mustrd-0.1.8 → mustrd-0.2.1}/README.adoc
@@ -1,58 +1,58 @@
All 58 lines are removed and re-added with identical text:

== Mustrd

// tag::body[]

image::https://github.com/Semantic-partners/mustrd/raw/python-coverage-comment-action-data/badge.svg[Coverage badge,link="https://github.com/Semantic-partners/mustrd/tree/python-coverage-comment-action-data"]

=== Why?

How do you know your SPARQL, whether it's in a pipeline or a query, is doing what you intend?

As much as we love RDF, SPARQL and Semantic Tech in general, we found a small gap in the tooling that would give us that certainty.

We missed the powerful testing frameworks that have evolved in imperative languages to help ensure you've written code that does what you think it should.

We wanted to be able to:

* set up data scenarios and ensure queries work as expected
* set up edge cases for queries and ensure they still work
* isolate small SPARQL enrichment / transformation steps and know we're only INSERTing what we intend

Enter MustRD.

=== What?

MustRD is a Spec-By-Example ontology, with a reference Python implementation, inspired by the likes of Cucumber.

It's designed to be triplestore/SPARQL engine agnostic (aren't open standards *wonderful*!).

=== What it is NOT
MustRD has nothing to do with SHACL, nor is it an alternative to it. In fact, we use SHACL for some of our features.

SHACL provides validation around data.

MustRD provides validation around data transformations.

=== How?
You define your specs in ttl or trig files.
We use the SBE approach of *Given*, *When*, *Then* to define a starting dataset, an action, and a set of expectations. We build up a set of data.
Then, depending on whether your SPARQL is a CONSTRUCT, SELECT or an INSERT/DELETE, we run it and compare the results against a set of expectations (*Then*) that are defined in the same way as a *Given*.
Alternatively, you could define your *Then*

* as an explicit ASK, or
* as a SELECT; or
* as a set of expectations in a higher-order expectation language, like you will be used to from various platforms.


=== When?

Soon. It's a work in progress, and we're building the things *we* need for the projects we work on at multiple clients, with multiple vendor stacks.
We already think it's useful, but it might not meet *your* needs out of the box.

We invite you to try it, see where it doesn't fit, and raise an issue, or even better, a PR! If you need something custom, please check out our consultancy rates, and we might be able to prioritise a new feature for you.

== Support
We're a specialist consultancy in Semantic Tech. We're putting this out in case it's useful, but if you need more support, kindly contact our business team at info@semanticpartners.com

// tag::body[]
include::src/README.adoc[tags=body]
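The How? section above describes a spec as a *Given*, a *When* and a *Then* written in Turtle. As an illustrative sketch assembled from the building blocks documented in mustrd/README.adoc below (the test-data namespace and the spec's subject are hypothetical, and any type declaration required for the spec subject by mustrdShapes.ttl is omitted), a minimal SELECT spec might look roughly like this:

----
@prefix must: <https://mustrd.com/model/> .
# rdf: and test-data: prefix declarations are omitted for brevity; test-data: is a hypothetical namespace
test-data:a_minimal_select_spec
    must:given [ a must:StatementsDataset ;
        must:hasStatement [ a rdf:Statement ;
            rdf:subject test-data:sub ;
            rdf:predicate test-data:pred ;
            rdf:object test-data:obj ; ] ; ] ;
    must:when [ a must:TextSparqlSource ;
        must:queryText "SELECT ?s ?p ?o WHERE { ?s ?p ?o }" ;
        must:queryType must:SelectSparql ] ;
    must:then [ a must:TableDataset ;
        must:hasRow [ must:hasBinding [ must:variable "s" ;
                must:boundValue test-data:sub ; ] ,
            [ must:variable "p" ;
                must:boundValue test-data:pred ; ] ,
            [ must:variable "o" ;
                must:boundValue test-data:obj ; ] ; ] ; ] .
----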
{mustrd-0.1.8 → mustrd-0.2.1}/mustrd/README.adoc
@@ -1,201 +1,210 @@
The developer README is replaced wholesale (201 lines removed, 210 added). The new version reads:

= Developer helper
// tag::body[]

== Try it out

Ensure you have python3 installed before you begin.
To install the necessary dependencies, run the following command from the project root.

`pip3 install -r requirements.txt`

Run the following command to execute the accompanying test specifications.

`python3 src/run.py -v -p "test/test-specs" -g "test/data" -w "test/data" -t "test/data"`

You will see some warnings. Do not worry, some test specifications are invalid and intentionally skipped.

For a brief explanation of the meaning of these options, use the help option.

`python3 src/run.py --help`

== Run the tests

Run `pytest` from the project root.

== Creating your own Test Specifications

If you have got this far then you are probably ready to create your own specifications to test your application SPARQL queries. These will be executed against the default RDFLib triplestore unless you configure one or more alternatives. The instructions for this are included in <<Configuring external triplestores>> below.

=== Paths
All paths are considered relative, so that mustrd tests can be versioned and shared easily.
To get an absolute path from a relative path in a spec file, we prefix it with the first existing result in:
1) the path where the spec is located
2) spec_path, defined in the mustrd test configuration files or as a command-line argument
3) data_path, defined in the mustrd test configuration files or as a command-line argument
4) the mustrd folder, for the default resources packaged with the mustrd source (this will be in the venv when mustrd is called as a library)
We intentionally use the same method to build paths in all spec components to avoid confusion.
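As an illustration of this resolution order (the file names here are hypothetical, not taken from the package), a relative must:file reference in a spec under test/test-specs/ would be tried against that folder first:

----
# hypothetical spec file: test/test-specs/my_spec.mustrd.ttl
must:given [ a must:FileDataset ;
             # "data/given.ttl" resolves to test/test-specs/data/given.ttl if that file exists,
             # then against spec_path, then data_path, then the installed mustrd folder
             must:file "data/given.ttl" ] ;
----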

=== Givens
These are used to specify the dataset against which the SPARQL statement will be run.
They can be generated from external sources such as an existing graph, or a file or folder containing serialised RDF. It is also possible to specify the dataset as reified RDF directly in the test step. Tabular data sources such as CSV files or TableDatasets are not currently supported.
Multiple given statements can be supplied, and the data is combined into a single dataset for the test.

* *InheritedDataset* - No data is specified; the existing data in the target graph is retained rather than being replaced with a defined set. This can be used to chain tests together or to perform checks on application data.
----
must:given [ a must:InheritedDataset ] ;
----
* *FileDataset* - The dataset is a local file containing serialised RDF. The formats supported are the same as those for the RDFLib Graph().parse function, i.e. Turtle (.ttl), NTriples (.nt), N3 (.n3), RDF/XML (.xml) and TriX. The data is used to replace any existing content in the target graph for the test.
----
must:given [ a must:FileDataset ;
    must:file "test/data/given.ttl" ] ;
----
* *FolderDataset* - Very similar to the file dataset, except that the location of the file is passed to the test specification as an argument from the caller, i.e. the -g option on the command line.
----
must:given [ a must:FolderDataset ;
    must:fileName "given.ttl" ] ;
----
* *StatementsDataset* - The dataset is defined within the test in the form of reified RDF statements, e.g.
----
must:given [ a must:StatementsDataset ;
    must:hasStatement [ a rdf:Statement ;
        rdf:subject test-data:sub ;
        rdf:predicate test-data:pred ;
        rdf:object test-data:obj ; ] ; ] ;
----
* *AnzoGraphmartDataset* - The dataset is contained in an Anzo graphmart and needs to be retrieved from there. The Anzo instance containing the dataset must be indicated in the configuration file, as documented in <<Configuring external triplestores>>.
----
must:given [ a must:AnzoGraphmartDataset ;
    must:graphmart "http://cambridgesemantics.com/Graphmart/43445aeadf674e09818c81cf7049e46a" ;
    must:layer "http://cambridgesemantics.com/Layer/33b97531d7e148748b75e4e3c6bbf164" ;
] .
----
=== Whens
These are the actual SPARQL queries that you wish to test. Queries can be supplied as a string directly in the test or as a file containing the query. Only single When statements are currently supported.
Mustrd does not derive the query type from the actual query, so it is necessary to provide this in the specification. Supported query types are SelectSparql, ConstructSparql and UpdateSparql (an illustrative UpdateSparql spec follows the examples below).

* *TextSparqlSource* - The SPARQL query is included in the test as a (multiline) string value for the property queryText,
e.g.
----
must:when [ a must:TextSparqlSource ;
    must:queryText "SELECT ?s ?p ?o WHERE { ?s ?p ?o }" ;
    must:queryType must:SelectSparql ] ;
----

* *FileSparqlSource* - The SPARQL query is contained in a local file,
e.g.
----
must:when [ a must:FileSparqlSource ;
    must:file "test/data/construct.rq" ;
    must:queryType must:ConstructSparql ; ] ;
----
* *FolderSparqlSource* - Similar to the file SPARQL source, except that the location of the file is passed to the test specification as an argument from the caller, i.e. the -w option on the command line.
----
must:when [ a must:FolderSparqlSource ;
    must:fileName "construct.rq" ;
    must:queryType must:ConstructSparql ; ] ;
----
* *AnzoQueryBuilderDataset* - The query is saved in the Query Builder of an Anzo instance and needs to be retrieved from there. The Anzo instance containing the query must be indicated in the configuration file, as documented in <<Configuring external triplestores>>.
----
must:when [ a must:AnzoQueryBuilderDataset ;
    must:queryFolder "Mustrd" ;
    must:queryName "mustrd-construct" ;
    must:queryType must:ConstructSparql
] ;
----
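UpdateSparql is listed above as a supported query type but has no example of its own. The following is an illustrative sketch only: the test-data terms and the update query are invented for this example, the spec's own subject and type declaration are omitted, and whether the expected dataset is the full post-update graph (as shown here) or only the inserted triples is an assumption.
----
must:given [ a must:StatementsDataset ;
    must:hasStatement [ a rdf:Statement ;
        rdf:subject test-data:sub ;
        rdf:predicate test-data:pred ;
        rdf:object test-data:obj ; ] ; ] ;
must:when [ a must:TextSparqlSource ;
    # prefix declarations are omitted from the query string for readability
    must:queryText "INSERT { ?s test-data:derivedFrom ?o } WHERE { ?s test-data:pred ?o }" ;
    must:queryType must:UpdateSparql ] ;
must:then [ a must:StatementsDataset ;
    # assumed: the expected graph after the update, i.e. the given triple plus the inserted one
    must:hasStatement [ a rdf:Statement ;
        rdf:subject test-data:sub ;
        rdf:predicate test-data:pred ;
        rdf:object test-data:obj ; ] ,
      [ a rdf:Statement ;
        rdf:subject test-data:sub ;
        rdf:predicate test-data:derivedFrom ;
        rdf:object test-data:obj ; ] ; ] .
----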
=== Thens
Then clauses are used to specify the expected result dataset for the test. These datasets can be specified in the same way as <<Givens>>, except that an extended set of dataset types is supported. For the tabular results of SELECT queries, TabularDatasets are required and can again be in a file format such as CSV, or an inline table within the specification.
* *FileDataset* - The dataset is a local file containing serialised RDF or tabular data. The formats supported are the same as those for the RDFLib Graph().parse function, i.e. Turtle (.ttl), NTriples (.nt), N3 (.n3), RDF/XML (.xml) and TriX, as well as tabular formats (.csv, .xls, .xlsx).
----
must:then [ a must:FileDataset ;
    must:file "test/data/thenSuccess.xlsx" ] .
----
----
must:then [ a must:FileDataset ;
    must:file "test/data/thenSuccess.nt" ] .
----
* *FolderDataset* - Very similar to the file dataset, except that the location of the file is passed to the test specification as an argument from the caller, i.e. the -t option on the command line.
----
must:then [ a must:FolderDataset ;
    must:fileName "then.ttl" ] ;
----
* *StatementsDataset* - The dataset is defined within the test in the form of reified RDF statements, e.g.
----
must:then [ a must:StatementsDataset ;
    must:hasStatement [ a rdf:Statement ;
        rdf:subject test-data:sub ;
        rdf:predicate test-data:pred ;
        rdf:object test-data:obj ; ] ; ] ;
----
* *TableDataset* - The contents of the table are defined in RDF syntax within the specification,
e.g. a table dataset consisting of a single row and three columns.
----
must:then [ a must:TableDataset ;
    must:hasRow [ must:hasBinding [
            must:variable "s" ;
            must:boundValue test-data:sub ; ] ,
        [ must:variable "p" ;
            must:boundValue test-data:pred ; ] ,
        [ must:variable "o" ;
            must:boundValue test-data:obj ; ] ;
    ] ; ] .
----
* *OrderedTableDataset* - An extension of the TableDataset which allows the row order of the dataset to be specified using the SHACL order property, to support the ORDER BY clause in SPARQL SELECT queries,
e.g. a table dataset consisting of two ordered rows and three columns.
----
must:then [ a must:OrderedTableDataset ;
    must:hasRow [ sh:order 1 ;
        must:hasBinding [ must:variable "s" ;
            must:boundValue test-data:sub1 ; ] ,
        [ must:variable "p" ;
            must:boundValue test-data:pred1 ; ] ,
        [ must:variable "o" ;
            must:boundValue test-data:obj1 ; ] ; ] ,
      [ sh:order 2 ;
        must:hasBinding [ must:variable "s" ;
            must:boundValue test-data:sub2 ; ] ,
        [ must:variable "p" ;
            must:boundValue test-data:pred2 ; ] ,
        [ must:variable "o" ;
            must:boundValue test-data:obj2 ; ] ; ] ;
] .
----
* *EmptyTable* - Used to indicate that we are expecting an empty result from a SPARQL SELECT query.
----
must:then [ a must:EmptyTable ] .
----
* *EmptyGraph* - Similar to EmptyTable, but used to indicate that we are expecting an empty graph as the result of a SPARQL query.
----
must:then [ a must:EmptyGraph ] .
----
* *AnzoGraphmartDataset* - The dataset is contained in an Anzo graphmart and needs to be retrieved from there. The Anzo instance containing the dataset must be indicated in the configuration file, as documented in <<Configuring external triplestores>>.
----
must:then [ a must:AnzoGraphmartDataset ;
    must:graphmart "http://cambridgesemantics.com/Graphmart/43445aeadf674e09818c81cf7049e46a" ;
    must:layer "http://cambridgesemantics.com/Layer/33b97531d7e148748b75e4e3c6bbf164" ;
] .
----
== Configuring external triplestores
The configuration file for external triplestores can be located outside of the project root, as it is specified as an argument to the mustrd module or as the -c option on the command line when running run.py.

It is expected that the external triplestore is already running, as mustrd is not configured to start it.

Currently, the supported external triplestores are GraphDB and Anzo.

The configuration file should be serialised RDF. An example in Turtle format is included below for GraphDB; for Anzo, the *must:repository* value is replaced with a *must:gqeURI* (see the sketch after this example).
----
@prefix must: <https://mustrd.com/model/> .
must:GraphDbConfig1 a must:GraphDbConfig ;
    must:url "http://localhost" ;
    must:port "7200" ;
    must:inputGraph "http://localhost:7200/test-graph" ;
    must:repository "mustrd" .
----
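For Anzo, the paragraph above only states that must:repository is swapped for must:gqeURI; the class name must:AnzoConfig and the exact property set in this sketch are assumptions for illustration, not confirmed by this diff — check triplestoreshapes.ttl and triplestoreOntology.ttl in this package for the authoritative terms.
----
@prefix must: <https://mustrd.com/model/> .
# illustrative only: the must:AnzoConfig class name is assumed, and the gqeURI value is a placeholder
must:AnzoConfig1 a must:AnzoConfig ;
    must:url "http://localhost" ;
    must:port "8080" ;
    must:gqeURI "http://cambridgesemantics.com/GqeDatasource/example" .
----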
To avoid versioning secrets when you want to version the triplestore configuration (for example, when you want to run mustrd in CI), you have to configure the username and password in a separate file.
This file must be named like the triplestore configuration file, but with "_secrets" just before the extension, for example triplestores.ttl -> triplestores_secrets.ttl.
Subjects in the two files must match; there is no need to redefine the type, for example:
----
@prefix must: <https://mustrd.com/model/> .
must:GraphDbConfig1 must:username 'test' ;
    must:password 'test' .
----

== Additional Notes for Developers
Mustrd remains very much under development, and it is anticipated that additional functionality and triplestore support will be added over time. The project uses https://python-poetry.org/docs/[Poetry] to manage dependencies, so it is necessary to have this installed to contribute to the project. The link contains instructions on how to install and use it.
As the project is actually built from the requirements.txt file at the project root, it is necessary to export the dependencies from Poetry to this file before committing and pushing changes to the repository, using the following command.

`poetry export -f requirements.txt --without-hashes > requirements.txt`

// end::body[]