iolanta 2.1.4__py3-none-any.whl → 2.1.7__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. iolanta/cli/main.py +44 -24
  2. iolanta/data/context.yaml +1 -1
  3. iolanta/data/iolanta.yaml +1 -1
  4. iolanta/data/textual-browser.yaml +1 -7
  5. iolanta/declension/data/declension.yamlld +1 -1
  6. iolanta/facets/locator/sparql/get-query-to-facet.sparql +5 -0
  7. iolanta/facets/locator.py +4 -7
  8. iolanta/facets/mkdocs_material_insiders_markdown/__init__.py +4 -0
  9. iolanta/facets/mkdocs_material_insiders_markdown/data/mkdocs_material_insiders_markdown.yamlld +20 -0
  10. iolanta/facets/mkdocs_material_insiders_markdown/facet.py +86 -0
  11. iolanta/facets/mkdocs_material_insiders_markdown/templates/datatype.jinja2.md +24 -0
  12. iolanta/facets/textual_browser/page_switcher.py +1 -1
  13. iolanta/facets/textual_graphs/__init__.py +6 -0
  14. iolanta/facets/textual_graphs/data/textual_graphs.yamlld +23 -0
  15. iolanta/facets/textual_graphs/facets.py +138 -0
  16. iolanta/facets/textual_graphs/sparql/graphs.sparql +5 -0
  17. iolanta/iolanta.py +4 -3
  18. iolanta/labeled_triple_set/data/labeled_triple_set.yamlld +1 -1
  19. iolanta/mcp/__init__.py +0 -0
  20. iolanta/mcp/cli.py +39 -0
  21. iolanta/mcp/prompts/nanopublication_assertion_authoring_rules.md +63 -0
  22. iolanta/mcp/prompts/rules.md +83 -0
  23. iolanta/mermaid/facet.py +0 -3
  24. iolanta/mermaid/mermaid.yamlld +7 -24
  25. iolanta/mermaid/models.py +4 -2
  26. iolanta/mermaid/sparql/ask-has-triples.sparql +3 -0
  27. iolanta/models.py +0 -3
  28. iolanta/namespaces.py +2 -2
  29. iolanta/parse_quads.py +2 -2
  30. iolanta/sparqlspace/inference/wikidata-prop-label.sparql +10 -0
  31. iolanta/sparqlspace/inference/wikidata-statement-label.sparql +27 -0
  32. iolanta/sparqlspace/processor.py +80 -78
  33. {iolanta-2.1.4.dist-info → iolanta-2.1.7.dist-info}/METADATA +6 -3
  34. {iolanta-2.1.4.dist-info → iolanta-2.1.7.dist-info}/RECORD +36 -22
  35. {iolanta-2.1.4.dist-info → iolanta-2.1.7.dist-info}/WHEEL +1 -1
  36. {iolanta-2.1.4.dist-info → iolanta-2.1.7.dist-info}/entry_points.txt +3 -1
  37. iolanta/sparqlspace/inference/wikibase-claim.sparql +0 -9
  38. iolanta/sparqlspace/inference/wikibase-statement-property.sparql +0 -9
iolanta/mcp/prompts/rules.md ADDED
@@ -0,0 +1,83 @@
+ # How to author Linked Data with Iolanta
+
+ **R00.** Follow this YAML-LD authoring workflow:
+ - Draft YAML-LD from user text
+ - Use the Iolanta MCP `render_uri` tool with `as_format: labeled-triple-set` to validate and get feedback
+ - Address the feedback, correct the YAML-LD document appropriately
+ - **After each change to the YAML-LD file, re-run the validation to check for new feedback**
+
+ **R01.** Acceptance Criteria:
+
+ - The document fits the original statement the user wanted to express;
+ - No negative feedback is received.
+
+ **R02.** Use YAML-LD format, which is JSON-LD in YAML syntax, for writing Linked Data.
+
+ **R03.** Always quote the @ character in YAML since it's reserved. Use `"@id":` instead of `@id:`.
+
+ **R04.** Prefer YAML-LD Convenience Context which maps @-keywords to $-keywords that don't need quoting: `"@type"` → `$type`, `"@id"` → `$id`, `"@graph"` → `$graph`.
+
+ **R05.** Use the dollar-convenience context with `@import` syntax instead of array syntax. This provides cleaner, more readable YAML-LD documents.
+
+ Example:
+ ```yaml
+ "@context":
+   "@import": "https://json-ld.org/contexts/dollar-convenience.jsonld"
+
+   schema: "https://schema.org/"
+   wd: "https://www.wikidata.org/entity/"
+
+   author:
+     "@id": "https://schema.org/author"
+     "@type": "@id"
+ ```
+
+ Instead of:
+ ```yaml
+ "@context":
+   - "https://json-ld.org/contexts/dollar-convenience.jsonld"
+   - schema: "https://schema.org/"
+   - wd: "https://www.wikidata.org/entity/"
+   - author:
+       "@id": "https://schema.org/author"
+       "@type": "@id"
+ ```
+
+ **R06.** Reduce quoting when not required by YAML syntax rules.
+
+ **R07.** Do not use mock URLs like `https://example.org`. Use resolvable URLs that preferably point to Linked Data.
+
+ **R08.** Use URIs that convey meaning and are renderable with Linked Data visualization tools. Search for appropriate URIs from sources like DBPedia or Wikidata.
+
+ **R09.** Use the Iolanta MCP `render_uri` tool with `as_format: mermaid` to generate Mermaid graph visualizations of Linked Data. If the user asks, you can save them to `.mmd` files for preview and documentation purposes.
+
+ **R10.** For language tags, use YAML-LD syntax: `rdfs:label: { $value: "text", $language: "lang" }` instead of Turtle syntax `"text"@lang`.
+
+ **R11.** Do not attach labels to external URIs that are expected to return Linked Data. Iolanta will fetch those URIs and render labels from the fetched data.
+
+ **R12.** Use `"@type": "@id"` in the context to coerce properties to IRIs instead of using `$id` wrappers in the document body.
+
+ **R13.** For software packages, use `schema:SoftwareApplication` as the main type rather than `codemeta:SoftwareSourceCode`.
+
+ **R14.** Use Wikidata entities for programming languages (e.g., `https://www.wikidata.org/entity/Q28865` for Python) instead of string literals.
+
+ **R15.** Use proper ORCID URIs for authors (e.g., `https://orcid.org/0009-0001-8740-4213`) and coerce them to IRIs in the context.
+
+ **R16.** For tools that provide both library and CLI functionality, classify as `schema:Tool` with `schema:applicationSubCategory: Command-line tool`.
+
+ **R17.** Use real, resolvable repository URLs (e.g., `https://github.com/iolanta-tech/python-yaml-ld`) instead of placeholder URLs.
+
+ **R18.** Include comprehensive metadata: name, description, author, license, programming language, version, repository links, and application category.
+
+ **R19.** Use standard vocabularies: schema.org, RDFS, RDF, DCTerms, FOAF, and CodeMeta when appropriate.
+
+ **R20.** Validate Linked Data using the Iolanta MCP `render_uri` tool with `as_format: labeled-triple-set` to check for URL-as-literal issues and proper IRI handling.
+
+ **R21.** Do not add `rdfs:label` to external URIs that are expected to return Linked Data. If a URI does not exist or cannot be resolved, do not mask this fact by adding labels. Instead, use a different, existing URI or document the issue with a comment.
+
+ **R22.** Define URI coercion in the context using `"@type": "@id"` rather than using `$id` wrappers in the document body. This keeps the document body clean and readable while ensuring proper URI handling.
+
+ **R23.** When defining local shortcuts for URIs in the context, use dashed-case (e.g., `appears-in`, `named-after`) instead of camelCase (e.g., `appearsIn`, `namedAfter`). This improves readability and follows common YAML conventions.
+
+ **R24.** Do not rely upon `owl:sameAs` or `schema:sameAs` to express identity relationships. This necessitates OWL inference at the side of the reader, which is performance-taxing and tends to create conflicts. Instead, use direct URIs for entities without relying on sameAs statements for identity.
+
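A minimal sketch of rules R02–R05 and R12 in practice (illustrative names and URIs only; this snippet is not part of the package): since YAML-LD is just JSON-LD in YAML syntax, loading the YAML and dumping it as JSON yields a document any JSON-LD processor can consume.

```python
# Hypothetical illustration of R02-R05 and R12; not a file from the iolanta distribution.
import json

import yaml  # PyYAML

YAML_LD_DOCUMENT = """
"@context":
  "@import": https://json-ld.org/contexts/dollar-convenience.jsonld

  schema: https://schema.org/

  author:
    "@id": schema:author
    "@type": "@id"  # R12: coerce the property to an IRI in the context

$id: https://pypi.org/project/iolanta/
$type: schema:SoftwareApplication
schema:name: iolanta
author: https://orcid.org/0009-0001-8740-4213
"""

# YAML-LD is JSON-LD in YAML syntax (R02): load the YAML, serialize it as JSON,
# and hand the result to any JSON-LD-aware tool for validation or rendering.
document = yaml.safe_load(YAML_LD_DOCUMENT)
print(json.dumps(document, indent=2))
```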
iolanta/mermaid/facet.py CHANGED
@@ -122,6 +122,3 @@ class Mermaid(Facet[str]):
  direct_children = self.construct_mermaid_for_graph(self.this)
  subgraphs = self.construct_mermaid_subgraphs()
  return str(Diagram(children=[*direct_children, *subgraphs]))
-
-
-
iolanta/mermaid/mermaid.yamlld CHANGED
@@ -1,42 +1,25 @@
  "@context":
  "@import": https://json-ld.org/contexts/dollar-convenience.jsonld
- vann: https://purl.org/vocab/vann/
- foaf: https://xmlns.com/foaf/0.1/
- owl: https://www.w3.org/2002/07/owl#
  iolanta: https://iolanta.tech/
- rdfs: "https://www.w3.org/2000/01/rdf-schema#"
- rdf: https://www.w3.org/1999/02/22-rdf-syntax-ns#
- dcterms: https://purl.org/dc/terms/
- dcam: https://purl.org/dc/dcam/
-
- iolanta:outputs:
- "@type": "@id"
-
- iolanta:when-no-facet-found:
- "@type": "@id"
+ rdfs: http://www.w3.org/2000/01/rdf-schema#

  $: rdfs:label
  →:
  "@type": "@id"
  "@id": iolanta:outputs

- ⊆:
- "@type": "@id"
- "@id": rdfs:subClassOf
-
- ⪯:
- "@type": "@id"
- "@id": iolanta:is-preferred-over
-
- ↦: iolanta:matches
- iolanta:hasDefaultFacet:
- "@type": "@id"
+ ↦:
+ "@id": iolanta:matches
+ "@type": iolanta:SPARQLText

  $id: pkg:pypi/iolanta#mermaid-graph
  $: Mermaid Graph
+
  →:
  $id: https://iolanta.tech/datatypes/mermaid
  $: Mermaid
+ $type: iolanta:OutputDatatype
+
  ↦:
  - ASK WHERE { GRAPH $this { ?s ?p ?o } }
  - ASK WHERE { $this iolanta:has-sub-graph ?subgraph }
iolanta/mermaid/models.py CHANGED
@@ -60,7 +60,9 @@ class MermaidLiteral(Documented, BaseModel, arbitrary_types_allowed=True, frozen

  @property
  def id(self) -> str:
- value_hash = hashlib.md5(str(self.literal.value).encode()).hexdigest()
+ # Use the lexical form of the literal, not rdflib's .value (which may be empty for typed literals),
+ # to ensure different texts get distinct node IDs in Mermaid.
+ value_hash = hashlib.md5(str(self.literal).encode()).hexdigest()
  return f'Literal-{value_hash}'


@@ -142,7 +144,7 @@ class Diagram(Documented, BaseModel):
  """
  graph {self.direction}
  {self.formatted_body}
- classDef predicate fill:none,stroke:none,stroke-width:0px;
+ classDef predicate fill:transparent,stroke:transparent,stroke-width:0px;
  """

  children: list[MermaidScalar | MermaidSubgraph]
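The `MermaidLiteral.id` change above hashes the literal's lexical form instead of `.value`. A small sketch of why, assuming rdflib's behaviour that `.value` is `None` for datatypes it cannot convert (the datatype URI below is only an example):

```python
# Sketch: hashing str(literal) keeps Mermaid node IDs distinct where hashing
# literal.value would collide.
import hashlib

from rdflib import Literal, URIRef

MERMAID = URIRef('https://iolanta.tech/datatypes/mermaid')

first = Literal('graph LR; a --> b', datatype=MERMAID)
second = Literal('graph LR; x --> y', datatype=MERMAID)

# rdflib cannot convert an unknown datatype, so .value is None for both literals.
print(first.value, second.value)  # None None


def node_id(literal: Literal) -> str:
    """Hash the lexical form, as iolanta 2.1.7 does."""
    return 'Literal-{0}'.format(hashlib.md5(str(literal).encode()).hexdigest())


assert node_id(first) != node_id(second)
```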
iolanta/mermaid/sparql/ask-has-triples.sparql ADDED
@@ -0,0 +1,3 @@
+ ASK WHERE {
+ GRAPH $this { ?s ?p ?o }
+ }
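The new `ask-has-triples.sparql` template binds `$this` to a graph name at query time. A sketch of the same pattern with plain rdflib, binding the variable through `initBindings` (assumed usage for illustration, not Iolanta's own API):

```python
# Sketch: evaluate an ASK template over a named graph, with $this bound at query time.
from rdflib import Dataset, Literal, URIRef
from rdflib.namespace import RDFS

ASK_HAS_TRIPLES = """
ASK WHERE {
  GRAPH $this { ?s ?p ?o }
}
"""

dataset = Dataset()
graph_uri = URIRef('https://iolanta.tech/visualizations/index.yaml')
named_graph = dataset.graph(graph_uri)
named_graph.add((graph_uri, RDFS.label, Literal('Visualization index')))

result = dataset.query(ASK_HAS_TRIPLES, initBindings={'this': graph_uri})
print(result.askAnswer)  # True: the named graph bound to $this is non-empty
```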
iolanta/models.py CHANGED
@@ -83,9 +83,6 @@ class TripleTemplate(NamedTuple):


  def _normalize_term(term: Node):
- if isinstance(term, URIRef) and term.startswith('http://'):
- return URIRef(re.sub('^http', 'https', term))
-
  return term


iolanta/namespaces.py CHANGED
@@ -20,11 +20,11 @@ class OWL(rdflib.OWL):


  class RDFS(rdflib.RDFS):
- _NS = rdflib.Namespace('https://www.w3.org/2000/01/rdf-schema#')
+ _NS = rdflib.Namespace('http://www.w3.org/2000/01/rdf-schema#')


  class RDF(rdflib.RDF):
- _NS = rdflib.Namespace('https://www.w3.org/1999/02/22-rdf-syntax-ns#')
+ _NS = rdflib.Namespace('http://www.w3.org/1999/02/22-rdf-syntax-ns#')


  class DCTERMS(rdflib.DCTERMS):
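The namespaces.py change flips these `_NS` overrides back from `https://` to the canonical `http://` forms. A sketch of the pattern in play (subclassing an rdflib namespace and overriding `_NS` changes every member IRI; `HttpsRDFS` here is a hypothetical stand-in for what 2.1.4 did):

```python
# Sketch of the _NS override pattern used in iolanta/namespaces.py.
import rdflib


class HttpsRDFS(rdflib.RDFS):
    """Mimics the 2.1.4 behaviour: force https:// IRIs for RDFS terms."""

    _NS = rdflib.Namespace('https://www.w3.org/2000/01/rdf-schema#')


print(rdflib.RDFS.label)  # http://www.w3.org/2000/01/rdf-schema#label (canonical)
print(HttpsRDFS.label)    # https://www.w3.org/2000/01/rdf-schema#label
```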
iolanta/parse_quads.py CHANGED
@@ -13,8 +13,8 @@ from iolanta.models import Quad
  from iolanta.namespaces import IOLANTA, META

  NORMALIZE_TERMS_MAP = MappingProxyType({
- URIRef(_url := 'https://www.w3.org/2002/07/owl'): URIRef(f'{_url}#'),
- URIRef(_url := 'https://www.w3.org/2000/01/rdf-schema'): URIRef(f'{_url}#'),
+ URIRef(_url := 'http://www.w3.org/2002/07/owl'): URIRef(f'{_url}#'),
+ URIRef(_url := 'http://www.w3.org/2000/01/rdf-schema'): URIRef(f'{_url}#'),
  })


iolanta/sparqlspace/inference/wikidata-prop-label.sparql ADDED
@@ -0,0 +1,10 @@
+ PREFIX wikibase: <http://wikiba.se/ontology#>
+
+ CONSTRUCT {
+ ?thing rdfs:label ?label .
+ }
+ WHERE {
+ ?entity
+ (wikibase:claim | wikibase:qualifier | wikibase:statementProperty | wikibase:statementValueNormalized) ?thing ;
+ rdfs:label ?label .
+ }
iolanta/sparqlspace/inference/wikidata-statement-label.sparql ADDED
@@ -0,0 +1,27 @@
+ PREFIX wikibase: <http://wikiba.se/ontology#>
+ PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+ PREFIX prov: <http://www.w3.org/ns/prov#>
+
+ CONSTRUCT {
+ ?statement rdfs:label ?label .
+ }
+ WHERE {
+ ?statement a wikibase:Statement .
+
+ # Find predicates from statement to values (excluding metadata predicates)
+ ?statement ?statementProp ?value .
+
+ FILTER(?statementProp != wikibase:rank)
+ FILTER(?statementProp != rdf:type)
+ FILTER(?statementProp != prov:wasDerivedFrom)
+ FILTER(?statementProp != iolanta:last-loaded-time)
+
+ # Handle entity values: get their label
+ {
+ ?value rdfs:label ?label .
+ } UNION {
+ # Handle literal values: use the literal directly
+ ?statement ?statementProp ?label .
+ FILTER(isLiteral(?label))
+ }
+ }
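These two rule files are evaluated by the new `_run_inference` method in processor.py (shown further below): each CONSTRUCT rule is re-run and its output stored in its own named graph. A minimal sketch of that pattern with plain rdflib, using a simplified stand-in for `wikidata-prop-label.sparql` and illustrative graph names:

```python
# Sketch of CONSTRUCT-rule inference into a dedicated named graph.
from rdflib import Dataset, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

WIKIBASE = Namespace('http://wikiba.se/ontology#')

RULE = """
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

CONSTRUCT { ?thing rdfs:label ?label }
WHERE {
    ?entity wikibase:claim ?thing ;
            rdfs:label ?label .
}
"""

dataset = Dataset(default_union=True)
wikidata = dataset.graph(URIRef('urn:example:wikidata'))
entity = URIRef('http://www.wikidata.org/entity/P31')
wikidata.add((entity, WIKIBASE.claim, URIRef('http://www.wikidata.org/prop/P31')))
wikidata.add((entity, RDFS.label, Literal('instance of')))

# Truncate the inference graph, then re-derive it from the rule's CONSTRUCT result.
inference_graph = dataset.graph(URIRef('inference:wikidata-prop-label'))
inference_graph.remove((None, None, None))
for triple in dataset.query(RULE).graph:
    inference_graph.add(triple)

print(len(inference_graph))  # 1: the claim property now carries the entity's label
```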
iolanta/sparqlspace/processor.py CHANGED
@@ -35,6 +35,7 @@ from iolanta.namespaces import ( # noqa: WPS235
  DCTERMS,
  FOAF,
  IOLANTA,
+ LOCAL,
  META,
  OWL,
  PROV,
@@ -46,6 +47,8 @@ from iolanta.parse_quads import NORMALIZE_TERMS_MAP, parse_quads

  REASONING_ENABLED = True
  OWL_REASONING_ENABLED = False
+
+ INFERENCE_DIR = Path(__file__).parent / 'inference'
  INDICES = [
  URIRef('https://iolanta.tech/visualizations/index.yaml'),
  ]
@@ -112,7 +115,7 @@ def _extract_from_mapping( # noqa: WPS213
  algebra: Mapping[str, Any],
  ) -> Iterable[URIRef | Variable]:
  match algebra.name:
- case 'SelectQuery' | 'AskQuery' | 'Project' | 'Distinct':
+ case 'SelectQuery' | 'AskQuery' | 'Project' | 'Distinct' | 'Slice':
  yield from extract_mentioned_urls(algebra['p'])

  case 'BGP':
@@ -159,7 +162,7 @@ def _extract_from_mapping( # noqa: WPS213

  case unknown_name:
  formatted_keys = ', '.join(algebra.keys())
- loguru.logger.error(
+ loguru.logger.info(
  'Unknown SPARQL expression '
  f'{unknown_name}({formatted_keys}): {algebra}',
  )
@@ -203,9 +206,6 @@ def normalize_term(term: Node) -> Node:
  * A dirty hack;
  * Based on hard code.
  """
- if isinstance(term, URIRef) and term.startswith('http://'):
- term = URIRef(re.sub('^http', 'https', term))
-
  return NORMALIZE_TERMS_MAP.get(term, term)


@@ -263,6 +263,12 @@ def _extract_nanopublication_uris(
  match algebra.name:
  case 'SelectQuery' | 'AskQuery' | 'Project' | 'Distinct' | 'Graph':
  yield from _extract_nanopublication_uris(algebra['p'])
+ case 'ConstructQuery':
+ # CONSTRUCT queries don't have nanopublication URIs in bindings
+ return
+
+ case 'Slice':
+ yield from _extract_nanopublication_uris(algebra['p'])

  case 'BGP':
  for retractor, retracts, retractee in algebra['triples']:
@@ -404,64 +410,6 @@ class GlobalSPARQLProcessor(Processor): # noqa: WPS338, WPS214
  self.graph.last_not_inferred_source = None
  self.graph._indices_loaded = False

- def _infer_with_sparql(self):
- """
- Infer triples with SPARQL rules.
-
- FIXME:
- * Code these rules into SHACL or some other RDF based syntax;
- * Make them available at iolanta.tech/visualizations/ and indexed.
- """
- inference = Path(__file__).parent / 'inference'
-
- file_names = {
- 'wikibase-claim.sparql': URIRef('local:inference-wikibase-claim'),
- 'wikibase-statement-property.sparql': URIRef(
- 'local:inference-statement-property',
- ),
- }
-
- for file_name, graph_name in file_names.items():
- start_time = time.time()
- self.graph.update(
- update_object=(inference / file_name).read_text(),
- )
- triple_count = len(self.graph.get_context(graph_name))
- duration = datetime.timedelta(seconds=time.time() - start_time)
- self.logger.info(
- f'{file_name}: {triple_count} triple(s), '
- f'inferred at {duration}',
- )
-
- def maybe_apply_inference(self):
- """Apply global OWL RL inference if necessary."""
- if not REASONING_ENABLED:
- return
-
- if self.graph.last_not_inferred_source is None:
- return
-
- with self.inference_lock:
- self._infer_with_sparql()
- self._infer_with_owl_rl()
- self.logger.info('Inference @ cyberspace: complete.')
-
- self.graph.last_not_inferred_source = None
-
- def _infer_with_owl_rl(self):
- if not OWL_REASONING_ENABLED:
- return
-
- reasoner = reasonable.PyReasoner()
- reasoner.from_graph(self.graph)
- inferred_triples = reasoner.reason()
- inference_graph_name = BNode('_:inference')
- inferred_quads = [
- (*triple, inference_graph_name)
- for triple in inferred_triples
- ]
- self.graph.addN(inferred_quads)
-
  def _maybe_load_indices(self):
  if not self.graph._indices_loaded:
  for index in INDICES:
@@ -486,7 +434,7 @@ class GlobalSPARQLProcessor(Processor): # noqa: WPS338, WPS214

  initBindings = initBindings or {}
  initNs = initNs or {}
-
+
  if isinstance(strOrQuery, Query):
  query = strOrQuery

@@ -494,12 +442,14 @@ class GlobalSPARQLProcessor(Processor): # noqa: WPS338, WPS214
  parse_tree = parseQuery(strOrQuery)
  query = translateQuery(parse_tree, base, initNs)

- self.load_retracting_nanopublications_by_query(
- query=query,
- bindings=initBindings,
- base=base,
- namespaces=initNs,
- )
+ # Only extract nanopublications from SELECT/ASK queries, not CONSTRUCT
+ if query.algebra.name != 'ConstructQuery':
+ self.load_retracting_nanopublications_by_query(
+ query=query,
+ bindings=initBindings,
+ base=base,
+ namespaces=initNs,
+ )

  query, urls = extract_mentioned_urls_from_query(
  query=query,
@@ -508,15 +458,24 @@ class GlobalSPARQLProcessor(Processor): # noqa: WPS338, WPS214
  namespaces=initNs,
  )

+ # Filter out inference graph names (they're not URLs to load)
+ urls = {url for url in urls if not str(url).startswith('inference:')}
+

  for url in urls:
  try:
  self.load(url)
- self.logger.error(f'Failed to load {url}: {err}', url, err)
+ self.logger.exception(f'Failed to load {url}: {err}', url, err)

- NanopubQueryPlugin(graph=self.graph)(query, bindings=initBindings)
+ # Run inference if there's new data since last inference run
+ # (after URLs are loaded so inference can use the loaded data)
+ if self.graph.last_not_inferred_source is not None:
+ self.logger.debug(f'Running inference, last_not_inferred_source: {self.graph.last_not_inferred_source}')
+ self._run_inference()
+ else:
+ self.logger.debug('Skipping inference, last_not_inferred_source is None')

- self.maybe_apply_inference()
+ NanopubQueryPlugin(graph=self.graph)(query, bindings=initBindings)

  is_anything_loaded = True
  while is_anything_loaded:
@@ -531,6 +490,7 @@ class GlobalSPARQLProcessor(Processor): # noqa: WPS338, WPS214
  return query_result

  for row in bindings:
+ break
  for _, maybe_iri in row.items():
  if (
  isinstance(maybe_iri, URIRef)
@@ -566,12 +526,10 @@ class GlobalSPARQLProcessor(Processor): # noqa: WPS338, WPS214

  def _follow_is_visualized_with_links(self, uri: URIRef):
  """Follow `dcterms:isReferencedBy` links."""
- self.logger.info(f'Following links for {uri}…')
  triples = self.graph.triples((uri, DCTERMS.isReferencedBy, None))
  for _, _, visualization in triples:
  if isinstance(visualization, URIRef):
  self.load(visualization)
- self.logger.info('Links followed.')

  def load( # noqa: C901, WPS210, WPS212, WPS213, WPS231
  self,
@@ -611,8 +569,6 @@ class GlobalSPARQLProcessor(Processor): # noqa: WPS338, WPS214
  source_uri = normalize_term(source)
  if self._is_loaded(source_uri):
  return Skipped()
- else:
- self.logger.info(f'{source_uri} is not loaded yet')

  # FIXME This is definitely inefficient. However, python-yaml-ld caches
  # the document, so the performance overhead is not super high.
@@ -718,8 +674,9 @@ class GlobalSPARQLProcessor(Processor): # noqa: WPS338, WPS214
  for quad in quads
  })
  self.logger.info(
- f'{source} | loaded successfully into graphs: {into_graphs}',
+ f'{source} | loaded {len(quads)} triples into graphs: {into_graphs}',
  )
+
  return Loaded()

  def resolve_term(self, term: Node, bindings: dict[str, Node]):
@@ -732,6 +689,51 @@ class GlobalSPARQLProcessor(Processor): # noqa: WPS338, WPS214

  return term

+ def _run_inference(self): # noqa: WPS231
+ """
+ Run inference queries from the inference directory.
+
+ For each SPARQL file in the inference directory:
+ 1. Truncate the named graph `local:inference-{filename}`
+ 2. Execute the CONSTRUCT query
+ 3. Insert the resulting triples into that graph
+ """
+ with self.inference_lock:
+ for inference_file in INFERENCE_DIR.glob('*.sparql'):
+ filename = inference_file.stem  # filename without .sparql extension
+ inference_graph = URIRef(f'inference:{filename}')
+
+ # Truncate the inference graph
+ context = self.graph.get_context(inference_graph)
+ context.remove((None, None, None))
+
+ # Read and execute the CONSTRUCT query
+ query_text = inference_file.read_text()
+ result = self.graph.query(query_text)
+
+ # CONSTRUCT queries return a SPARQLResult with a graph attribute
+ result_graph = result.get('graph') if isinstance(result, dict) else result.graph
+ self.logger.debug(f'Inference {filename}: result_graph is {result_graph}, type: {type(result_graph)}')
+ if result_graph is not None:
+ inferred_quads = [
+ (s, p, o, inference_graph)
+ for s, p, o in result_graph
+ ]
+ self.logger.debug(f'Inference {filename}: generated {len(inferred_quads)} quads')
+
+ if inferred_quads:
+ self.graph.addN(inferred_quads)
+ self.logger.info(
+ 'Inference {filename}: added {count} triples',
+ filename=filename,
+ count=len(inferred_quads),
+ )
+ else:
+ self.logger.debug(f'Inference {filename}: result_graph is None')
+
+ # Clear the flag after running inference
+ self.graph.last_not_inferred_source = None
+
  def load_retracting_nanopublications_by_query( # noqa: WPS231
  self,
  query: Query,
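The `query()` changes above branch on the parsed query's algebra: CONSTRUCT queries skip the nanopublication lookup, and the `Slice` node that `LIMIT`/`OFFSET` introduce is now handled by the recursive extractors. A sketch of that algebra inspection with rdflib (queries below are arbitrary examples):

```python
# Sketch: distinguish CONSTRUCT from SELECT/ASK by looking at the algebra name,
# mirroring the checks added in processor.py.
from rdflib.plugins.sparql import prepareQuery

select_query = prepareQuery('SELECT ?s WHERE { ?s ?p ?o } LIMIT 10')
construct_query = prepareQuery('CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }')

print(select_query.algebra.name)     # SelectQuery; the LIMIT shows up as a nested Slice node
print(construct_query.algebra.name)  # ConstructQuery

for query in (select_query, construct_query):
    if query.algebra.name != 'ConstructQuery':
        print('would run the nanopublication lookup for', query.algebra.name)
```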
{iolanta-2.1.4.dist-info → iolanta-2.1.7.dist-info}/METADATA CHANGED
@@ -1,6 +1,6 @@
- Metadata-Version: 2.3
+ Metadata-Version: 2.4
  Name: iolanta
- Version: 2.1.4
+ Version: 2.1.7
  Summary: Semantic Web browser
  License: MIT
  Author: Anatoly Scherbakov
@@ -10,6 +10,7 @@ Classifier: License :: OSI Approved :: MIT License
  Classifier: Programming Language :: Python :: 3
  Classifier: Programming Language :: Python :: 3.12
  Classifier: Programming Language :: Python :: 3.13
+ Classifier: Programming Language :: Python :: 3.14
  Provides-Extra: all
  Requires-Dist: boltons (>=24.0.0)
  Requires-Dist: classes (>=0.4.0)
@@ -17,9 +18,11 @@ Requires-Dist: deepmerge (>=0.1.1)
  Requires-Dist: diskcache (>=5.6.3)
  Requires-Dist: documented (>=0.1.1)
  Requires-Dist: dominate (>=2.6.0)
+ Requires-Dist: fastmcp (>=2.12.4,<3.0.0)
  Requires-Dist: funcy (>=2.0)
  Requires-Dist: loguru (>=0.7.3)
  Requires-Dist: more-itertools (>=9.0.0)
+ Requires-Dist: nanopub (>=2.1.0,<3.0.0)
  Requires-Dist: owlrl (>=6.0.2)
  Requires-Dist: oxrdflib (>=0.4.0)
  Requires-Dist: packageurl-python (>=0.17.5)
@@ -31,7 +34,7 @@ Requires-Dist: rich (>=13.3.1)
  Requires-Dist: textual (>=0.83.0)
  Requires-Dist: typer (>=0.9.0)
  Requires-Dist: watchfiles (>=1.0.4)
- Requires-Dist: yaml-ld (>=1.1.12)
+ Requires-Dist: yaml-ld (>=1.1.15)
  Requires-Dist: yarl (>=1.9.4)
  Description-Content-Type: text/markdown
40