triadic-engine 0.2.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,38 @@
+ Business Source License 1.1
+
+ Licensed Work: Triadic Neurosymbolic Engine
+ Licensor: J. Arturo Ornelas Brand <arturoornelas62@gmail.com>
+ Change Date: 2030-03-21
+ Change License: GNU Affero General Public License v3.0
+
+ Additional Use Grant:
+ Production use of the Licensed Work is permitted without a commercial
+ license for: (a) individual persons, (b) academic and research
+ institutions, and (c) non-profit organizations.
+
+ All other production use — including by for-profit companies,
+ government contractors, and commercial entities — requires a
+ commercial participation agreement with the Licensor.
+
+ All users are subject to the contribution terms in TERMS.md.
+
+ Terms:
+ The Licensor hereby grants you the right to copy, modify, create
+ derivative works, redistribute, and make non-production use of the
+ Licensed Work. The Licensor may make an Additional Use Grant above,
+ permitting limited production use.
+
+ Effective on the Change Date, or the fourth anniversary of the first
+ publicly available distribution of a specific version of the Licensed
+ Work under this License, whichever comes first, the Licensor hereby
+ grants you rights under the terms of the Change License, and the
+ rights granted in the paragraph above terminate.
+
+ If your use of the Licensed Work does not comply with the requirements
+ currently in effect as described in this License, you must purchase a
+ commercial license from the Licensor, its affiliated entities, or
+ authorized resellers, or you must refrain from using the Licensed Work.
+
+ For commercial licensing inquiries, contact: arturoornelas62@gmail.com
+
+ Full BUSL-1.1 text: https://mariadb.com/bsl11/
@@ -0,0 +1,322 @@
+ Metadata-Version: 2.4
+ Name: triadic-engine
+ Version: 0.2.0
+ Summary: Deterministic AI auditing and semantic validation via prime factorization — 28.4x faster than cosine, fully explainable
+ Author-email: "J. Arturo Ornelas Brand" <arturoornelas62@gmail.com>
+ License: BUSL-1.1
+ Project-URL: Repository, https://github.com/arturoornelasb/Triadic-Neurosymbolic-Engine
+ Project-URL: Paper, https://doi.org/10.5281/zenodo.19205805
+ Keywords: neurosymbolic,semantic-search,prime-factorization,ai-auditing,embeddings,explainable-ai,knowledge-graph,triadic,deterministic,model-evaluation
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: Intended Audience :: Science/Research
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Classifier: Topic :: Scientific/Engineering :: Mathematics
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Operating System :: OS Independent
+ Classifier: License :: Other/Proprietary License
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: numpy>=1.20
+ Requires-Dist: networkx>=2.5
+ Requires-Dist: scikit-learn>=1.0
+ Requires-Dist: sentence-transformers>=2.2
+ Requires-Dist: sympy>=1.12
+ Requires-Dist: pandas>=2.0.0
+ Provides-Extra: dashboard
+ Requires-Dist: streamlit>=1.31.0; extra == "dashboard"
+ Requires-Dist: streamlit-agraph>=0.0.45; extra == "dashboard"
+ Requires-Dist: plotly>=5.0; extra == "dashboard"
+ Provides-Extra: api
+ Requires-Dist: fastapi>=0.109.0; extra == "api"
+ Requires-Dist: uvicorn[standard]>=0.27.0; extra == "api"
+ Requires-Dist: pydantic>=2.0; extra == "api"
+ Provides-Extra: dev
+ Requires-Dist: pytest>=7.0; extra == "dev"
+ Requires-Dist: black>=22.0; extra == "dev"
+ Requires-Dist: ruff>=0.3.0; extra == "dev"
+ Requires-Dist: httpx>=0.25.0; extra == "dev"
+ Dynamic: license-file
+
+ # Triadic Neurosymbolic Engine
+
+ [![License: BUSL-1.1](https://img.shields.io/badge/License-BUSL--1.1-blue.svg)](https://mariadb.com/bsl11/)
+ [![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
+ [![PyPI](https://img.shields.io/pypi/v/triadic-engine.svg)](https://pypi.org/project/triadic-engine/)
+ [![CI](https://github.com/arturoornelasb/Triadic-Neurosymbolic-Engine/actions/workflows/ci.yml/badge.svg)](https://github.com/arturoornelasb/Triadic-Neurosymbolic-Engine/actions)
+ [![DOI Software](https://zenodo.org/badge/DOI/10.5281/zenodo.18748671.svg)](https://doi.org/10.5281/zenodo.18748671)
+ [![DOI Paper](https://zenodo.org/badge/DOI/10.5281/zenodo.19205805.svg)](https://doi.org/10.5281/zenodo.19205805)
+
+ **A deterministic algebraic framework for neurosymbolic validation, semantic projection, and AI model auditing.**
+
+ Cosine similarity tells you *"King and Queen are 0.87 similar"* — a black-box number.
+
+ The Triadic Engine tells you *"King = 2×3×5 and Queen = 2×5×7. They share {2,5} (Royalty). King has {3} (Male) that Queen lacks. Queen has {7} (Female) that King lacks."* — a fully transparent, deterministic decomposition.
+
+ ---
+
+ ## Why not cosine similarity?
+
+ | | Cosine Similarity | **Triadic Engine** |
+ |---|:---:|:---:|
+ | Speed (50K pairs) | baseline | **28.4× faster** |
+ | Explainability | Black box | ✅ Prime factor proof |
+ | Subsumption (`A ⊆ B`?) | ❌ Approximation | ✅ Exact (`Φ(A) mod Φ(B) == 0`) |
+ | Composition (`A ∪ B`) | ❌ Geometric average | ✅ `lcm(Φ(A), Φ(B))` |
+ | Gap analysis | ❌ Not possible | ✅ `gcd` + quotient decomposition |
+ | Determinism | ❌ Seed-dependent | ✅ PCA / contrastive modes |
+ | AI model audit | ❌ Not supported | ✅ Topological discrepancy |
+
+ ---
+
+ ## Install
+
+ ```bash
+ pip install triadic-engine
+
+ # With optional extras
+ pip install "triadic-engine[dashboard]"  # Streamlit dashboard
+ pip install "triadic-engine[api]"        # FastAPI server
+ ```
+
+ ---
+
+ ## Quickstart
+
+ ```python
+ from neurosym import ContinuousEncoder, DiscreteMapper, DiscreteValidator
+
+ encoder = ContinuousEncoder("all-MiniLM-L6-v2")
+
+ # Choose a projection mode:
+ mapper = DiscreteMapper(n_bits=8, projection="pca")            # Deterministic, corpus-adapted
+ # mapper = DiscreteMapper(n_bits=8, projection="random")       # Classic LSH
+ # mapper = DiscreteMapper(n_bits=8, projection="consensus")    # Multi-seed noise filtering
+ # mapper = DiscreteMapper(n_bits=8, projection="contrastive",  # Supervised
+ #                         hypernym_pairs=[("Animal","Dog"), ("Vehicle","Car")])
+
+ concepts = ["King", "Queen", "Man", "Woman"]
+ embeddings = encoder.encode(concepts)
+ prime_map = mapper.fit_transform(concepts, embeddings)
+
+ validator = DiscreteValidator()
+
+ print(validator.subsumes(prime_map["King"], prime_map["Queen"]))
+ # → False (King does not contain ALL features of Queen)
+
+ print(validator.explain_gap(prime_map["King"], prime_map["Queen"]))
+ # → {"shared": 10, "only_in_a": 3, "only_in_b": 7, "a_contains_b": False, "b_contains_a": False}
+
+ print(validator.compose(prime_map["King"], prime_map["Queen"]))
+ # → LCM of both — a new integer containing all features of King AND Queen
+
+ # Analogy: King:Man :: Queen:?
+ result = validator.analogy_prediction(prime_map["King"], prime_map["Man"], prime_map["Queen"])
+ print(result.output_value)  # → predicted integer for "Woman"
+ ```
+
+ ---
+
+ ## How It Works
+
+ ```
+ Text → Neural Embedding → LSH Hyperplanes → Composite Prime Integer
+        (R^384)            (k projections)    (Φ(x) = ∏ pᵢ)
+ ```
+
+ Each concept becomes a single integer whose **prime factors are its semantic features**. This enables three operations **impossible** under cosine similarity:
+
+ | Operation | Math | What it answers |
+ |-----------|------|-----------------|
+ | **Subsumption** | `Φ(A) mod Φ(B) == 0` | "Does A contain every feature of B?" |
+ | **Composition** | `lcm(Φ(A), Φ(B))` | "What concept has all features of both A and B?" |
+ | **Gap Analysis** | `gcd(Φ(A), Φ(B))` + quotients | "Which features do they share? Which are unique?" |
+
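As a concrete sketch, the three operations in the table above reduce to Python's built-in integer arithmetic. The feature-to-prime assignments below are illustrative stand-ins (not the engine's actual mapping), but they reproduce the numbers shown in the Quickstart:

```python
from math import gcd, lcm

# Illustrative signatures (hypothetical feature→prime assignment):
# 2 = Royalty, 3 = Male, 5 = Human, 7 = Female
king = 2 * 3 * 5    # Φ(King)  = 30
queen = 2 * 5 * 7   # Φ(Queen) = 70

# Subsumption: A contains every feature of B iff Φ(B) divides Φ(A).
def subsumes(a: int, b: int) -> bool:
    return a % b == 0

# Composition: smallest signature holding all features of both.
composed = lcm(king, queen)        # 2·3·5·7 = 210

# Gap analysis: shared features plus each side's surplus.
shared = gcd(king, queen)          # 2·5 = 10 (Royalty, Human)
only_in_king = king // shared      # 3 (Male)
only_in_queen = queen // shared    # 7 (Female)

print(subsumes(king, queen))     # → False (30 % 70 != 0)
print(subsumes(composed, king))  # → True  (210 % 30 == 0)
print(shared, only_in_king, only_in_queen)  # → 10 3 7
```

Note that `10 3 7` matches the `explain_gap` output in the Quickstart (`shared: 10, only_in_a: 3, only_in_b: 7`).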
+ ---
+
+ ## Projection Modes
+
+ | Mode | Deterministic | Requires Labels | Best For |
+ |------|:---:|:---:|----------|
+ | `random` | ✗ (seed-dependent) | ✗ | Baseline, exploration |
+ | `pca` | ✓ | ✗ | Production, reproducibility |
+ | `consensus` | ✓ | ✗ | Noise filtering, stability analysis |
+ | `contrastive` | ✓ | ✓ (hypernym pairs) | Maximum accuracy (100% TP at k=6) |
+
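For intuition, the seed-dependent `random` mode's LSH→prime step can be sketched as follows. This is a hedged illustration, not the package's implementation: `embedding_to_prime` and the set-bit-to-i-th-prime convention are assumptions; `sympy` (already a listed dependency) supplies the n-th prime.

```python
import numpy as np
from sympy import prime  # prime(n) returns the n-th prime: prime(1) == 2

def embedding_to_prime(vec: np.ndarray, hyperplanes: np.ndarray) -> int:
    """Hypothetical sketch: one LSH sign bit per hyperplane; each set
    bit contributes the corresponding prime factor to Φ(x)."""
    bits = (hyperplanes @ vec) > 0       # k sign bits
    phi = 1
    for i, bit in enumerate(bits):
        if bit:
            phi *= prime(i + 1)          # 2, 3, 5, 7, ...
    return int(phi)

rng = np.random.default_rng(42)          # seed-dependent, as the table notes
planes = rng.standard_normal((8, 384))   # k = 8 projections over R^384
vec = rng.standard_normal(384)
phi = embedding_to_prime(vec, planes)    # square-free product of the first 8 primes
print(phi)
```

Because the hyperplanes are drawn from a seeded RNG, changing the seed changes every signature, which is exactly why the `pca` and `contrastive` modes exist.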
+ ---
+
+ ## Core Modules
+
+ | Module | Description |
+ |--------|-------------|
+ | `neurosym.encoder` | Multi-backend embedding encoder (HuggingFace, OpenAI, Cohere) + 4-mode LSH→Prime projection |
+ | `neurosym.triadic` | Algebraic validation: subsumption, composition, abductive gap analysis |
+ | `neurosym.graph` | Scalable graph builder with inverted prime index (avoids O(N²)) |
+ | `neurosym.storage` | SQLite persistence for prime indices and audit results |
+ | `neurosym.reports` | Exportable reports in HTML, JSON, and CSV formats |
+ | `neurosym.ingest` | DataFrame ingestion with inverted prime index and semantic search |
+ | `neurosym.anomaly` | Multiplicative anomaly detection for tabular data |
+
+ ---
+
+ ## Use Cases
+
+ **Explainable RAG** — Instead of returning top-k by cosine score, return documents whose prime signatures *subsume* the query signature. Every result is provably relevant.
+
+ **AI Model Auditing** — Detect when two LLMs structure the same concept differently. The engine found 108,694 discrepancies auditing 2M semantic chains across two embedding models.
+
+ **Semantic Deduplication** — Two records are semantic duplicates if each signature divides the other: `Φ(A) mod Φ(B) == 0` and `Φ(B) mod Φ(A) == 0`, i.e. `Φ(A) == Φ(B)`. Exact, not probabilistic.
+
+ **Compliance Validation** — Verify that "GDPR" subsumes "consent" and "data-subject-rights" in your ontology. Machine-checkable, not fuzzy.
+
+ **Anomaly Detection** — Tabular rows that break the multiplicative patterns of their peers are flagged as anomalies — with a proof, not just a score.
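The retrieval and deduplication use cases above reduce to divisibility tests. A minimal sketch with made-up signatures (the corpus integers are illustrative, not real engine output):

```python
# Hypothetical prime signatures for a toy corpus (illustrative values only).
corpus = {
    "doc_royalty": 2 * 5 * 11,           # contains the query's features
    "doc_animals": 3 * 7 * 13,           # disjoint from the query
    "doc_both":    2 * 3 * 5 * 7 * 11,   # contains them too
}
query = 2 * 5  # query signature

# Explainable RAG: keep only documents whose signature subsumes the query.
hits = [name for name, phi in corpus.items() if phi % query == 0]
print(hits)  # → ['doc_royalty', 'doc_both']

# Semantic deduplication: duplicates iff each signature divides the other.
def is_duplicate(a: int, b: int) -> bool:
    return a % b == 0 and b % a == 0

print(is_duplicate(2 * 3 * 5, 5 * 3 * 2))  # → True (same factors, any order)
print(is_duplicate(2 * 3 * 5, 2 * 3))      # → False (subsumption, not duplication)
```

Every hit comes with a proof: the quotient `phi // query` is exactly the surplus features the document adds beyond the query.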
+
+ ---
+
+ ## Interactive Dashboard
+
+ ```bash
+ pip install "triadic-engine[dashboard]"
+ streamlit run app.py
+ ```
+
+ Six tabs: **Ingestion & Encoding**, **Semantic Graph**, **Logic & Search**, **AI Auditor**, **Anomaly Detection**, **Benchmarks**.
+
+ The AI Auditor compares how different embedding models structure the same concepts using topological shortest-path differencing — finding exact structural discrepancies between models.
+
+ ---
+
+ ## REST API
+
+ ```bash
+ pip install "triadic-engine[api]"
+ uvicorn api.server:app --host 0.0.0.0 --port 8000
+ ```
+
+ | Endpoint | Method | Description |
+ |----------|--------|-------------|
+ | `/health` | GET | Engine status and loaded concepts count |
+ | `/encode` | POST | Encode concepts into composite prime integers |
+ | `/audit` | POST | Compare two embedding models topologically |
+ | `/search` | POST | GCD-based semantic search over indexed concepts |
+ | `/report` | GET | Export engine state as HTML, JSON, or CSV |
+
+ Interactive docs at `http://localhost:8000/docs` (Swagger UI).
+
+ ---
+
+ ## CLI Tools
+
+ ```bash
+ # Massive topological audit (model vs model)
+ python scripts/triadic_auditor.py --input examples/data/wordnet_2k.csv --col concept --output reports/audit.csv
+
+ # PCA vs Random vs Consensus vs Contrastive benchmark
+ python scripts/benchmark_pca.py
+ ```
+
+ ---
+
+ ## Benchmarks
+
+ | Metric | Result |
+ |--------|--------|
+ | Pairwise verification speed | **28.4× faster** than cosine (50K operations) |
+ | Composition guarantee | **100%** verified across 5,671 word pairs |
+ | Hypernym detection accuracy | **100% TP** with contrastive projection at k=6 |
+ | Model audit scale | **108,694 discrepancies** in 2M semantic chains (2 models) |
+
+ ---
+
+ ## Academic Paper
+
+ Full paper with 9 experiments: [`paper/`](paper/)
+
+ ```bash
+ make paper  # requires pdflatex + bibtex
+ ```
+
+ ---
+
+ ## Citation
+
+ ### Paper
+
+ [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.19205805.svg)](https://doi.org/10.5281/zenodo.19205805)
+
+ Ornelas Brand, J. A. (2026). Triadic Neurosymbolic Engine: Prime Factorization as a Neurosymbolic Bridge: Projecting Continuous Embeddings into Discrete Algebraic Space for Deterministic Verification. Zenodo. https://doi.org/10.5281/zenodo.19205805
+
+ ```bibtex
+ @article{ornelas2026prime,
+   author = {Ornelas Brand, J. Arturo},
+   title  = {Triadic Neurosymbolic Engine: Prime Factorization as a
+             Neurosymbolic Bridge: Projecting Continuous Embeddings
+             into Discrete Algebraic Space for Deterministic Verification},
+   year   = 2026,
+   month  = mar,
+   doi    = {10.5281/zenodo.19205805},
+   url    = {https://doi.org/10.5281/zenodo.19205805}
+ }
+ ```
+
+ ### Repository
+
+ [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.18748671.svg)](https://doi.org/10.5281/zenodo.18748671)
+
+ Ornelas Brand, J. A. (2026). Prime Factorization as a Neurosymbolic Bridge: Projecting Continuous Embeddings into Discrete Algebraic Space for Deterministic Verification (Repository) (0.1.0). Zenodo. https://doi.org/10.5281/zenodo.18748671
+
+ ```bibtex
+ @software{ornelas2026triadic,
+   author  = {Ornelas Brand, J. Arturo},
+   title   = {Prime Factorization as a Neurosymbolic Bridge: Projecting
+              Continuous Embeddings into Discrete Algebraic Space
+              for Deterministic Verification (Repository)},
+   year    = 2026,
+   month   = feb,
+   version = {0.1.0},
+   doi     = {10.5281/zenodo.18748671},
+   url     = {https://doi.org/10.5281/zenodo.18748671}
+ }
+ ```
+
+ ---
+
+ ## Project Structure
+
+ ```
+ ├── src/neurosym/    ← Core Python package (pip installable)
+ ├── api/             ← FastAPI REST server
+ ├── app.py           ← Streamlit interactive dashboard
+ ├── paper/           ← Academic paper (LaTeX, 12 pages)
+ ├── scripts/         ← CLI auditing & benchmark tools
+ ├── tests/           ← Test suite
+ ├── notebooks/       ← Reproducibility demo (Jupyter)
+ ├── examples/        ← Sample datasets (WordNet, e-commerce)
+ └── pyproject.toml   ← Package metadata & dependencies
+ ```
+
+ ---
+
+ ## License
+
+ **Business Source License 1.1 (BUSL-1.1)**
+
+ | | Allowed |
+ |---|---|
+ | Individuals / personal projects / freelancing | ✅ Free |
+ | Academic / research institutions | ✅ Free |
+ | Non-profit organizations | ✅ Free |
+ | For-profit companies (production use) | ❌ Requires participation agreement |
+
+ All users must contribute improvements back. See [TERMS.md](TERMS.md).
+ Companies: see [COMMERCIAL.md](COMMERCIAL.md) for the consortium participation model.
+
+ **Change Date:** 2030-03-21 — auto-converts to AGPL-3.0.
+
+ Contact: arturoornelas62@gmail.com
+
+ © 2026 J. Arturo Ornelas Brand