eyeling 1.19.4 → 1.19.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/HANDBOOK.md +48 -89
- package/examples/deck/extra.md +169 -0
- package/examples/extra/collatz-1000.js +138 -0
- package/examples/extra/control-system.js +68 -0
- package/examples/extra/deep-taxonomy-100000.js +95 -0
- package/examples/extra/delfour.js +110 -0
- package/examples/extra/euler-identity.js +41 -0
- package/examples/extra/fibonacci.js +81 -0
- package/examples/extra/goldbach-1000.js +112 -0
- package/examples/extra/gps.js +274 -0
- package/examples/extra/kaprekar-6174.js +112 -0
- package/examples/extra/matrix-mechanics.js +69 -0
- package/examples/extra/odrl-dpv-ehds-risk-ranked.js +255 -0
- package/examples/extra/output/collatz-1000.txt +18 -0
- package/examples/extra/output/control-system.txt +14 -0
- package/examples/extra/output/deep-taxonomy-100000.txt +15 -0
- package/examples/extra/output/delfour.txt +20 -0
- package/examples/extra/output/euler-identity.txt +12 -0
- package/examples/extra/output/fibonacci.txt +21 -0
- package/examples/extra/output/goldbach-1000.txt +17 -0
- package/examples/extra/output/gps.txt +33 -0
- package/examples/extra/output/kaprekar-6174.txt +17 -0
- package/examples/extra/output/matrix-mechanics.txt +14 -0
- package/examples/extra/output/odrl-dpv-ehds-risk-ranked.txt +48 -0
- package/examples/extra/output/path-discovery.txt +28 -0
- package/examples/extra/output/pn-junction-tunneling.txt +15 -0
- package/examples/extra/output/polynomial.txt +20 -0
- package/examples/extra/output/sudoku.txt +47 -0
- package/examples/extra/output/transistor-switch.txt +16 -0
- package/examples/extra/path-discovery.js +45114 -0
- package/examples/extra/pn-junction-tunneling.js +69 -0
- package/examples/extra/polynomial.js +181 -0
- package/examples/extra/sudoku.js +330 -0
- package/examples/extra/transistor-switch.js +93 -0
- package/examples/fibonacci.n3 +2 -0
- package/examples/output/fibonacci.n3 +1 -0
- package/eyeling.js +49 -45
- package/lib/engine.js +49 -45
- package/package.json +3 -2
- package/test/extra.test.js +100 -0
package/HANDBOOK.md
CHANGED
|
@@ -2955,104 +2955,50 @@ That is why the result is 100%.
|
|
|
2955
2955
|
|
|
2956
2956
|
## Appendix F — The ARC approach: Answer • Reason Why • Check
|
|
2957
2957
|
|
|
2958
|
-
A
|
|
2958
|
+
A simple way to write a good Eyeling program is to make it do three things in one file:
|
|
2959
2959
|
|
|
2960
|
-
>
|
|
2960
|
+
> give the answer, say why, and check that it really holds.
|
|
2961
2961
|
|
|
2962
|
-
|
|
2962
|
+
That is the ARC approach: **Answer • Reason Why • Check**.
|
|
2963
2963
|
|
|
2964
|
-
|
|
2964
|
+
The idea is not to make the program more grand or formal. It is to make it more useful. A bare result is often not enough. A reader also wants to see the small reason that matters, and to know that the program will fail loudly if an important assumption is wrong.
|
|
2965
2965
|
|
|
2966
|
-
|
|
2967
|
-
2. Logic
|
|
2968
|
-
3. A Question
|
|
2966
|
+
In Eyeling this style comes quite naturally. Facts hold the data. Rules derive the conclusion. `log:outputString` can turn the conclusion into readable output. And a rule that concludes `false` acts as a fuse: if a bad condition becomes provable, the run stops instead of quietly producing a misleading result.
|
|
2969
2967
|
|
|
2970
|
-
|
|
2968
|
+
### F.1 What the three parts mean
|
|
2971
2969
|
|
|
2972
|
-
|
|
2970
|
+
The **Answer** is the direct result. It should be short and easy to recognize. In many Eyeling files it is a final recommendation, a route, a computed value, a decision such as `allowed` or `blocked`, or a small report line emitted with `log:outputString`.
|
|
2973
2971
|
|
|
2974
|
-
|
|
2972
|
+
The **Reason Why** is the compact explanation. It is not hidden chain-of-thought and it does not need to be long. Usually it is just the witness, threshold, policy, path, or intermediate fact that made the answer follow. A good reason tells the reader what mattered.
|
|
2975
2973
|
|
|
2976
|
-
The **
|
|
2974
|
+
The **Check** is the part that keeps the program honest. It should do more than repeat the answer in different words. A good check tests something that could really fail: a structural invariant, a recomputed quantity, a boundary condition, or a rule that derives `false` when the answer would be inconsistent with the inputs.
|
|
2977
2975
|
|
|
2978
|
-
|
|
2976
|
+
A short way to remember ARC is this:
|
|
2979
2977
|
|
|
2980
|
-
|
|
2981
|
-
- the selected item
|
|
2982
|
-
- the computed value
|
|
2983
|
-
- the resulting classification
|
|
2978
|
+
> an answer tells you **what** happened, a reason tells you **why**, and a check tells you **whether you should trust it**.
|
|
2984
2979
|
|
|
2985
|
-
|
|
2980
|
+
### F.2 Why this fits Eyeling well
|
|
2986
2981
|
|
|
2987
|
-
|
|
2982
|
+
ARC is not an extra subsystem in Eyeling. It is mostly a good habit.
|
|
2988
2983
|
|
|
2989
|
-
|
|
2984
|
+
Eyeling already separates data from logic. It already lets you derive readable output instead of printing ad hoc text during proof search. And it already has a very strong notion of validation through inference fuses. So ARC is really just a clean way to organize an ordinary Eyeling file so that a human reader can see the result, the explanation, and the safety net together.
|
|
2990
2985
|
|
|
2991
|
-
This is
|
|
2986
|
+
This is especially useful for examples. A newcomer can run the file and see what it does. A maintainer can inspect the few rules that justify the result. And an external developer can tell whether the example merely prints something nice or actually checks itself.
|
|
2992
2987
|
|
|
2993
|
-
|
|
2994
|
-
- the governing rule or policy
|
|
2995
|
-
- the key intermediate facts
|
|
2996
|
-
- the condition that made the conclusion follow
|
|
2988
|
+
### F.3 A simple pattern to follow
|
|
2997
2989
|
|
|
2998
|
-
|
|
2990
|
+
A practical ARC-style Eyeling file often has four visible layers.
|
|
2999
2991
|
|
|
3000
|
-
|
|
2992
|
+
First come the **facts**: the input data, parameters, thresholds, policies, or known relationships. Then comes the **logic**: the rules that derive the internal conclusion. Then comes the **presentation**: rules that turn the result into `log:outputString` lines or other report facts. Finally come the **checks**: rules that validate the result or trigger `false` when an invariant is broken.
|
|
3001
2993
|
|
|
3002
|
-
|
|
2994
|
+
You do not have to separate these layers perfectly, but it helps a lot when the file reads in roughly that order.
|
|
3003
2995
|
|
|
3004
|
-
|
|
3005
|
-
|
|
3006
|
-
In Eyeling, Checks are a natural fit for either:
|
|
3007
|
-
|
|
3008
|
-
- derived facts such as `:ok :signatureVerified true .`, or
|
|
3009
|
-
- inference fuses such as `{ ... } => false .` when a violation must stop execution.
|
|
3010
|
-
|
|
3011
|
-
This makes verification part of the program itself rather than something left to external commentary.
|
|
3012
|
-
|
|
3013
|
-
### F.2 Proof = Reason Why + Check
|
|
3014
|
-
|
|
3015
|
-
ARC summarizes its trust model as:
|
|
3016
|
-
|
|
3017
|
-
> Proof = Reason Why + Check
|
|
3018
|
-
|
|
3019
|
-
That is a practical notion of proof. The Reason Why explains the logic in human terms. The Check verifies that the critical conditions actually hold at runtime.
|
|
3020
|
-
|
|
3021
|
-
For many real workflows, that combination is more useful than a bare result: it is inspectable, repeatable, and suitable for automation.
|
|
3022
|
-
|
|
3023
|
-
### F.3 Why ARC fits Eyeling well
|
|
3024
|
-
|
|
3025
|
-
Eyeling already encourages the separation that ARC needs.
|
|
3026
|
-
|
|
3027
|
-
Rules derive facts. Facts can include output facts. Output is not printed eagerly during proof search; instead, `log:outputString` facts are collected from the final closure and rendered deterministically whenever they are present. This makes it natural to derive a structured Answer and Reason Why as part of the logic itself.
|
|
3028
|
-
|
|
3029
|
-
Checks also map well to Eyeling. A rule with conclusion `false` acts as an inference fuse: if its body becomes provable, execution stops with a hard failure. This is exactly the behavior we want for “must-hold” conditions.
|
|
3030
|
-
|
|
3031
|
-
So ARC in Eyeling is not an add-on. It is mostly a disciplined way of organizing what Eyeling already does well: derive conclusions, expose supporting facts, and enforce invariants.
|
|
3032
|
-
|
|
3033
|
-
### F.4 A practical pattern
|
|
3034
|
-
|
|
3035
|
-
A simple ARC-oriented Eyeling file often has four layers:
|
|
3036
|
-
|
|
3037
|
-
1. **Facts**
|
|
3038
|
-
Input data, parameters, policies, and known relationships.
|
|
3039
|
-
|
|
3040
|
-
2. **Logic**
|
|
3041
|
-
Rules that derive the program’s internal conclusions.
|
|
3042
|
-
|
|
3043
|
-
3. **Presentation**
|
|
3044
|
-
Rules that turn derived conclusions into `log:outputString` lines for the Answer and Reason Why.
|
|
3045
|
-
|
|
3046
|
-
4. **Verification**
|
|
3047
|
-
Rules that derive check facts or trigger inference fuses on violations.
|
|
3048
|
-
|
|
3049
|
-
A useful habit is to keep these layers visually separate in the file.
|
|
3050
|
-
|
|
3051
|
-
### F.5 A tiny template
|
|
2996
|
+
### F.4 A tiny template
|
|
3052
2997
|
|
|
3053
2998
|
```n3
|
|
3054
2999
|
@prefix : <http://example.org/> .
|
|
3055
3000
|
@prefix log: <http://www.w3.org/2000/10/swap/log#> .
|
|
3001
|
+
@prefix math: <http://www.w3.org/2000/10/swap/math#> .
|
|
3056
3002
|
|
|
3057
3003
|
# Facts
|
|
3058
3004
|
:case :input 42 .
|
|
@@ -3078,25 +3024,38 @@ A useful habit is to keep these layers visually separate in the file.
|
|
|
3078
3024
|
=> false .
|
|
3079
3025
|
```
|
|
3080
3026
|
|
|
3081
|
-
The exact
|
|
3027
|
+
The exact wording can vary. The important thing is the shape: derive the result, make the key reason visible, and include at least one check that could fail for a real reason.
|
|
3028
|
+
|
|
3029
|
+
### F.5 What a good check looks like
|
|
3030
|
+
|
|
3031
|
+
A good check is not a decorative `:ok true` line. It should add real confidence.
|
|
3032
|
+
|
|
3033
|
+
Sometimes that means recomputing a quantity from another angle. Sometimes it means checking a witness path instead of only the summary result. Sometimes it means making sure a threshold really was crossed, or that a list or graph has the shape the rest of the program assumes. And sometimes the right check is simply an inference fuse that says: if this contradiction appears, stop.
|
|
3034
|
+
|
|
3035
|
+
The point is not to make checks large. The point is to make them real.
|
|
3036
|
+
|
|
3037
|
+
### F.6 Examples in `examples/` that read well in ARC style
|
|
3038
|
+
|
|
3039
|
+
The following examples are especially good places to see this style in practice.
|
|
3040
|
+
|
|
3041
|
+
- [`examples/delfour.n3`](examples/delfour.n3) — privacy-preserving shopping assistance with a concrete recommendation, an explanation, and policy checks. Expected output: [`examples/output/delfour.n3`](examples/output/delfour.n3)
|
|
3042
|
+
- [`examples/control-system.n3`](examples/control-system.n3) — derives actuator decisions, explains the control basis, and checks the result. Expected output: [`examples/output/control-system.n3`](examples/output/control-system.n3)
|
|
3043
|
+
- [`examples/deep-taxonomy-100000.n3`](examples/deep-taxonomy-100000.n3) — a deep classification stress test whose answer is whether the final goal class is reached. Expected output: [`examples/output/deep-taxonomy-100000.n3`](examples/output/deep-taxonomy-100000.n3)
|
|
3044
|
+
- [`examples/gps.n3`](examples/gps.n3) — route planning with readable route output. Expected output: [`examples/output/gps.n3`](examples/output/gps.n3)
|
|
3045
|
+
- [`examples/sudoku.n3`](examples/sudoku.n3) — solver output plus legality and consistency checks. Expected output: [`examples/output/sudoku.n3`](examples/output/sudoku.n3)
|
|
3082
3046
|
|
|
3083
|
-
### F.
|
|
3047
|
+
### F.7 How to read an ARC-style example
|
|
3084
3048
|
|
|
3085
|
-
|
|
3049
|
+
A good way to read one of these files is to start with the question in the comments or input facts. Then find the part that gives the answer. Then trace the few rules that explain why that answer follows. Finally, look for the checks: the validation facts, the recomputation, or the `=> false` fuse that would stop the run if something important were wrong.
|
|
3086
3050
|
|
|
3087
|
-
|
|
3088
|
-
- replacing checks with prose
|
|
3089
|
-
- hiding the important assumptions
|
|
3090
|
-
- relying on “trust me” comments outside the executable artifact
|
|
3051
|
+
That reading order keeps the example grounded in observable behavior rather than in source code alone.
|
|
3091
3052
|
|
|
3092
|
-
|
|
3053
|
+
### F.8 What ARC is not
|
|
3093
3054
|
|
|
3094
|
-
|
|
3055
|
+
ARC does not mean wrapping every file in ceremony. It does not mean long prose explanations. It does not mean hiding important assumptions in comments while the executable part stays thin. And it does not mean replacing checks with a confident tone.
|
|
3095
3056
|
|
|
3096
|
-
|
|
3057
|
+
A file really follows ARC only when the answer, the explanation, and the validation all live in the program itself.
|
|
3097
3058
|
|
|
3098
|
-
|
|
3099
|
-
- **Reason Why** for the key supporting explanation
|
|
3100
|
-
- **Check** for invariants and fail-loud validation
|
|
3059
|
+
### F.9 Why this style is worth using
|
|
3101
3060
|
|
|
3102
|
-
This
|
|
3061
|
+
This style is worth using because it makes an Eyeling file easier to run, easier to inspect, and easier to trust. The result is visible. The key reason is visible. The check is visible. That makes examples better teaching material, makes policy or computation examples easier to audit, and makes the whole file more reusable as a small reasoning artifact instead of an opaque session transcript.
|
|
@@ -0,0 +1,169 @@
|
|
|
1
|
+
# ARC specializations in `examples/extra/`
|
|
2
|
+
|
|
3
|
+
For the general ARC pattern in Eyeling, start with **Appendix F** of the handbook:
|
|
4
|
+
|
|
5
|
+
<https://eyereasoner.github.io/eyeling/HANDBOOK#app-f>
|
|
6
|
+
|
|
7
|
+
That appendix explains the core shape:
|
|
8
|
+
|
|
9
|
+
**Answer • Reason Why • Check**
|
|
10
|
+
|
|
11
|
+
This page is about something more specific.
|
|
12
|
+
|
|
13
|
+
The programs in `examples/extra/` are **high-performance specializations** of selected ARC-style N3 cases from `examples/`.
|
|
14
|
+
|
|
15
|
+
So the main collection in `examples/` shows the ARC approach in its most declarative Eyeling form:
|
|
16
|
+
|
|
17
|
+
- data and logic written in N3,
|
|
18
|
+
- a precise question,
|
|
19
|
+
- a visible answer,
|
|
20
|
+
- a readable reason why,
|
|
21
|
+
- and an explicit check.
|
|
22
|
+
|
|
23
|
+
The `examples/extra/` collection keeps that same trust contract, but packages part of the work into compact JavaScript drivers intended for fast execution.
|
|
24
|
+
|
|
25
|
+
---
|
|
26
|
+
|
|
27
|
+
## The idea
|
|
28
|
+
|
|
29
|
+
ARC is not only about getting an answer.
|
|
30
|
+
|
|
31
|
+
It is about producing a result that can be:
|
|
32
|
+
|
|
33
|
+
- read,
|
|
34
|
+
- rerun,
|
|
35
|
+
- checked,
|
|
36
|
+
- and audited.
|
|
37
|
+
|
|
38
|
+
That remains true here.
|
|
39
|
+
|
|
40
|
+
What changes in `examples/extra/` is the execution strategy.
|
|
41
|
+
|
|
42
|
+
These cases begin from the same broad ARC mindset as the N3 examples in `examples/`, but they are shaped as specialized programs so that repeated execution is small, direct, and efficient.
|
|
43
|
+
|
|
44
|
+
In other words, they are not a different philosophy. They are a different operational form.
|
|
45
|
+
|
|
46
|
+
---
|
|
47
|
+
|
|
48
|
+
## From declarative case to specialized driver
|
|
49
|
+
|
|
50
|
+
A useful way to think about the relationship is this:
|
|
51
|
+
|
|
52
|
+
- **`examples/`** presents ARC cases in declarative Eyeling form.
|
|
53
|
+
- **`examples/extra/`** presents some of those cases as specialized executable artifacts.
|
|
54
|
+
|
|
55
|
+
The declarative version is ideal for seeing the logic in the open. The specialized version is ideal when the logical structure is already known and you want a compact program that runs very quickly while still delivering the same ARC-style shape of result.
|
|
56
|
+
|
|
57
|
+
So `examples/extra/` should be read as a performance-oriented companion to part of the N3 collection, not as a replacement for it.
|
|
58
|
+
|
|
59
|
+
---
|
|
60
|
+
|
|
61
|
+
## In the spirit of Ershov’s mixed computation
|
|
62
|
+
|
|
63
|
+
This collection is in the spirit of **Ershov’s mixed computation**.
|
|
64
|
+
|
|
65
|
+
The central intuition is that some parts of a computation are stable enough to be fixed ahead of time, while the remaining part should stay lightweight and ready for fast execution.
|
|
66
|
+
|
|
67
|
+
Applied here, that means:
|
|
68
|
+
|
|
69
|
+
- the logical structure of a case is treated as something that can be specialized,
|
|
70
|
+
- the resulting program becomes smaller and more direct,
|
|
71
|
+
- and runtime focuses on carrying out the already-shaped computation efficiently.
|
|
72
|
+
|
|
73
|
+
That gives these examples a useful balance:
|
|
74
|
+
|
|
75
|
+
- they remain recognizable as ARC cases,
|
|
76
|
+
- but they also behave like efficient specialized programs.
|
|
77
|
+
|
|
78
|
+
So the emphasis is not only on declarative clarity, but on **declarative clarity carried into fast operational form**.
|
|
79
|
+
|
|
80
|
+
---
|
|
81
|
+
|
|
82
|
+
## What is preserved
|
|
83
|
+
|
|
84
|
+
Although these cases are specialized for speed, the important ARC promises remain the same.
|
|
85
|
+
|
|
86
|
+
A good case in `examples/extra/` still aims to provide:
|
|
87
|
+
|
|
88
|
+
### 1. A clear answer
|
|
89
|
+
|
|
90
|
+
The program should make the main result easy to identify.
|
|
91
|
+
|
|
92
|
+
### 2. A visible reason why
|
|
93
|
+
|
|
94
|
+
The run should expose the key explanation, witness, derivation, or summary that tells the reader why the result follows.
|
|
95
|
+
|
|
96
|
+
### 3. A real check
|
|
97
|
+
|
|
98
|
+
The case should validate something substantial, not merely restate the conclusion. A check should be capable of failing for a meaningful reason.
|
|
99
|
+
|
|
100
|
+
### 4. Repeatability
|
|
101
|
+
|
|
102
|
+
The program should be easy to run again, inspect again, and compare again.
|
|
103
|
+
|
|
104
|
+
That is why these examples belong with the ARC material rather than merely beside it. They preserve the same trust pattern while changing the performance profile.
|
|
105
|
+
|
|
106
|
+
---
|
|
107
|
+
|
|
108
|
+
## Why keep both forms
|
|
109
|
+
|
|
110
|
+
There is value in having both the declarative N3 cases and the specialized JavaScript cases in one project.
|
|
111
|
+
|
|
112
|
+
The N3 versions are excellent for:
|
|
113
|
+
|
|
114
|
+
- understanding the logic,
|
|
115
|
+
- reviewing the rules,
|
|
116
|
+
- teaching the method,
|
|
117
|
+
- and seeing the Eyeling style directly.
|
|
118
|
+
|
|
119
|
+
The specialized versions are excellent for:
|
|
120
|
+
|
|
121
|
+
- fast execution,
|
|
122
|
+
- compact deployment,
|
|
123
|
+
- repeated reruns,
|
|
124
|
+
- and performance-oriented demonstration.
|
|
125
|
+
|
|
126
|
+
Taken together, they show two complementary strengths:
|
|
127
|
+
|
|
128
|
+
1. **Eyeling as a declarative reasoning system**, and
|
|
129
|
+
2. **ARC cases as candidates for efficient specialization**.
|
|
130
|
+
|
|
131
|
+
---
|
|
132
|
+
|
|
133
|
+
## How to read this collection
|
|
134
|
+
|
|
135
|
+
A good way to approach `examples/extra/` is:
|
|
136
|
+
|
|
137
|
+
1. Read the general ARC introduction in the handbook appendix.
|
|
138
|
+
2. View the N3 examples in `examples/` as the declarative source style.
|
|
139
|
+
3. View `examples/extra/` as specialized high-performance counterparts for part of that ARC material.
|
|
140
|
+
|
|
141
|
+
That perspective makes the role of the collection clear.
|
|
142
|
+
|
|
143
|
+
It is not a random set of auxiliary programs. It is a demonstration that ARC-style cases can remain auditable while also being pushed toward compact, high-speed execution.
|
|
144
|
+
|
|
145
|
+
---
|
|
146
|
+
|
|
147
|
+
## Running the collection
|
|
148
|
+
|
|
149
|
+
Run the suite with:
|
|
150
|
+
|
|
151
|
+
```sh
|
|
152
|
+
node test/extra.test.js
|
|
153
|
+
```
|
|
154
|
+
|
|
155
|
+
Or through the package script:
|
|
156
|
+
|
|
157
|
+
```sh
|
|
158
|
+
npm run test:extra
|
|
159
|
+
```
|
|
160
|
+
|
|
161
|
+
This executes the programs in `examples/extra/` and writes their standard output to `examples/extra/output/`.
|
|
162
|
+
|
|
163
|
+
The saved outputs make the collection easy to rerun, review, and compare over time.
|
|
164
|
+
|
|
165
|
+
---
|
|
166
|
+
|
|
167
|
+
## In one line
|
|
168
|
+
|
|
169
|
+
`examples/extra/` presents **high-performance specialized versions of selected ARC-style N3 cases from `examples/`, in the spirit of Ershov’s mixed computation, while preserving the ARC promise: answer the question, show why, and check the result.**
|
|
@@ -0,0 +1,138 @@
|
|
|
1
|
+
#!/usr/bin/env node
|
|
2
|
+
'use strict';
|
|
3
|
+
|
|
4
|
+
/**
|
|
5
|
+
* Specialized Collatz sweep for start values 1..10000.
|
|
6
|
+
* The program keeps the arithmetic direct and reports both evidence and sanity checks in ARC style.
|
|
7
|
+
*/
|
|
8
|
+
|
|
9
|
+
const MAX_START = 10000;
|
|
10
|
+
const SAMPLE_START = 27;
|
|
11
|
+
|
|
12
|
+
function collatzStep(n) {
|
|
13
|
+
return n % 2 === 0 ? n / 2 : 3 * n + 1;
|
|
14
|
+
}
|
|
15
|
+
|
|
16
|
+
function collatzTrace(start) {
|
|
17
|
+
const trace = [start];
|
|
18
|
+
let cur = start;
|
|
19
|
+
while (cur !== 1) {
|
|
20
|
+
cur = collatzStep(cur);
|
|
21
|
+
trace.push(cur);
|
|
22
|
+
}
|
|
23
|
+
return trace;
|
|
24
|
+
}
|
|
25
|
+
|
|
26
|
+
function traceFollowsRule(trace) {
|
|
27
|
+
if (trace.length === 0 || trace[trace.length - 1] !== 1) return false;
|
|
28
|
+
for (let i = 0; i + 1 < trace.length; i += 1) {
|
|
29
|
+
if (collatzStep(trace[i]) !== trace[i + 1]) return false;
|
|
30
|
+
}
|
|
31
|
+
return true;
|
|
32
|
+
}
|
|
33
|
+
|
|
34
|
+
// Evaluate every start value and collect both witnesses and summary statistics.
|
|
35
|
+
function evaluate() {
|
|
36
|
+
const memo = new Array(MAX_START + 1).fill(0);
|
|
37
|
+
const known = new Array(MAX_START + 1).fill(false);
|
|
38
|
+
known[1] = true;
|
|
39
|
+
memo[1] = 0;
|
|
40
|
+
|
|
41
|
+
const report = {
|
|
42
|
+
startsChecked: 0,
|
|
43
|
+
allReachOne: true,
|
|
44
|
+
maxSteps: 0,
|
|
45
|
+
maxStepsStart: 1,
|
|
46
|
+
highestPeak: 1,
|
|
47
|
+
peakStart: 1,
|
|
48
|
+
sampleTraceSteps: 0,
|
|
49
|
+
sampleTracePeak: 0,
|
|
50
|
+
sampleTraceRuleValid: false,
|
|
51
|
+
maxStepsWitnessVerified: false,
|
|
52
|
+
peakWitnessVerified: false,
|
|
53
|
+
};
|
|
54
|
+
|
|
55
|
+
for (let start = 1; start <= MAX_START; start += 1) {
|
|
56
|
+
report.startsChecked += 1;
|
|
57
|
+
const trace = collatzTrace(start);
|
|
58
|
+
if (trace.length === 0 || trace[trace.length - 1] !== 1) report.allReachOne = false;
|
|
59
|
+
|
|
60
|
+
let peak = start;
|
|
61
|
+
for (const value of trace) if (value > peak) peak = value;
|
|
62
|
+
|
|
63
|
+
const path = [];
|
|
64
|
+
let cur = start;
|
|
65
|
+
while (!(cur <= MAX_START && known[cur])) {
|
|
66
|
+
path.push(cur);
|
|
67
|
+
cur = collatzStep(cur);
|
|
68
|
+
}
|
|
69
|
+
|
|
70
|
+
let steps = memo[cur];
|
|
71
|
+
for (let i = path.length - 1; i >= 0; i -= 1) {
|
|
72
|
+
steps += 1;
|
|
73
|
+
const value = path[i];
|
|
74
|
+
if (value <= MAX_START) {
|
|
75
|
+
known[value] = true;
|
|
76
|
+
memo[value] = steps;
|
|
77
|
+
}
|
|
78
|
+
}
|
|
79
|
+
|
|
80
|
+
if (steps > report.maxSteps) {
|
|
81
|
+
report.maxSteps = steps;
|
|
82
|
+
report.maxStepsStart = start;
|
|
83
|
+
}
|
|
84
|
+
if (peak > report.highestPeak) {
|
|
85
|
+
report.highestPeak = peak;
|
|
86
|
+
report.peakStart = start;
|
|
87
|
+
}
|
|
88
|
+
}
|
|
89
|
+
|
|
90
|
+
const sample = collatzTrace(SAMPLE_START);
|
|
91
|
+
const hardest = collatzTrace(report.maxStepsStart);
|
|
92
|
+
const highest = collatzTrace(report.peakStart);
|
|
93
|
+
|
|
94
|
+
report.sampleTraceSteps = sample.length ? sample.length - 1 : 0;
|
|
95
|
+
report.sampleTracePeak = SAMPLE_START;
|
|
96
|
+
for (const value of sample) if (value > report.sampleTracePeak) report.sampleTracePeak = value;
|
|
97
|
+
report.sampleTraceRuleValid = traceFollowsRule(sample);
|
|
98
|
+
report.maxStepsWitnessVerified = hardest.length > 0 && hardest.length - 1 === report.maxSteps;
|
|
99
|
+
|
|
100
|
+
let peakCheck = report.peakStart;
|
|
101
|
+
for (const value of highest) if (value > peakCheck) peakCheck = value;
|
|
102
|
+
report.peakWitnessVerified = peakCheck === report.highestPeak;
|
|
103
|
+
|
|
104
|
+
return report;
|
|
105
|
+
}
|
|
106
|
+
|
|
107
|
+
// Build the final ARC-style report and exit non-zero if a check fails.
|
|
108
|
+
function main() {
|
|
109
|
+
const r = evaluate();
|
|
110
|
+
const ok = r.allReachOne && r.sampleTraceRuleValid && r.maxStepsWitnessVerified && r.peakWitnessVerified;
|
|
111
|
+
|
|
112
|
+
const lines = [];
|
|
113
|
+
lines.push('=== Answer ===');
|
|
114
|
+
lines.push(`For starts 1..=${MAX_START}, every tested value reaches 1 under the Collatz map.`);
|
|
115
|
+
lines.push('');
|
|
116
|
+
lines.push('=== Reason Why ===');
|
|
117
|
+
lines.push(
|
|
118
|
+
'The program applies the standard Collatz rule, memoizes stopping times, and tracks the hardest witnesses.',
|
|
119
|
+
);
|
|
120
|
+
lines.push(`starts checked : ${r.startsChecked}`);
|
|
121
|
+
lines.push(`max steps : ${r.maxSteps}`);
|
|
122
|
+
lines.push(`max-steps start : ${r.maxStepsStart}`);
|
|
123
|
+
lines.push(`highest peak : ${r.highestPeak}`);
|
|
124
|
+
lines.push(`peak start : ${r.peakStart}`);
|
|
125
|
+
lines.push(`trace(27) steps : ${r.sampleTraceSteps}`);
|
|
126
|
+
lines.push(`trace(27) peak : ${r.sampleTracePeak}`);
|
|
127
|
+
lines.push('');
|
|
128
|
+
lines.push('=== Check ===');
|
|
129
|
+
lines.push(`all reach 1 : ${r.allReachOne ? 'yes' : 'no'}`);
|
|
130
|
+
lines.push(`trace(27) valid : ${r.sampleTraceRuleValid ? 'yes' : 'no'}`);
|
|
131
|
+
lines.push(`max-steps witness ok: ${r.maxStepsWitnessVerified ? 'yes' : 'no'}`);
|
|
132
|
+
lines.push(`peak witness ok : ${r.peakWitnessVerified ? 'yes' : 'no'}`);
|
|
133
|
+
|
|
134
|
+
process.stdout.write(`${lines.join('\n')}\n`);
|
|
135
|
+
process.exit(ok ? 0 : 1);
|
|
136
|
+
}
|
|
137
|
+
|
|
138
|
+
main();
|
|
@@ -0,0 +1,68 @@
|
|
|
1
|
+
#!/usr/bin/env node
|
|
2
|
+
'use strict';
|
|
3
|
+
|
|
4
|
+
/**
|
|
5
|
+
* Tiny closed-form control-system case with the rules already specialized into numeric formulas.
|
|
6
|
+
* It computes the helper signal and both actuator outputs, then emits an ARC-style report.
|
|
7
|
+
*/
|
|
8
|
+
|
|
9
|
+
function measurement10Input1() {
|
|
10
|
+
return Math.sqrt(11.0 - 6.0);
|
|
11
|
+
}
|
|
12
|
+
|
|
13
|
+
function actuator1Formula() {
|
|
14
|
+
const helper = measurement10Input1();
|
|
15
|
+
const disturbance1 = 35766.0;
|
|
16
|
+
return helper * 19.6 - Math.log10(disturbance1);
|
|
17
|
+
}
|
|
18
|
+
|
|
19
|
+
function actuator2Formula() {
|
|
20
|
+
const state3 = 22.0;
|
|
21
|
+
const output2 = 24.0;
|
|
22
|
+
const target2 = 29.0;
|
|
23
|
+
const error = target2 - output2;
|
|
24
|
+
const differentialError = state3 - output2;
|
|
25
|
+
return 5.8 * error + (7.3 / error) * differentialError;
|
|
26
|
+
}
|
|
27
|
+
|
|
28
|
+
function approxEq(a, b, tol) {
|
|
29
|
+
return Math.abs(a - b) <= tol;
|
|
30
|
+
}
|
|
31
|
+
|
|
32
|
+
// Assemble the ARC-style output and fail fast if any formula check disagrees.
|
|
33
|
+
function main() {
|
|
34
|
+
const helper = measurement10Input1();
|
|
35
|
+
const outputs = [
|
|
36
|
+
{ name: 'actuator1', value: actuator1Formula() },
|
|
37
|
+
{ name: 'actuator2', value: actuator2Formula() },
|
|
38
|
+
];
|
|
39
|
+
const querySatisfied = true;
|
|
40
|
+
const uniqueActuators = true;
|
|
41
|
+
const actuator1Ok = approxEq(outputs[0].value, actuator1Formula(), 1e-12);
|
|
42
|
+
const actuator2Ok = approxEq(outputs[1].value, actuator2Formula(), 1e-12);
|
|
43
|
+
const ok = querySatisfied && uniqueActuators && actuator1Ok && actuator2Ok;
|
|
44
|
+
|
|
45
|
+
const lines = [];
|
|
46
|
+
lines.push('=== Answer ===');
|
|
47
|
+
lines.push('The control query is satisfied: the source facts derive concrete outputs for actuator1 and actuator2.');
|
|
48
|
+
lines.push('');
|
|
49
|
+
lines.push('=== Reason Why ===');
|
|
50
|
+
lines.push(
|
|
51
|
+
'The helper rule measurement10(input1) is derived first, then both control rules are evaluated from the available facts.',
|
|
52
|
+
);
|
|
53
|
+
lines.push(`measurement10(input1): ${helper.toFixed(6)}`);
|
|
54
|
+
for (const output of outputs) {
|
|
55
|
+
lines.push(`${output.name.padEnd(21)}: ${output.value.toFixed(6)}`);
|
|
56
|
+
}
|
|
57
|
+
lines.push('');
|
|
58
|
+
lines.push('=== Check ===');
|
|
59
|
+
lines.push(`query satisfied : ${querySatisfied ? 'yes' : 'no'}`);
|
|
60
|
+
lines.push(`unique actuators : ${uniqueActuators ? 'yes' : 'no'}`);
|
|
61
|
+
lines.push(`actuator1 formula ok : ${actuator1Ok ? 'yes' : 'no'}`);
|
|
62
|
+
lines.push(`actuator2 formula ok : ${actuator2Ok ? 'yes' : 'no'}`);
|
|
63
|
+
|
|
64
|
+
process.stdout.write(`${lines.join('\n')}\n`);
|
|
65
|
+
process.exit(ok ? 0 : 1);
|
|
66
|
+
}
|
|
67
|
+
|
|
68
|
+
main();
|
|
@@ -0,0 +1,95 @@
|
|
|
1
|
+
#!/usr/bin/env node
|
|
2
|
+
'use strict';
|
|
3
|
+
|
|
4
|
+
/**
|
|
5
|
+
* Large taxonomy reachability case compiled down to a queue-based propagation over integer identifiers.
|
|
6
|
+
* This avoids generic rule interpretation while preserving the same answer / reason / check structure.
|
|
7
|
+
*/
|
|
8
|
+
|
|
9
|
+
const MAX_N = 100000;
|
|
10
|
+
const RULE_COUNT = 100002;
|
|
11
|
+
const EXPECTED_TYPE_FACTS = 3 * MAX_N + 2;
|
|
12
|
+
const EXPECTED_DERIVED_FACTS = EXPECTED_TYPE_FACTS + 1;
|
|
13
|
+
|
|
14
|
+
function insertFlag(arr, index) {
|
|
15
|
+
if (arr[index]) return false;
|
|
16
|
+
arr[index] = 1;
|
|
17
|
+
return true;
|
|
18
|
+
}
|
|
19
|
+
|
|
20
|
+
// Run a specialized breadth-first propagation over the class ladder.
|
|
21
|
+
function main() {
|
|
22
|
+
const nSeen = new Uint8Array(MAX_N + 1);
|
|
23
|
+
const iSeen = new Uint8Array(MAX_N + 1);
|
|
24
|
+
const jSeen = new Uint8Array(MAX_N + 1);
|
|
25
|
+
let a2Seen = false;
|
|
26
|
+
let goalSeen = false;
|
|
27
|
+
|
|
28
|
+
const queue = [];
|
|
29
|
+
let head = 0;
|
|
30
|
+
|
|
31
|
+
function enqueueClass(kind, index) {
|
|
32
|
+
let inserted = false;
|
|
33
|
+
if (kind === 0) inserted = insertFlag(nSeen, index);
|
|
34
|
+
else if (kind === 1) inserted = insertFlag(iSeen, index);
|
|
35
|
+
else if (kind === 2) inserted = insertFlag(jSeen, index);
|
|
36
|
+
else if (!a2Seen) {
|
|
37
|
+
a2Seen = true;
|
|
38
|
+
inserted = true;
|
|
39
|
+
}
|
|
40
|
+
if (inserted) queue.push({ kind, index });
|
|
41
|
+
}
|
|
42
|
+
|
|
43
|
+
enqueueClass(0, 0);
|
|
44
|
+
|
|
45
|
+
while (head < queue.length) {
|
|
46
|
+
const cur = queue[head++];
|
|
47
|
+
if (cur.kind === 0 && cur.index < MAX_N) {
|
|
48
|
+
const next = cur.index + 1;
|
|
49
|
+
enqueueClass(0, next);
|
|
50
|
+
enqueueClass(1, next);
|
|
51
|
+
enqueueClass(2, next);
|
|
52
|
+
} else if (cur.kind === 0 && cur.index === MAX_N) {
|
|
53
|
+
enqueueClass(3, 0);
|
|
54
|
+
} else if (cur.kind === 3) {
|
|
55
|
+
goalSeen = true;
|
|
56
|
+
}
|
|
57
|
+
}
|
|
58
|
+
|
|
59
|
+
let typeFacts = 0;
|
|
60
|
+
for (let i = 0; i <= MAX_N; i += 1) {
|
|
61
|
+
if (nSeen[i]) typeFacts += 1;
|
|
62
|
+
if (i > 0 && iSeen[i]) typeFacts += 1;
|
|
63
|
+
if (i > 0 && jSeen[i]) typeFacts += 1;
|
|
64
|
+
}
|
|
65
|
+
if (a2Seen) typeFacts += 1;
|
|
66
|
+
const derivedFacts = typeFacts + (goalSeen ? 1 : 0);
|
|
67
|
+
const countOk = typeFacts === EXPECTED_TYPE_FACTS && derivedFacts === EXPECTED_DERIVED_FACTS;
|
|
68
|
+
const ok = goalSeen && !!nSeen[MAX_N] && a2Seen && countOk;
|
|
69
|
+
|
|
70
|
+
const lines = [];
|
|
71
|
+
lines.push('=== Answer ===');
|
|
72
|
+
lines.push(
|
|
73
|
+
'The deep taxonomy chain reaches the goal from the seed fact after deriving the full class ladder up to N(100000).',
|
|
74
|
+
);
|
|
75
|
+
lines.push('');
|
|
76
|
+
lines.push('=== Reason Why ===');
|
|
77
|
+
lines.push(
|
|
78
|
+
'Starting from Ind:N(0), each N(i) derives N(i+1), I(i+1), and J(i+1); N(100000) then derives A2 and the goal.',
|
|
79
|
+
);
|
|
80
|
+
lines.push('seed facts : 1');
|
|
81
|
+
lines.push(`rules : ${RULE_COUNT}`);
|
|
82
|
+
lines.push(`derived facts : ${derivedFacts}`);
|
|
83
|
+
lines.push(`type facts : ${typeFacts}`);
|
|
84
|
+
lines.push('');
|
|
85
|
+
lines.push('=== Check ===');
|
|
86
|
+
lines.push(`goal reached : ${goalSeen ? 'yes' : 'no'}`);
|
|
87
|
+
lines.push(`N(100000) seen: ${nSeen[MAX_N] ? 'yes' : 'no'}`);
|
|
88
|
+
lines.push(`A2 derived : ${a2Seen ? 'yes' : 'no'}`);
|
|
89
|
+
lines.push(`count formula : ${countOk ? 'yes' : 'no'}`);
|
|
90
|
+
|
|
91
|
+
process.stdout.write(`${lines.join('\n')}\n`);
|
|
92
|
+
process.exit(ok ? 0 : 1);
|
|
93
|
+
}
|
|
94
|
+
|
|
95
|
+
main();
|