diskeyval 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,31 @@
name: Node.js Test Runner

on:
  pull_request:
  push:
    branches:
      - main
      - master

jobs:
  build:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [22.x, 24.x]

    steps:
      - uses: actions/checkout@v4

      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: npm

      - run: npm ci
      - run: npm run build --if-present
      - run: npm test
        env:
          CI: true
package/LICENSE ADDED
@@ -0,0 +1,7 @@
Copyright 2022 Mark Wylde

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
package/PLAN.md ADDED
@@ -0,0 +1,169 @@
# diskeyval Implementation Plan

## Goal
Build a fast distributed key/value store with majority-based semantics:

- `set(key, value)` resolves after majority commit.
- `get(key)` resolves from majority-committed state.
- Node-to-node trust is established with CA-signed mTLS certificates.
- Raft logic is isolated into its own module so it can later be extracted as a standalone package.
- The test surface is intentionally larger than the implementation surface.

## Testing-First Policy

- More tests than implementation code until v1 semantics are stable.
- New behavior must land with tests in `unit` plus `sim` or `e2e`.
- Every production bug gets a regression test before the fix merges.
- CI blocks merges when required suites fail.

Target ratios (early project):

- Unit test count >= 2x exported Raft/API methods.
- Total test scenarios >= 90 before the first feature-complete milestone.
- Safety invariants must be asserted in every simulation/e2e run.

## Architecture

### Module A: `raft-core` (isolated consensus module)
Responsibilities:

- Leader election (follower/candidate/leader roles)
- Heartbeats and election timeouts
- Log replication (`AppendEntries`)
- Vote requests (`RequestVote`)
- Commit index advancement after majority replication
- Applying committed entries through a state machine callback
- Read-path support for linearizable reads (leader read barrier / read index)

Public API target:

- `start()`, `stop()`
- `propose(command): Promise<CommitResult>`
- `readBarrier(): Promise<void>`
- events: `leader`, `commit`, `role-change`
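The API target above can be sketched as a TypeScript surface. This is an illustration of the plan, not the package's published typings; in particular, the fields of `CommitResult` (term plus log index) are an assumption.

```typescript
// Hypothetical typing of the raft-core public API sketched above.
// CommitResult's shape is an assumption, not a published type.
type CommitResult = { term: number; index: number };

interface RaftCore {
  start(): Promise<void>;
  stop(): Promise<void>;
  propose(command: unknown): Promise<CommitResult>;
  readBarrier(): Promise<void>;
}

// Minimal in-memory stand-in showing how a caller drives the surface.
function createStubRaft(): RaftCore {
  let nextIndex = 0;
  return {
    async start() {},
    async stop() {},
    async propose(_command) {
      // A real implementation resolves only after majority commit.
      nextIndex += 1;
      return { term: 1, index: nextIndex };
    },
    async readBarrier() {},
  };
}

const raft = createStubRaft();
await raft.start();
const committed = await raft.propose({ op: 'set', key: 'a', value: 1 });
```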
44
+
45
+ Non-goals for v1:
46
+
47
+ - Dynamic cluster reconfiguration
48
+ - Snapshot streaming
49
+ - Disk-backed WAL compaction
50
+
51
+ ### Module B: `diskeyval` (store + user API)
52
+ Responsibilities:
53
+
54
+ - Public API (`start`, `set`, `get`, `end`)
55
+ - In-memory materialized map (`state`)
56
+ - Eventing (`change`, `leader`)
57
+ - Command encoding/decoding for the Raft state machine
58
+ - Routing client writes to leader (or forwarding)
59
+
60
+ ### Module C: transport/security
61
+ Responsibilities:
62
+
63
+ - mTLS transport between nodes
64
+ - Certificate validation against cluster CA
65
+ - Peer identity checks (`nodeId` matches certificate identity)
66
+ - RPC framing and message serialization
67
+
68
+ ## Delivery Phases
69
+
70
+ ## Phase 0: Test Harness First
71
+ - Establish suite layout: `test/unit`, `test/sim`, `test/e2e`, `test/soak`.
72
+ - Define initial scenario matrix and acceptance gates.
73
+ - Build deterministic simulation harness interfaces before protocol implementation.
74
+
75
+ Exit criteria:
76
+ - Test runners exist for each suite and scenario inventory is committed.
77
+
78
+ ## Phase 1: Local Raft Core Skeleton
79
+ - Create `lib/raft/` module boundaries and types.
80
+ - Implement role machine + timers + term transitions.
81
+ - Add message handlers for `RequestVote` and empty `AppendEntries`.
82
+ - Unit test election safety basics and timer behavior.
83
+
84
+ Exit criteria:
85
+ - Leader election stabilizes in a 3-node in-memory simulated cluster.
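The timer behavior this phase tests hinges on randomized election timeouts, which keep simultaneous candidacies (split votes) rare. A minimal sketch, with an illustrative base value rather than the package's actual default:

```typescript
// Pick an election timeout uniformly in [baseMs, 2 * baseMs).
// Injecting the rng keeps the unit tests deterministic.
function nextElectionTimeout(baseMs = 150, rng: () => number = Math.random): number {
  return baseMs + Math.floor(rng() * baseMs);
}
```

Because followers time out at different moments, usually one becomes a candidate and wins before the others' timers fire.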

### Phase 2: Replicated Log + Majority Commit
- Add log entries and replication indexes (`nextIndex`, `matchIndex`).
- Implement the majority commit rule.
- Apply committed entries in order to the state machine callback.
- Wire `diskeyval.set` to `raft.propose`.

Exit criteria:
- `set` resolves only after majority commit in a 3-node simulation.
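The majority commit rule above can be sketched as a pure function over the replication indexes. This is an illustration, not the module's code; a real implementation must additionally restrict commitment to entries from the leader's current term, per Raft.

```typescript
// Highest log index replicated on a majority of the cluster.
function majorityCommitIndex(matchIndex: number[], leaderLastIndex: number): number {
  // The leader counts itself through its own last log index.
  const indexes = [...matchIndex, leaderLastIndex].sort((a, b) => a - b);
  // In the ascending sort of n indexes, the element at floor((n - 1) / 2)
  // is present on at least floor(n / 2) + 1 nodes -- a majority.
  return indexes[Math.floor((indexes.length - 1) / 2)];
}
```

For a 3-node cluster where the leader is at index 5 and its peers are at 5 and 3, the commit index is 5: two of three nodes hold entry 5.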

### Phase 3: Read Correctness
- Implement linearizable reads via `readBarrier()` on the leader.
- Wire `diskeyval.get` through the read barrier.
- Add tests for stale-leader prevention.

Exit criteria:
- `get` never returns uncommitted or stale values after leader changes.
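Wiring `get` through the barrier might look like the following sketch. The `isLeader`/`readBarrier` names follow the plan's API target, but the reject-and-redirect behavior for non-leaders is an assumption:

```typescript
type ReadableRaft = { isLeader(): boolean; readBarrier(): Promise<void> };

// Serve a read only after confirming leadership with a quorum round,
// so a deposed leader can never return stale state.
async function linearizableGet<T>(
  raft: ReadableRaft,
  state: Map<string, T>,
  key: string,
): Promise<T | undefined> {
  if (!raft.isLeader()) {
    throw new Error('not leader; redirect the client to the current leader');
  }
  await raft.readBarrier(); // completes only if this node still holds quorum
  return state.get(key); // state holds only majority-committed entries
}
```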

### Phase 4: Real Transport + mTLS
- Replace the in-memory transport with a socket RPC transport.
- Add the mTLS handshake and cert validation.
- Enforce the `nodeId`/certificate identity mapping.

Exit criteria:
- A multi-process local cluster can elect a leader and commit writes over mTLS.
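The `nodeId`/certificate identity mapping can be sketched as a pure check over the fields Node's TLS layer exposes on a peer certificate (the subject CN plus the comma-separated `subjectaltname` string). The function name is illustrative:

```typescript
// Accept a peer only if its claimed nodeId matches the certificate's
// CN or one of its DNS SANs. Node's tls module exposes subjectaltname
// as a string like "DNS:node-1, DNS:node-1.cluster".
function peerIdentityMatches(
  claimedNodeId: string,
  cert: { subject?: { CN?: string }; subjectaltname?: string },
): boolean {
  if (cert.subject?.CN === claimedNodeId) return true;
  const sans = (cert.subjectaltname ?? '')
    .split(',')
    .map((entry) => entry.trim())
    .filter((entry) => entry.startsWith('DNS:'))
    .map((entry) => entry.slice('DNS:'.length));
  return sans.includes(claimedNodeId);
}
```

A transport would run this check after the TLS handshake (which already verifies the chain against the cluster CA) and drop the connection on a mismatch.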

### Phase 5: Reliability + Performance
- Add retry/backoff for replication RPCs.
- Add batching/pipelining for append entries.
- Add metrics hooks (latency, commit lag, election count).
- Start WAL/snapshot design for restart durability.

Exit criteria:
- Stable behavior under node restarts and network jitter in integration tests.

## Testing Strategy

- Unit tests for term/vote/commit invariants and edge cases.
- Simulation tests for partitions, recoveries, delays, drops, and reorderings.
- Integration/e2e tests for mTLS, multi-process cluster behavior, and crash/restart.
- Property tests for log matching and monotonic commit index.
- Soak/chaos tests for long-running stability and latency/lag budgets.

## Merge Gates

- Protocol code change: requires `test:unit` and `test:sim`.
- Transport/security change: requires `test:unit` and `test:e2e`.
- Release candidate: requires `test:all` plus a soak run report.

## Key Invariants (must always hold)

- At most one leader per term.
- Committed entries never roll back.
- A node applies entries in log order only.
- `set` success implies majority persistence in memory/log.
- `get` is served from majority-committed state.

## Milestone Sequence

1. Raft core skeleton in-process simulation
2. Majority commit + `set` semantics
3. Linearizable `get`
4. mTLS transport
5. Performance passes and durability

## Status

- Phase 0: completed
- Phase 1: completed
- Phase 2: completed
- Phase 3: completed
- Phase 4: completed
- Phase 5: completed

Verification:

- Real test suites are executable (no placeholder todo tests).
- Unit + simulation + mTLS e2e + soak pass in the CI command path (`npm test`).

## Notes

- Keep `raft-core` free of app/store concerns so it can become its own package.
- Prefer small, composable interfaces between raft, transport, and store.
- Start in-memory for speed, then layer on network/durability.
package/README.md ADDED
@@ -0,0 +1,119 @@
# diskeyval

## Installation
```bash
npm install --save diskeyval
```

## Overview
`diskeyval` is a distributed in-memory key/value store with Raft consensus.

- `set(key, value)` resolves only after majority commit.
- `get(key)` resolves from majority-committed state.
- Dynamic cluster reconfiguration is implemented via `reconfigure(peers)`.
- Inter-node communication uses mTLS with CA-signed certs.
- The implementation style is functional/context-based (no classes or interfaces).

## API Example
```typescript
import diskeyval from 'diskeyval';

const node1 = diskeyval({
  nodeId: 'node-1',
  host: '127.0.0.1',
  port: 8050,
  peers: [
    { nodeId: 'node-2', host: '127.0.0.1', port: 8051 },
    { nodeId: 'node-3', host: '127.0.0.1', port: 8052 }
  ],
  tls: {
    cert: process.env.NODE1_CERT_PEM!,
    key: process.env.NODE1_KEY_PEM!,
    ca: process.env.CLUSTER_CA_PEM!
  }
});

const node2 = diskeyval({
  nodeId: 'node-2',
  host: '127.0.0.1',
  port: 8051,
  peers: [
    { nodeId: 'node-1', host: '127.0.0.1', port: 8050 },
    { nodeId: 'node-3', host: '127.0.0.1', port: 8052 }
  ],
  tls: {
    cert: process.env.NODE2_CERT_PEM!,
    key: process.env.NODE2_KEY_PEM!,
    ca: process.env.CLUSTER_CA_PEM!
  }
});

await node1.start();
await node2.start();

await node1.set('testkey', 'testvalue1');
const value = await node1.get('testkey');

node1.on('change', ({ key, value }) => {
  console.log('changed', key, value);
});

await node1.end();
await node2.end();
```

## Options
```typescript
type DiskeyvalOptions = {
  nodeId: string;

  // Required for the built-in TLS transport:
  host?: string;
  port?: number;
  peers: string[] | Array<{ nodeId: string; host: string; port: number }>;
  tls?: {
    cert: string;
    key: string;
    ca: string;
  };

  // Optional custom transport (used by simulation/in-memory tests):
  transport?: RaftTransport<SetCommand>;
  persistence?: {
    dir: string;
    compactEvery?: number;
  };

  electionTimeoutMs?: number;
  heartbeatMs?: number;
  proposalTimeoutMs?: number;
  rpcTimeoutMs?: number;
};
```

## Methods
- `start(): Promise<void>`
- `set(key: string, value: unknown): Promise<void>`
- `get<T = unknown>(key: string): Promise<T | undefined>`
- `reconfigure(peers: ClusterPeer[]): Promise<void>`
- `getPeers(): ClusterPeer[]`
- `end(): Promise<void>`
- `isLeader(): boolean`
- `leaderId(): string | null`
- `getMetrics(): RaftMetrics`

## Dynamic Membership
- Reconfiguration is consensus-committed through the Raft log.
- Nodes apply committed membership updates at the same log index as other commands.
- New members catch up from the leader after being added.

## Events
- `change` with payload `{ key: string; value: unknown }`
- `leader` with payload `{ nodeId: string | null }`

## Guarantees
- Majority-acknowledged writes.
- Majority-consistent reads.
- A single committed global write order through Raft.
- mTLS-authenticated inter-node RPC with cert identity checks (CN/SAN).
- File-backed durability via WAL + periodic snapshot compaction when `persistence` is enabled.
package/TESTING.md ADDED
@@ -0,0 +1,94 @@
# Testing Strategy

This project is intentionally test-heavy. The target is to maintain more test code and scenarios than implementation code during the early phases.

## Principles

- Test first for every consensus behavior.
- Prefer deterministic simulation over flaky timing-only checks.
- Every bug gets a regression test before the fix is merged.
- No protocol change without unit + simulation + e2e coverage.

## Suite Layers

1. Unit tests (`test/unit`)
   - Pure protocol/state-machine behavior.
   - No real sockets.
   - Deterministic clocks/network fakes.

2. Simulation tests (`test/sim`)
   - Multi-node in-memory cluster.
   - Partition, drop, delay, reordering, and clock-skew scenarios.

3. End-to-end tests (`test/e2e`)
   - Real processes, real sockets, real mTLS certs.
   - Crash/restart and leadership churn.

4. Soak/chaos tests (`test/soak`)
   - Long-running stability and throughput checks.
   - Randomized failures and recoveries.
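The deterministic clock the unit suite relies on can be sketched as a virtual timer queue: timers fire only when the test advances virtual time, so election and heartbeat behavior replays identically on every run. The names here are illustrative, not the harness's actual API:

```typescript
// Virtual clock: setTimeout registers a timer; advance() fires every
// timer whose deadline has passed, in deadline order.
function createFakeClock() {
  let now = 0;
  const timers: Array<{ at: number; fn: () => void }> = [];
  return {
    now: () => now,
    setTimeout(fn: () => void, ms: number): void {
      timers.push({ at: now + ms, fn });
    },
    advance(ms: number): void {
      now += ms;
      timers.sort((a, b) => a.at - b.at);
      while (timers.length > 0 && timers[0].at <= now) {
        timers.shift()!.fn();
      }
    },
  };
}
```

A test can then assert, for example, that a follower only becomes a candidate once its election timeout elapses, without real waiting or flaky sleeps.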

## Coverage Targets

- Raft invariants: 100% scenario coverage for leader election, log matching, and commit monotonicity.
- API semantics: 100% coverage for majority-commit `set` and linearizable `get` behavior.
- Transport/auth: certificate validation, identity binding, and rejection cases.

## Scenario Backlog (High Priority)

### Election and leadership
- Single leader elected in a stable 3-node cluster.
- No dual leaders in the same term.
- Follower timeout causes candidacy.
- Split vote retries with term bump.
- Outdated-term candidate is rejected.
- Leader steps down on higher-term append.

### Log replication and commit
- Leader appends and replicates entry to majority.
- Entry is committed only after majority.
- Follower log conflict is truncated and corrected.
- Commit index advances monotonically.
- Applied index never exceeds commit index.
- Old leader cannot commit after losing quorum.

### Read correctness
- Leader read barrier returns committed value.
- Stale leader read is rejected or redirected.
- Read after write is linearizable.
- Read during leadership change preserves linearizability.

### Failure handling
- Minority partition cannot commit writes.
- Majority partition continues serving writes.
- Rejoined node catches up from log.
- Node crash and restart rejoin behavior.
- Network delay and reordering resilience.

### Security and transport
- Node with unknown CA is rejected.
- Node with invalid cert signature is rejected.
- Node ID mismatch to cert identity is rejected.
- Expired cert is rejected.
- Valid mTLS peers can form cluster.

### API behavior
- `set` resolves only after majority commit.
- `set` times out cleanly when quorum unavailable.
- `get` returns majority-committed state.
- `change` event emits once per committed update.
- `leader` event emits on leadership transitions.

## Merge Gates

A change should not merge unless:

- Relevant unit tests are present and passing.
- At least one simulation or e2e case covers the behavior.
- Regression tests exist for any fixed defect.

## CI Plan

- PR: `unit + sim` mandatory.
- Main branch: `unit + sim + e2e` mandatory.
- Nightly: `soak` and extended chaos scenarios.