@blinklabs/dingo 0.21.0 → 0.23.0
- package/AGENTS.md +115 -0
- package/ARCHITECTURE.md +839 -0
- package/README.md +431 -44
- package/RELEASE_NOTES.md +91 -0
- package/{devnet.sh → devmode.sh} +5 -5
- package/dingo.yaml.example +107 -2
- package/docker-compose.yml +51 -0
- package/package.json +1 -1
package/ARCHITECTURE.md
ADDED
@@ -0,0 +1,839 @@
# Architecture

Dingo is a high-performance Cardano blockchain node implementation in Go. This document describes its architecture, core components, and design patterns.

## Table of Contents

- [Overview](#overview)
- [Directory Structure](#directory-structure)
- [Core Node Structure](#core-node-structure)
- [Event-Driven Communication](#event-driven-communication)
- [Storage Architecture](#storage-architecture)
- [Blockchain State Management](#blockchain-state-management)
- [Chain Management](#chain-management)
- [Network and Protocol Handling](#network-and-protocol-handling)
- [Peer Governance](#peer-governance)
- [Transaction Mempool](#transaction-mempool)
- [Block Production](#block-production)
- [Mithril Bootstrap](#mithril-bootstrap)
- [API Servers](#api-servers)
- [Design Patterns](#design-patterns)
- [Threading and Concurrency](#threading-and-concurrency)
- [Configuration](#configuration)
- [Stake Snapshots](#stake-snapshots)

## Overview

Dingo's architecture is built on several key principles:

1. Modular component design using dependency injection and composition
2. Event-driven communication via EventBus rather than direct coupling
3. Pluggable storage backends with a dual-layer database architecture (blob + metadata)
4. Full Ouroboros protocol support for Node-to-Node and Node-to-Client
5. Multi-peer chain synchronization with Ouroboros Praos chain selection
6. Block production with VRF leader election and stake snapshots
7. Graceful shutdown with phased resource cleanup

## Directory Structure

```
dingo/
├── cmd/dingo/                # CLI entry points
│   ├── main.go               # Cobra CLI setup, plugin management
│   ├── serve.go              # Node server command
│   ├── load.go               # Block loading from ImmutableDB/Mithril
│   ├── mithril.go            # Mithril bootstrap subcommand
│   └── version.go            # Version information
├── chain/                    # Blockchain state and validation
│   ├── chain.go              # Chain struct, block management
│   ├── manager.go            # ChainManager, fork handling
│   ├── event.go              # Chain events (update, fork)
│   ├── iter.go               # ChainIterator for sequential block access
│   └── errors.go             # Chain-specific errors
├── chainselection/           # Multi-peer chain comparison
│   ├── selector.go           # ChainSelector struct
│   ├── comparison.go         # Ouroboros Praos chain selection rules
│   ├── event.go              # Selection events
│   ├── peer_tip.go           # Peer tip tracking
│   └── vrf.go                # VRF verification
├── chainsync/                # Block synchronization protocol state
│   └── chainsync.go          # Multi-client sync state, stall detection
├── connmanager/              # Network connection lifecycle
│   ├── connection_manager.go
│   └── event.go              # Connection events
├── database/                 # Storage abstraction layer
│   ├── database.go           # Database struct, dual-layer design
│   ├── cbor_cache.go         # TieredCborCache implementation
│   ├── cbor_offset.go        # Offset-based CBOR references
│   ├── hot_cache.go          # Hot cache for frequently accessed data
│   ├── block_lru_cache.go    # Block-level LRU cache
│   ├── immutable/            # ImmutableDB chunk reader
│   ├── models/               # Database models
│   ├── types/                # Database types
│   ├── sops/                 # Storage operations
│   └── plugin/               # Storage plugin system
│       ├── plugin.go         # Plugin registry and interfaces
│       ├── blob/             # Blob storage plugins
│       │   ├── badger/       # Badger (default local storage)
│       │   ├── aws/          # AWS S3
│       │   └── gcs/          # Google Cloud Storage
│       └── metadata/         # Metadata plugins
│           ├── sqlite/       # SQLite (default)
│           ├── postgres/     # PostgreSQL
│           └── mysql/        # MySQL
├── event/                    # Event bus for decoupled communication
│   ├── event.go              # EventBus, async delivery
│   ├── epoch.go              # Epoch transition events
│   └── metrics.go            # Event metrics
├── ledger/                   # Ledger state, validation, block production
│   ├── state.go              # LedgerState, UTXO tracking
│   ├── view.go               # Ledger view queries
│   ├── queries.go            # State queries
│   ├── validation.go         # Transaction validation (Phase 1 UTXO rules)
│   ├── verify_header.go      # Block header validation (VRF/KES/OpCert)
│   ├── chainsync.go          # Epoch nonce calculation, rollback handling
│   ├── candidate_nonce.go    # Candidate nonce computation
│   ├── certs.go              # Certificate processing
│   ├── governance.go         # Governance action processing
│   ├── delta.go              # State delta tracking
│   ├── block_event.go        # Block event processing
│   ├── slot_clock.go         # Wall-clock slot timing
│   ├── metrics.go            # Ledger metrics
│   ├── peer_provider.go      # Ledger-based peer discovery
│   ├── era_summary.go        # Era transition handling
│   ├── eras/                 # Era-specific validation rules
│   │   ├── byron.go          # Byron era
│   │   ├── shelley.go        # Shelley era
│   │   ├── allegra.go        # Allegra era
│   │   ├── mary.go           # Mary era
│   │   ├── alonzo.go         # Alonzo era
│   │   ├── babbage.go        # Babbage era
│   │   └── conway.go         # Conway era
│   ├── forging/              # Block production
│   │   ├── forger.go         # BlockForger, slot-based forging loop
│   │   ├── builder.go        # DefaultBlockBuilder, block assembly
│   │   ├── keys.go           # PoolCredentials (VRF/KES/OpCert)
│   │   ├── slot_tracker.go   # Slot battle detection
│   │   ├── events.go         # Forging events
│   │   └── metrics.go        # Forging metrics
│   ├── leader/               # Leader election
│   │   ├── election.go       # Ouroboros Praos leader checks
│   │   └── schedule.go       # Epoch leader schedule computation
│   └── snapshot/             # Stake snapshot management
│       ├── manager.go        # Snapshot manager, event-driven capture
│       ├── calculator.go     # Stake distribution calculation
│       └── rotation.go       # Mark/Set/Go rotation
├── ledgerstate/              # Low-level ledger state import
│   ├── cbor_decode.go        # CBOR decoding for large structures
│   ├── mempack.go            # Memory-packed state representation
│   ├── snapshot.go           # Snapshot parsing
│   ├── import.go             # Ledger state import
│   ├── utxo.go               # UTXO state handling
│   └── certstate.go          # Certificate state handling
├── mempool/                  # Transaction pool
│   ├── mempool.go            # Mempool, validation, capacity
│   └── consumer.go           # Per-consumer transaction tracking
├── ouroboros/                # Ouroboros protocol handlers
│   ├── ouroboros.go          # N2N and N2C protocol management
│   ├── chainsync.go          # Chain synchronization
│   ├── blockfetch.go         # Block fetching
│   ├── txsubmission.go       # TX submission (N2N)
│   ├── localtxsubmission.go  # TX submission (N2C)
│   ├── localtxmonitor.go     # Mempool monitoring
│   ├── localstatequery.go    # Ledger queries
│   └── peersharing.go        # Peer discovery
├── peergov/                  # Peer selection and governance
│   ├── peergov.go            # PeerGovernor
│   ├── churn.go              # Peer rotation
│   ├── quotas.go             # Per-source quotas
│   ├── score.go              # Peer scoring
│   ├── ledger.go             # Ledger-based peer discovery
│   └── event.go              # Peer events
├── topology/                 # Network topology handling
│   └── topology.go           # Topology configuration
├── blockfrost/               # Blockfrost-compatible REST API
│   ├── blockfrost.go         # Server lifecycle
│   ├── adapter.go            # Node state adapter
│   ├── handlers.go           # HTTP handlers
│   ├── pagination.go         # Cursor-based pagination
│   └── types.go              # API response types
├── mesh/                     # Mesh (Rosetta) API
│   ├── mesh.go               # Server lifecycle
│   ├── network.go            # /network/* endpoints
│   ├── account.go            # /account/* endpoints
│   ├── block.go              # /block/* endpoints
│   ├── construction.go       # /construction/* endpoints
│   ├── mempool_api.go        # /mempool/* endpoints
│   ├── operations.go         # Cardano operation mapping
│   └── convert.go            # Type conversion utilities
├── utxorpc/                  # UTxO RPC gRPC server
│   ├── utxorpc.go            # Server setup
│   ├── query.go              # Query service
│   ├── submit.go             # Submit service
│   ├── sync.go               # Sync service
│   └── watch.go              # Watch service
├── bark/                     # Bark archive block storage
│   ├── bark.go               # HTTP server for block archive access
│   ├── archive.go            # Archive blob store interface
│   └── blob.go               # Blob store adapter with security window
├── mithril/                  # Mithril snapshot bootstrap
│   ├── bootstrap.go          # Bootstrap orchestration
│   ├── client.go             # Mithril aggregator client
│   └── download.go           # Snapshot download and extraction
├── keystore/                 # Key management
│   ├── keystore.go           # Key store interface
│   ├── keyfile.go            # Key file parsing
│   ├── keyfile_unix.go       # Unix file permissions
│   ├── keyfile_windows.go    # Windows ACL permissions
│   └── evolution.go          # KES key evolution
├── config/cardano/           # Embedded Cardano network configurations
├── internal/
│   ├── config/               # Configuration parsing
│   ├── integration/          # Integration tests
│   ├── node/                 # Node orchestration (CLI wiring)
│   │   ├── node.go           # Run(), signal handling, metrics server
│   │   └── load.go           # Block loading implementation
│   ├── test/                 # Test utilities
│   │   ├── conformance/      # Amaru conformance tests
│   │   ├── devnet/           # DevNet end-to-end tests
│   │   └── testutil/         # Shared test helpers
│   └── version/              # Version information
├── node.go                   # Node struct definition, Run(), shutdown
├── config.go                 # Configuration management (functional options)
└── tracing.go                # OpenTelemetry tracing
```

## Core Node Structure

The `Node` struct (defined in `node.go`) orchestrates all major components:

```go
type Node struct {
	connManager    *connmanager.ConnectionManager // Network connections
	peerGov        *peergov.PeerGovernor          // Peer selection/governance
	chainsyncState *chainsync.State               // Multi-peer sync state
	chainSelector  *chainselection.ChainSelector  // Chain comparison
	eventBus       *event.EventBus                // Event routing
	mempool        *mempool.Mempool               // Transaction pool
	chainManager   *chain.ChainManager            // Blockchain state
	db             *database.Database             // Storage layer
	ledgerState    *ledger.LedgerState            // UTXO/state tracking
	snapshotMgr    *snapshot.Manager              // Stake snapshot capture
	utxorpc        *utxorpc.Utxorpc               // UTxO RPC server
	bark           *bark.Bark                     // Block archive server
	blockfrostAPI  *blockfrost.Blockfrost         // Blockfrost REST API
	meshAPI        *mesh.Server                   // Mesh (Rosetta) API
	ouroboros      *ouroboros.Ouroboros           // Protocol handlers
	blockForger    *forging.BlockForger           // Block production
	leaderElection *leader.Election               // Slot leader checks
}
```

### Initialization Flow

When `Node.Run()` is called, components are initialized in this order:

```
 1. EventBus creation
 2. Database loading (blob + metadata plugins)
 3. ChainManager initialization
 4. Ouroboros protocol handler creation
 5. LedgerState creation (UTXO tracking, validation)
 6. Bark blob store adapter (if configured)
 7. LedgerState start
 8. Snapshot manager start (captures genesis snapshot)
 9. Mempool setup
10. ChainsyncState (multi-client tracking, stall detection)
11. ChainSelector (Ouroboros Praos chain comparison)
12. ConnectionManager (listeners)
13. Stalled client recycler (background goroutine)
14. PeerGovernor (topology + churn + ledger peers)
15. UTxO RPC server (if port configured)
16. Bark archive server (if port configured)
17. Blockfrost API (if port configured)
18. Mesh API (if port configured)
19. Block forger + leader election (if block producer mode)
20. Wait for shutdown signal
```

### Shutdown Flow

Graceful shutdown proceeds in phases:

```
Phase 1: Stop accepting new work
         Block forger, leader election, chain selector,
         peer governor, snapshot manager, UTxO RPC,
         Bark, Blockfrost API, Mesh API

Phase 2: Drain and close connections
         Mempool, ConnectionManager

Phase 3: Flush state and close database
         LedgerState, Database

Phase 4: Cleanup resources
         Registered shutdown functions, EventBus
```

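The four phases above can be sketched as an ordered list of stop functions. This is an illustrative shape (the names are not the node's actual API); errors are collected rather than aborting so later phases still get a chance to flush and close:

```go
package main

import "fmt"

// stopFunc is a hypothetical per-component stopper for this sketch.
type stopFunc func() error

// shutdown runs each phase's stoppers in order, collecting errors
// without aborting, so Phase 3 still flushes even if Phase 1 fails.
func shutdown(phases [][]stopFunc) []error {
	var errs []error
	for _, phase := range phases {
		for _, stop := range phase {
			if err := stop(); err != nil {
				errs = append(errs, err) // record and keep going
			}
		}
	}
	return errs
}

func main() {
	var order []string
	mk := func(name string) stopFunc {
		return func() error { order = append(order, name); return nil }
	}
	errs := shutdown([][]stopFunc{
		{mk("blockForger"), mk("peerGov")},  // Phase 1: stop accepting new work
		{mk("mempool"), mk("connManager")},  // Phase 2: drain and close connections
		{mk("ledgerState"), mk("database")}, // Phase 3: flush state, close database
		{mk("eventBus")},                    // Phase 4: cleanup resources
	})
	fmt.Println(len(errs), order)
}
```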
## Event-Driven Communication

Components communicate via the `EventBus` (`event/event.go`) rather than direct coupling:

```
Publisher ---publish---> EventBus ---deliver---> Subscribers
                            |
                            | async
                            v
                       Worker Pool
                       (4 workers)
```

### Key Event Types

All event types follow the `subsystem.snake_case_name` convention.

| Event | Source | Purpose |
|-------|--------|---------|
| `chain.update` | ChainManager | Block added to chain |
| `chain.fork_detected` | ChainManager | Fork detected |
| `chainselection.peer_tip_update` | ChainSelector | Peer tip updated |
| `chainselection.chain_switch` | ChainSelector | Active peer changed |
| `chainselection.selection` | ChainSelector | Chain selection made |
| `chainselection.peer_evicted` | ChainSelector | Peer evicted |
| `chainsync.client_added` | ChainsyncState | Client tracking added |
| `chainsync.client_removed` | ChainsyncState | Client tracking removed |
| `chainsync.client_synced` | ChainsyncState | Client caught up |
| `chainsync.client_stalled` | ChainsyncState | Client stall detected |
| `chainsync.fork_detected` | ChainsyncState | Chainsync fork detected |
| `chainsync.client_remove_requested` | Node | Stalled client removal |
| `chainsync.resync` | LedgerState | Chainsync resync request |
| `connmanager.inbound_conn` | ConnManager | Inbound connection |
| `connmanager.conn_closed` | ConnManager | Connection closed |
| `connmanager.connection_recycle_requested` | ConnManager | Connection recycling |
| `mempool.add_tx` | Mempool | Transaction added |
| `mempool.remove_tx` | Mempool | Transaction removed |
| `ledger.block` | LedgerState | Block applied or rolled back |
| `ledger.tx` | LedgerState | Transaction processed |
| `ledger.error` | LedgerState | Ledger error occurred |
| `ledger.blockfetch` | Ouroboros | Block fetch event received |
| `ledger.chainsync` | Ouroboros | Chainsync event received |
| `ledger.pool_restored` | LedgerState | Pool state restored after rollback |
| `epoch.transition` | LedgerState | Epoch boundary crossed |
| `hardfork.transition` | LedgerState | Hard fork transition |
| `block.forged` | BlockForger | Block successfully forged |
| `forging.slot_battle` | SlotTracker | Competing blocks at same slot |
| `peergov.outbound_conn` | PeerGov | Outbound connection initiated |
| `peergov.peer_demoted` | PeerGov | Peer demoted |
| `peergov.peer_promoted` | PeerGov | Peer promoted |
| `peergov.peer_removed` | PeerGov | Peer removed |
| `peergov.peer_added` | PeerGov | Peer added |
| `peergov.peer_churn` | PeerGov | Peer rotation event |
| `peergov.quota_status` | PeerGov | Quota status update |
| `peergov.bootstrap_exited` | PeerGov | Exited bootstrap mode |
| `peergov.bootstrap_recovery` | PeerGov | Bootstrap recovery |

### EventBus Features

- Asynchronous delivery via worker pool (4 workers, 1000-entry queue)
- Buffered channels with timeout protection to prevent blocking
- Prometheus metrics for event delivery tracking and latency

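The publish/worker-pool flow above can be sketched as a minimal Go event bus. Names (`Event`, `EventBus`, `Subscribe`, `Publish`) are illustrative; the real implementation in `event/event.go` additionally carries timeout protection and Prometheus metrics:

```go
package main

import (
	"fmt"
	"sync"
)

// Event is a minimal typed envelope for this sketch.
type Event struct {
	Type string
	Data any
}

// EventBus fans events out to per-type subscribers via a fixed worker
// pool draining a buffered queue, mirroring the diagram above.
type EventBus struct {
	mu    sync.RWMutex
	subs  map[string][]chan Event
	queue chan Event
}

func NewEventBus(workers, queueSize int) *EventBus {
	b := &EventBus{
		subs:  make(map[string][]chan Event),
		queue: make(chan Event, queueSize),
	}
	for i := 0; i < workers; i++ {
		go func() {
			for evt := range b.queue {
				b.mu.RLock()
				for _, ch := range b.subs[evt.Type] {
					ch <- evt // deliver to each subscriber
				}
				b.mu.RUnlock()
			}
		}()
	}
	return b
}

// Subscribe returns a buffered channel receiving events of one type.
func (b *EventBus) Subscribe(eventType string) <-chan Event {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan Event, 16)
	b.subs[eventType] = append(b.subs[eventType], ch)
	return ch
}

// Publish enqueues asynchronously; workers handle delivery.
func (b *EventBus) Publish(evt Event) { b.queue <- evt }

func main() {
	bus := NewEventBus(4, 1000)
	sub := bus.Subscribe("chain.update")
	bus.Publish(Event{Type: "chain.update", Data: "block 42"})
	fmt.Println((<-sub).Data)
}
```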
## Storage Architecture

Dingo uses a dual-layer storage architecture with pluggable backends:

```
                     Database
-------------------------------------------------
| Blob Store              | Metadata Store      |
| (blocks, UTxOs, txs)    | (indexes, state)    |
-------------------------------------------------
| Plugins:                | Plugins:            |
| - Badger (default)      | - SQLite (default)  |
| - AWS S3                | - PostgreSQL        |
| - Google Cloud Storage  | - MySQL             |
-------------------------------------------------
```

### Storage Modes

Dingo supports two storage modes, configured via `storageMode`:

- `core` (default): Minimal storage for chain following and block production.
- `api`: Extended storage with transaction indexes, address lookups, and asset tracking. Required when any API server (Blockfrost, Mesh, UTxO RPC) is enabled.

### Tiered CBOR Cache

Instead of storing full CBOR data redundantly, Dingo uses offset-based references with a tiered cache:

```
CBOR Data Request
        |
        v
Tier 1: Hot Cache (in-memory)
  - UTxO entries: configurable count (HotUtxoEntries)
  - Transaction entries: configurable count + byte limit
  - O(1) access, LRU eviction
        | miss
        v
Tier 2: Block LRU Cache
  - Recently accessed blocks with pre-computed indexes
  - Fast extraction without blob store access
        | miss
        v
Tier 3: Cold Extraction
  - Fetch block from blob store
  - Extract CBOR at stored offset
```

### CborOffset Structure

Each CBOR reference is a fixed 52-byte `CborOffset` struct with magic prefix:

| Field | Size | Purpose |
|-------|------|---------|
| Magic | 4 bytes | "DOFF" prefix to identify offset storage |
| BlockSlot | 8 bytes | Block slot number |
| BlockHash | 32 bytes | Block hash |
| ByteOffset | 4 bytes | Offset within block CBOR |
| ByteLength | 4 bytes | Length of CBOR data |

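Based on the table above, a sketch of encoding such a reference. Field order follows the table, but the byte order (big-endian here) is an assumption for illustration; `database/cbor_offset.go` is authoritative:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// CborOffset mirrors the 52-byte layout in the table above.
type CborOffset struct {
	BlockSlot  uint64
	BlockHash  [32]byte
	ByteOffset uint32
	ByteLength uint32
}

var cborOffsetMagic = [4]byte{'D', 'O', 'F', 'F'}

// Encode packs the struct into its fixed 52-byte form:
// 4 (magic) + 8 (slot) + 32 (hash) + 4 (offset) + 4 (length).
func (c CborOffset) Encode() []byte {
	buf := make([]byte, 0, 52)
	buf = append(buf, cborOffsetMagic[:]...)
	buf = binary.BigEndian.AppendUint64(buf, c.BlockSlot)
	buf = append(buf, c.BlockHash[:]...)
	buf = binary.BigEndian.AppendUint32(buf, c.ByteOffset)
	buf = binary.BigEndian.AppendUint32(buf, c.ByteLength)
	return buf
}

// IsCborOffset reports whether a stored value is an offset reference
// rather than inline CBOR, by checking the "DOFF" magic prefix.
func IsCborOffset(b []byte) bool {
	return len(b) == 52 && string(b[:4]) == "DOFF"
}

func main() {
	enc := CborOffset{BlockSlot: 1234, ByteOffset: 96, ByteLength: 512}.Encode()
	fmt.Println(len(enc), IsCborOffset(enc))
}
```

The fixed size and magic prefix let a reader distinguish an offset reference from inline CBOR without any schema lookup.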
### Plugin System

Plugins are registered via a global registry (`database/plugin/plugin.go`):

```
plugin.SetPluginOption() -> plugin.GetPlugin() -> plugin.Start() -> Use interface
```

Interfaces:
- `BlobStore` - Block/transaction storage operations
- `MetadataStore` - Index and query operations

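The registration flow above can be sketched as a minimal global registry. The interface is cut down to a single `Start` method and the function names are illustrative, not Dingo's actual plugin API:

```go
package main

import (
	"fmt"
	"sync"
)

// BlobStore is reduced to one method for this sketch.
type BlobStore interface {
	Start() error
}

type memBlobStore struct{ started bool }

func (m *memBlobStore) Start() error { m.started = true; return nil }

var (
	regMu    sync.Mutex
	registry = map[string]func() BlobStore{}
)

// RegisterBlobPlugin installs a named factory (typically from a
// plugin package's init function).
func RegisterBlobPlugin(name string, factory func() BlobStore) {
	regMu.Lock()
	defer regMu.Unlock()
	registry[name] = factory
}

// GetBlobPlugin resolves a configured plugin name to a fresh instance.
func GetBlobPlugin(name string) (BlobStore, error) {
	regMu.Lock()
	defer regMu.Unlock()
	factory, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("unknown blob plugin: %q", name)
	}
	return factory(), nil
}

func main() {
	RegisterBlobPlugin("badger", func() BlobStore { return &memBlobStore{} })
	store, err := GetBlobPlugin("badger")
	if err != nil {
		panic(err)
	}
	fmt.Println(store.Start() == nil)
}
```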
### Database Models

Key models in `database/models/`:

| Model | Purpose |
|-------|---------|
| `Block` | Block metadata (slot, hash, height, era) |
| `Transaction` | Transaction records |
| `Utxo` | UTXO set entries |
| `Account` | Stake account registrations and delegations |
| `Pool` | Stake pool registrations |
| `Drep` | DRep registrations |
| `Epoch` | Epoch metadata and nonces |
| `PoolStakeSnapshot` | Per-pool stake at epoch boundary |
| `EpochSummary` | Network-wide aggregates per epoch |
| `BackfillCheckpoint` | Mithril backfill progress tracking |
| `NetworkState` | Network-wide state tracking |
| `GovernanceAction` | Governance proposals |
| `CommitteeMember` | Constitutional committee members |

## Blockchain State Management

The `LedgerState` (`ledger/state.go`) manages UTXO tracking and validation:

```
LedgerState
--------------------------------------------------
| - UTXO tracking and lookup                     |
| - Protocol parameter management                |
| - Certificate processing (pools, stakes, DReps)|
| - Transaction validation (Phase 1: UTXO rules) |
| - Plutus script execution (Phase 2)            |
| - Block header validation (VRF/KES/OpCert)     |
| - Epoch nonce computation                      |
| - Governance action processing                 |
| - State restoration on rollback                |
| - Ledger-based peer discovery                  |
--------------------------------------------------
| Database Worker Pool                           |
| - Async database operations                    |
| - Configurable pool size (default: 5 workers)  |
| - Fire-and-forget or result-waiting            |
--------------------------------------------------
```

### Era-Specific Validation

The `ledger/eras/` package provides era-specific validation rules for each Cardano era (Byron through Conway). Each era implements protocol parameter extraction, fee calculation, and era-specific transaction rules.

### Block Header Validation

`ledger/verify_header.go` performs cryptographic validation of block headers:

- VRF proof verification against the epoch nonce
- KES signature verification with period checks
- Operational certificate chain validation
- Slot leader eligibility checking

### Epoch Nonce Computation

`ledger/chainsync.go` and `ledger/candidate_nonce.go` implement the Ouroboros Praos nonce evolution:

- Evolving nonce: accumulated from each block's VRF output
- Candidate nonce: frozen at the stability window cutoff
- Epoch nonce: derived from the candidate nonce and the previous epoch's last block hash

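The evolving-nonce step above amounts to folding each block's VRF output into a running hash. A minimal sketch, using SHA-256 as a stand-in for the BLAKE2b-based hashing Cardano actually uses, so the example needs only the standard library:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Nonce is a 32-byte running digest for this sketch.
type Nonce [32]byte

// evolveNonce folds one block's VRF output into the running nonce:
// eta' = H(eta || vrfOutput).
func evolveNonce(eta Nonce, vrfOutput []byte) Nonce {
	h := sha256.New()
	h.Write(eta[:])
	h.Write(vrfOutput)
	var out Nonce
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	var eta Nonce // initial nonce
	for _, vrf := range [][]byte{[]byte("block1-vrf"), []byte("block2-vrf")} {
		eta = evolveNonce(eta, vrf)
	}
	fmt.Printf("%x\n", eta[:8])
}
```

The candidate nonce is simply this accumulator frozen at the stability window cutoff; blocks after the cutoff keep evolving the next epoch's accumulator.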
### Ledger View

The `LedgerView` interface provides query access to ledger state:

- UTXO lookups by address or output reference
- Protocol parameter queries
- Stake distribution queries
- Account registration checks

## Chain Management

The `ChainManager` (`chain/manager.go`) manages multiple chains:

```
ChainManager
-------------------------------------------------
| Primary Chain                                 |
|   Persistent chain loaded from database       |
|                                               |
| Fork Chains                                   |
|   Temporary chains for peer synchronization   |
|                                               |
| Block Cache                                   |
|   In-memory cache for quick access            |
|                                               |
| Rollback Support                              |
|   Reverts chain to previous point (up to K    |
|   blocks), emits rollback events, restores    |
|   account/pool/DRep state                     |
-------------------------------------------------
```

### Chain Selection (Ouroboros Praos)

The `ChainSelector` (`chainselection/`) implements Ouroboros Praos rules:

1. Higher block number wins (longer chain)
2. At equal block number, lower slot wins (denser chain)

The selector tracks tips from all connected peers and switches the active chainsync connection when a better chain is found.

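These two rules can be captured in a small comparison function. `praosCompare` is a hypothetical name for illustration; the real rules live in `chainselection/comparison.go`:

```go
package main

import "fmt"

// praosCompare returns >0 if tip A is preferred, <0 if tip B is
// preferred, and 0 for a tie, following the two rules above.
func praosCompare(blockNoA, slotA, blockNoB, slotB uint64) int {
	// Rule 1: higher block number wins (longer chain).
	if blockNoA != blockNoB {
		if blockNoA > blockNoB {
			return 1
		}
		return -1
	}
	// Rule 2: at equal block number, the lower slot wins (denser chain).
	if slotA != slotB {
		if slotA < slotB {
			return 1
		}
		return -1
	}
	return 0
}

func main() {
	fmt.Println(praosCompare(101, 500, 100, 480)) // longer chain wins
	fmt.Println(praosCompare(100, 470, 100, 480)) // denser chain wins
}
```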
## Network and Protocol Handling

### Ouroboros Protocol Stack

The `Ouroboros` struct (`ouroboros/ouroboros.go`) manages all protocol handlers:

```
Ouroboros Protocols
---------------------------------------------
| Node-to-Node (N2N)  | Node-to-Client (N2C)|
|---------------------|---------------------|
| ChainSync           | ChainSync           |
|   Block sync        |   Wallet sync       |
|                     |                     |
| BlockFetch          | LocalTxMonitor      |
|   Block retrieval   |   Mempool queries   |
|                     |                     |
| TxSubmission2       | LocalTxSubmission   |
|   Transaction share |   Transaction submit|
|                     |                     |
| PeerSharing         | LocalStateQuery     |
|   Peer discovery    |   Ledger queries    |
---------------------------------------------
```

### Connection Management

The `ConnectionManager` (`connmanager/connection_manager.go`) handles connection lifecycle:

```
ConnectionManager
-------------------------------------------------
| Inbound Listeners                             |
|   TCP N2N (default: 3001)                     |
|   TCP N2C (configurable)                      |
|   Unix socket N2C                             |
|                                               |
| Outbound Clients                              |
|   Source port selection                       |
|                                               |
| Connection Tracking                           |
|   Per-peer connection state                   |
|   Duplex detection (bidirectional connections)|
|   Stalled connection recycling                |
-------------------------------------------------
```

### Multi-Client Chainsync

The `chainsync.State` tracks multiple concurrent chainsync clients:

- Configurable max client count
- Stall detection with configurable timeout
- Grace period before recycling stalled connections
- Cooldown to prevent rapid reconnection flapping
- Plateau detection: if the local tip stops advancing while peers are ahead, the active chainsync connection is recycled

## Peer Governance

The `PeerGovernor` (`peergov/peergov.go`) manages peer selection and topology:

```
PeerGovernor
-------------------------------------------------
| Peer Targets                                  |
|   Known peers: 150                            |
|   Established peers: 50                       |
|   Active peers: 20                            |
|                                               |
| Per-Source Quotas                             |
|   Topology quota: 3 peers                     |
|   Gossip quota: 12 peers                      |
|   Ledger quota: 5 peers                       |
|                                               |
| Peer Churn                                    |
|   Gossip churn: 5 min interval, 20%           |
|   Public root churn: 30 min interval, 20%     |
|                                               |
| Peer Scoring                                  |
|   Performance-based peer ranking              |
|                                               |
| Ledger Peer Discovery                         |
|   Discovers peers from stake pool relays      |
|   Activated after UseLedgerAfterSlot          |
|                                               |
| Denied List                                   |
|   Prevents reconnection to bad peers          |
|   (30 min timeout)                            |
-------------------------------------------------
```

## Transaction Mempool

The `Mempool` (`mempool/mempool.go`) manages pending transactions:

```
Mempool
-------------------------------------------------
| Transaction Management                        |
|   Validation on add (Phase 1 + Phase 2)       |
|   Capacity limits (configurable)              |
|   Watermark-based eviction and rejection      |
|   Automatic purging on chain updates          |
|                                               |
| Consumer Tracking                             |
|   Per-consumer state for TX distribution      |
|                                               |
| Metrics                                       |
|   Transaction count, total size,              |
|   validation statistics                       |
-------------------------------------------------
```

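The watermark behavior above can be sketched as an admission classifier. The 80%/95% thresholds and all names here are illustrative assumptions, not Dingo's actual values:

```go
package main

import "fmt"

// admission classifies what happens to a new transaction at a given
// mempool fill level: accept outright, accept after evicting lower-value
// entries, or reject.
type admission int

const (
	admit admission = iota
	admitWithEviction
	reject
)

// classify applies hypothetical low (80%) and high (95%) watermarks
// against the configured capacity.
func classify(usedBytes, capacityBytes int) admission {
	lowWatermark := capacityBytes * 80 / 100
	highWatermark := capacityBytes * 95 / 100
	switch {
	case usedBytes < lowWatermark:
		return admit
	case usedBytes < highWatermark:
		return admitWithEviction
	default:
		return reject
	}
}

func main() {
	fmt.Println(classify(10_000, 100_000)) // well under the low watermark
	fmt.Println(classify(90_000, 100_000)) // between the watermarks
	fmt.Println(classify(99_000, 100_000)) // over the high watermark
}
```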
## Block Production

When running as a stake pool operator, Dingo can produce blocks. This involves three subsystems under `ledger/`:

### Leader Election (`ledger/leader/`)

`Election` subscribes to epoch transition events and pre-computes a leader schedule for each epoch. For each slot, it checks whether the pool's VRF output meets the threshold determined by the pool's relative stake (from the "go" snapshot, two epochs old).

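The threshold check follows the standard Praos leader function φ_f(σ) = 1 − (1 − f)^σ, where σ is the pool's relative stake and f is the active slot coefficient (0.05 on mainnet). A float64 sketch with illustrative names; the real check in `ledger/leader/election.go` operates on the raw VRF output bytes:

```go
package main

import (
	"fmt"
	"math"
)

// isSlotLeader reports whether a pool leads a slot: its VRF output,
// normalized into [0, 1), must fall below phi_f(sigma).
func isSlotLeader(vrfFrac, sigma, f float64) bool {
	// phi(sigma) = 1 - (1 - f)^sigma: the pool's per-slot win probability.
	threshold := 1 - math.Pow(1-f, sigma)
	return vrfFrac < threshold
}

func main() {
	// A pool with 1% of stake has a threshold of roughly 0.0005,
	// i.e. it leads about 0.05% of slots.
	fmt.Println(isSlotLeader(0.0001, 0.01, 0.05))
	fmt.Println(isSlotLeader(0.9, 0.01, 0.05))
}
```

Because φ is concave, splitting stake across pools never increases the combined chance of leading a slot, which is what makes the election sybil-resilient.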
### Block Forging (`ledger/forging/`)

`BlockForger` runs a slot-based loop that:

1. Waits for the next slot boundary using the wall-clock slot timer
2. Checks leader eligibility via the `Election`
3. Assembles a block from mempool transactions using `DefaultBlockBuilder`
4. Broadcasts the forged block through the chain manager

The forger tracks slot battles (competing blocks at the same slot) and skips forging when the node is not sufficiently synced, controlled by `forgeSyncToleranceSlots` and `forgeStaleGapThresholdSlots`.

### Pool Credentials (`ledger/forging/keys.go`, `keystore/`)

VRF signing keys, KES signing keys, and operational certificates are loaded from files at startup. The `keystore` package handles platform-specific file permission checks (Unix file modes, Windows ACLs) and KES key evolution.

## Mithril Bootstrap
|
|
653
|
+
|
|
654
|
+
The `mithril/` package enables fast initial sync by downloading and importing a Mithril snapshot rather than syncing from genesis:
|
|
655
|
+
|
|
656
|
+
1. `client.go` queries the Mithril aggregator for the latest certified snapshot
|
|
657
|
+
2. `download.go` downloads and extracts the snapshot archive
|
|
658
|
+
3. `bootstrap.go` orchestrates the import into Dingo's database
|
|
659
|
+
|
|
660
|
+
This is exposed via the `dingo mithril` CLI subcommand and the `dingo load` command.

## API Servers

Dingo provides multiple API interfaces, all optional and gated by port configuration. All require `storageMode: api`.

### Blockfrost API (`blockfrost/`)

A Blockfrost-compatible REST API that provides read access to chain data. Uses an adapter pattern to translate between Dingo's internal state and Blockfrost response types. Supports cursor-based pagination.
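
Cursor-based pagination generally means returning one page of results plus a cursor that resumes the scan on the next request. A generic sketch of the idea, not Blockfrost's actual wire format:

```go
package main

import "fmt"

// page returns one page of items plus the cursor for the next call;
// done signals that the scan reached the end of the data.
func page(items []string, cursor, limit int) (out []string, next int, done bool) {
	if cursor >= len(items) {
		return nil, cursor, true
	}
	end := cursor + limit
	if end > len(items) {
		end = len(items)
	}
	return items[cursor:end], end, end >= len(items)
}

func main() {
	blocks := []string{"b1", "b2", "b3", "b4", "b5"}
	cursor, done := 0, false
	for !done {
		var out []string
		out, cursor, done = page(blocks, cursor, 2)
		fmt.Println(out)
	}
}
```

Unlike offset pagination, the caller never recomputes positions; it just replays the cursor it was handed.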

### Mesh API (`mesh/`)

Implements the Mesh (formerly Rosetta) API specification for wallet integration and chain analysis. Provides endpoints for network status, account balances, block queries, transaction construction, and mempool access.

### UTxO RPC (`utxorpc/`)

A gRPC server implementing the UTxO RPC specification with query, submit, sync, and watch services. Supports optional TLS.

### Bark (`bark/`)

An HTTP server for block archive access. Also acts as a blob store adapter with a configurable security window, allowing Dingo to fetch historical blocks from a remote Bark instance instead of storing them locally.

## Design Patterns

### Dependency Injection

The `Node` creates and injects dependencies into components during initialization. Components receive their dependencies through constructors rather than creating them internally.
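
A minimal sketch of constructor injection in this style; the types and signatures are illustrative, not Dingo's actual ones.

```go
package main

import "fmt"

// The consumer declares what it needs as an interface...
type BlobStore interface {
	Put(key string, val []byte) error
}

// ...and an in-memory implementation satisfies it.
type memStore struct{ m map[string][]byte }

func newMemStore() *memStore { return &memStore{m: map[string][]byte{}} }

func (s *memStore) Put(key string, val []byte) error {
	s.m[key] = val
	return nil
}

// ChainManager never constructs its own store; it is handed one.
type ChainManager struct{ store BlobStore }

func NewChainManager(store BlobStore) *ChainManager {
	return &ChainManager{store: store}
}

func (c *ChainManager) AddBlock(hash string, cbor []byte) error {
	return c.store.Put(hash, cbor)
}

func main() {
	cm := NewChainManager(newMemStore()) // the Node would wire this up
	fmt.Println(cm.AddBlock("abc123", []byte{0x82}))
}
```

Because the dependency arrives through the constructor, tests can inject fakes without touching `ChainManager` itself.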

### Interface Segregation

Small, focused interfaces allow swapping implementations:

- `BlobStore` for blob storage
- `MetadataStore` for metadata storage
- Protocol handler interfaces for Ouroboros
- `forging.LeaderChecker`, `forging.BlockBroadcaster`, `forging.SlotClockProvider` for block production
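
The payoff of keeping these interfaces small is that one backend can satisfy several of them while each consumer depends only on the slice it uses. A sketch with simplified, hypothetical method sets:

```go
package main

import "fmt"

// Small, focused interfaces: consumers see only what they use.
type BlobStore interface {
	PutBlob(key string, val []byte)
}

type MetadataStore interface {
	SetTip(slot uint64)
}

// One concrete backend can satisfy both...
type backend struct {
	blobs map[string][]byte
	tip   uint64
}

func (b *backend) PutBlob(key string, val []byte) { b.blobs[key] = val }
func (b *backend) SetTip(slot uint64)             { b.tip = slot }

// ...while each consumer takes only the interface it needs.
func storeBlock(bs BlobStore, hash string, cbor []byte) { bs.PutBlob(hash, cbor) }
func advanceTip(ms MetadataStore, slot uint64)          { ms.SetTip(slot) }

func main() {
	b := &backend{blobs: map[string][]byte{}}
	storeBlock(b, "deadbeef", []byte{0x82})
	advanceTip(b, 42)
	fmt.Println(len(b.blobs), b.tip)
}
```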

### Plugin Architecture

Storage backends are loaded dynamically through a plugin registry, allowing extension without modifying core code.
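
A registry of named constructors is the usual shape of such a mechanism: backends register a factory under a name, and the node instantiates by name from configuration. A minimal sketch; the real registry's API differs.

```go
package main

import "fmt"

type BlobStore interface{ Name() string }

// registry maps plugin names to constructor functions.
type registry struct {
	factories map[string]func() BlobStore
}

func newRegistry() *registry {
	return &registry{factories: map[string]func() BlobStore{}}
}

func (r *registry) Register(name string, f func() BlobStore) {
	r.factories[name] = f
}

func (r *registry) Open(name string) (BlobStore, error) {
	f, ok := r.factories[name]
	if !ok {
		return nil, fmt.Errorf("unknown storage plugin %q", name)
	}
	return f(), nil
}

// A backend package would call Register from its init().
type badgerStore struct{}

func (badgerStore) Name() string { return "badger" }

func main() {
	r := newRegistry()
	r.Register("badger", func() BlobStore { return badgerStore{} })
	s, err := r.Open("badger")
	fmt.Println(s.Name(), err)
}
```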

### Adapter Pattern

The block production system uses adapters (`mempoolAdapter`, `stakeDistributionAdapter`, `epochInfoAdapter`, `slotClockAdapter`) to decouple forging interfaces from concrete implementations.
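
The essence of these adapters: the forging side defines the interface it needs, and a thin adapter converts the mempool's own shape to it, so neither side imports the other. A sketch with hypothetical method names:

```go
package main

import "fmt"

// The forging side defines the interface it needs...
type TxSource interface {
	PendingTxs() [][]byte
}

// ...while the mempool has its own, unrelated shape.
type Mempool struct{ txs map[string][]byte }

func (m *Mempool) Transactions() map[string][]byte { return m.txs }

// mempoolAdapter bridges the two.
type mempoolAdapter struct{ m *Mempool }

func (a mempoolAdapter) PendingTxs() [][]byte {
	txs := a.m.Transactions()
	out := make([][]byte, 0, len(txs))
	for _, tx := range txs {
		out = append(out, tx)
	}
	return out
}

// buildBlock depends only on the forging-side interface.
func buildBlock(src TxSource) int { return len(src.PendingTxs()) }

func main() {
	mp := &Mempool{txs: map[string][]byte{"t1": {1}, "t2": {2}}}
	fmt.Println(buildBlock(mempoolAdapter{m: mp}))
}
```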

### Observer Pattern

The `EventBus` implements publisher/subscriber communication, decoupling components that produce events from those that consume them.
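
A tiny synchronous publish/subscribe sketch of the pattern; Dingo's actual `EventBus` delivers asynchronously through worker pools, but the topology is the same.

```go
package main

import "fmt"

type Event struct {
	Type string
	Data any
}

// EventBus fans each published event out to every subscriber
// registered for its type.
type EventBus struct {
	subs map[string][]func(Event)
}

func NewEventBus() *EventBus {
	return &EventBus{subs: map[string][]func(Event){}}
}

func (b *EventBus) Subscribe(eventType string, fn func(Event)) {
	b.subs[eventType] = append(b.subs[eventType], fn)
}

func (b *EventBus) Publish(ev Event) {
	for _, fn := range b.subs[ev.Type] {
		fn(ev)
	}
}

func main() {
	bus := NewEventBus()
	bus.Subscribe("epoch.transition", func(ev Event) {
		fmt.Println("snapshot manager sees epoch", ev.Data)
	})
	bus.Publish(Event{Type: "epoch.transition", Data: 512})
}
```

The publisher never learns who is listening, which is exactly what lets the ledger emit epoch transitions without knowing about the snapshot manager.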

### Iterator Pattern

`ChainIterator` provides sequential access to blocks without exposing internal chain structure.
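
A sketch of the pattern over a plain slice; the real iterator walks database-backed chain segments, but callers see only a `Next`-style surface either way.

```go
package main

import "fmt"

type Block struct {
	Slot uint64
	Hash string
}

// ChainIterator hides the underlying storage; callers only
// advance and consume.
type ChainIterator struct {
	blocks []Block
	pos    int
}

// Next returns the next block, or false once the tip is reached.
func (it *ChainIterator) Next() (Block, bool) {
	if it.pos >= len(it.blocks) {
		return Block{}, false
	}
	b := it.blocks[it.pos]
	it.pos++
	return b, true
}

func main() {
	it := &ChainIterator{blocks: []Block{{1, "a"}, {2, "b"}}}
	for b, ok := it.Next(); ok; b, ok = it.Next() {
		fmt.Println(b.Slot, b.Hash)
	}
}
```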

### Manager Pattern

`ChainManager`, `PeerGovernor`, and `snapshot.Manager` orchestrate related operations and maintain consistent state across multiple entities.

### Worker Pool Pattern

Database operations and event delivery use worker pools for controlled concurrency and backpressure.

## Threading and Concurrency

| Pattern | Usage |
|---------|-------|
| Goroutine Management | Tracked WaitGroups for clean shutdown |
| Mutex Protection | RWMutex for read-heavy operations |
| Atomic Operations | Atomic types for metrics counters |
| Channel Communication | EventBus async delivery |
| Context Cancellation | Graceful shutdown signals |
| Worker Pools | Database operations and event delivery |
| sync.Once | Ensure single shutdown execution |

## Configuration

Configuration priority (highest to lowest):

1. CLI flags
2. Environment variables
3. YAML config file (`dingo.yaml`)
4. Hardcoded defaults
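
The precedence above amounts to taking the first value that was explicitly set. A sketch of the idea (Dingo's actual config loader is more involved):

```go
package main

import "fmt"

// resolve applies the documented precedence: CLI flag, then
// environment variable, then YAML value, then the default.
// Empty string means "not set" in this sketch.
func resolve(cliFlag, envVar, yamlVal, defaultVal string) string {
	for _, v := range []string{cliFlag, envVar, yamlVal} {
		if v != "" {
			return v
		}
	}
	return defaultVal
}

func main() {
	// No flag or env var set, so the YAML value wins over the default.
	fmt.Println(resolve("", "", "preprod", "mainnet"))
}
```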

Key configuration areas:

- Network selection (preview, preprod, mainnet)
- Storage mode (`core` or `api`)
- Database path and plugins
- Listen addresses and ports
- Mempool capacity and watermarks
- Peer targets and quotas
- CBOR cache sizing (hot entries, block LRU)
- Chainsync client limits and stall timeout
- Block producer credentials (VRF key, KES key, operational certificate)
- API server ports (Blockfrost, Mesh, UTxO RPC, Bark)

## Stake Snapshots

Stake snapshots capture the stake distribution at epoch boundaries for use in Ouroboros Praos leader election. The block producer must know the stake distribution from 2 epochs ago to determine if it is the slot leader.

### Ouroboros Praos Snapshot Model

```
Epoch N-2              Epoch N-1              Epoch N (current)
    |                      |                      |
    v                      v                      v
[Go Snapshot]   <-   [Set Snapshot]   <-   [Mark Snapshot]
    |                                          |
Used for leader election                  Captured at
in current epoch                          epoch boundary
```

- Mark Snapshot: Captured at the end of epoch N, becomes Set at epoch N+1
- Set Snapshot: Previous Mark, becomes Go at epoch N+1
- Go Snapshot: Active snapshot used for leader election (2 epochs old)
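
The rotation at each epoch boundary can be sketched as follows; this is a simplified model, not the snapshot manager's actual types.

```go
package main

import "fmt"

// Snapshots rotate at each epoch boundary: the old Set becomes Go,
// the old Mark becomes Set, and a fresh Mark is captured.
type Snapshots struct {
	Mark, Set, Go string // stand-ins for full stake distributions
}

func (s *Snapshots) Rotate(newMark string) {
	s.Go = s.Set
	s.Set = s.Mark
	s.Mark = newMark
}

func main() {
	s := &Snapshots{Mark: "epoch N", Set: "epoch N-1", Go: "epoch N-2"}
	s.Rotate("epoch N+1")
	fmt.Printf("%+v\n", *s)
}
```

After the rotation, leader election for the new epoch reads the Go slot, which now holds the distribution captured two boundaries ago.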

### Stake Snapshot Components

```
Block Processing
       |
       v
LedgerState --> Epoch Transition --> EventBus (EpochTransitionEvent)
                   Detection                    |
                                                v
                                         SnapshotManager
                                           (Subscribe)
                                                |
           -------------------------------------+------
           |                  |                        |
           v                  v                        v
   Calculate Stake     Rotate Snapshots             Cleanup
    Distribution      Mark -> Set -> Go
           |                  |
           v                  v
                   Database
   PoolStakeSnapshot     EpochSummary
```

### Database Models

| Model | Purpose |
|-------|---------|
| `PoolStakeSnapshot` | Per-pool stake at epoch boundary (epoch, type, pool hash, stake, delegator count) |
| `EpochSummary` | Network-wide aggregates (total stake, pool count, delegator count, epoch nonce) |

Snapshot types: `"mark"`, `"set"`, `"go"`

### Query Interface

The `LedgerView` provides stake distribution queries:

```go
// Get full stake distribution for leader election
dist, err := ledgerView.GetStakeDistribution(epoch)

// Get stake for a specific pool
poolStake, err := ledgerView.GetPoolStake(epoch, poolKeyHash)

// Get total active stake
totalStake, err := ledgerView.GetTotalActiveStake(epoch)
```

### Event-Driven Capture

`EpochTransitionEvent` triggers snapshot capture:

```go
type EpochTransitionEvent struct {
	PreviousEpoch   uint64
	NewEpoch        uint64
	BoundarySlot    uint64
	EpochNonce      []byte
	ProtocolVersion uint
	SnapshotSlot    uint64 // Typically boundary - 1
}
```

### Rollback Support

On chain rollback past an epoch boundary:

- Delete snapshots for epochs after the rollback point
- Recalculate affected snapshots on forward replay
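
The cleanup step can be sketched as filtering out snapshot rows past the rollback point; the types here are hypothetical, and the real implementation deletes rows in the database.

```go
package main

import "fmt"

type snapshotRow struct {
	Epoch uint64
	Type  string // "mark", "set", or "go"
}

// pruneAfterRollback drops snapshots captured for epochs after the
// rollback point; they are recalculated during forward replay.
func pruneAfterRollback(rows []snapshotRow, rollbackEpoch uint64) []snapshotRow {
	kept := rows[:0]
	for _, r := range rows {
		if r.Epoch <= rollbackEpoch {
			kept = append(kept, r)
		}
	}
	return kept
}

func main() {
	rows := []snapshotRow{{500, "go"}, {501, "set"}, {502, "mark"}}
	fmt.Println(len(pruneAfterRollback(rows, 501)))
}
```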
|