@blinklabs/dingo 0.20.0 → 0.23.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,91 @@
+ # Release Notes
+
+ ## v0.22.1 (March 8, 2026)
+
+ **Title:** Stability updates and polish
+
+ **Date:** March 8, 2026
+
+ **Version:** v0.22.1
+
+ Hi folks! Here’s what we shipped in v0.22.1.
+
+ ### ✨ What's New
+
+ - **Release notes:** We added v0.22.0 release notes to `RELEASE_NOTES.md` so you can scan changes in one place.
+
+ ### 💪 Improvements
+
+ - **Transaction validation:** Transaction validation is more consistent because Conway UTxO validation now runs even when a transaction is marked invalid, while script evaluation is still skipped.
+ - **Epoch processing:** Epoch processing recovers more gracefully because nonce recomputation falls back to recomputing from epoch start when an anchor block nonce is missing.
+ - **Queue handling:** Queue handling is more robust under load because the main event queue size increased from 1,000 to 10,000 and the header queue size is now clamped to at least the default.
+ - **Implausible-tip checks:** Implausible-tip checks are safer across edge cases because the logic now uses peer-based reference blocks with overflow-safe arithmetic.
+ - **Publish workflow login:** Publishing is more reliable because the publish workflow now logs in using `docker/login-action@v4`.
+ - **Publish workflow runtime:** Automation stays current because the publish workflow now runs on Node.js `24.x`.
+
+ ### 🔧 Fixes
+
+ - **Epoch cache rollbacks:** Epoch cache handling is safer during concurrent rollbacks because `advanceEpochCache` now guards against empty caches and validates the tail before appending a new epoch.
+ - **Test timing:** Tests are less flaky on slower machines because `TestSchedulerRunFailFunc` timing parameters were relaxed.
+
+ ### 📋 What You Need to Know
+
+ - **API bind address:** Config validation no longer defaults the API bind address to `0.0.0.0`, so set it explicitly if you need it.
+ - **CI and publishing scripts:** If you maintain custom publishing or CI scripts, give them a quick check for compatibility with Node.js `24.x` and `docker/login-action@v4`.
+
+ ### 🙏 Thank You
+
+ Thank you for trying Dingo!
+
+ ---
+
+ ## v0.22.0 (March 7, 2026)
+
+ **Title:** Mithril bootstrap, built-in APIs, and block production
+
+ **Date:** March 7, 2026
+
+ **Version:** v0.22.0
+
+ Hi folks! Here’s what we shipped in v0.22.0.
+
+ ### ✨ What's New
+
+ - **Mithril bootstrap:** Node operators can now bootstrap a Dingo node from a Mithril snapshot and have the ledger state imported automatically (see “Fast Bootstrapping with Mithril” in `README.md`).
+ - **Built-in HTTP APIs:** You can run Dingo with built-in, configurable HTTP APIs that are compatible with common ecosystem tooling.
+ - **Block production:** Block production is now supported, with Praos leader election and keystore-backed key management.
+ - **Lifecycle events:** The node now emits richer on-chain lifecycle events that applications can subscribe to.
+ - **Conway governance metadata:** Governance and Conway-era features are now available in the on-chain metadata pipeline.
+ - **Leios mode:** A new “leios” mode is available for early experimentation with Leios protocols.
+ - **Stake snapshots:** Stake snapshots and stake distribution are now available, with persistence and querying.
+
+ ### 💪 Improvements
+
+ - **Faster catch-up validation:** Syncing is now faster and safer because unnecessary validation is reduced during initial catch-up.
+ - **Tiered storage and caching:** Block and transaction storage can now be tuned for performance and cost with tiered storage and caching options.
+ - **Rollback and resync:** Rollback and resync behavior is more robust under forks and stalled peers.
+ - **Peer management:** Network and peer management now scales more predictably under load.
+ - **Mempool resilience:** Transaction processing is more resilient and configurable under pressure.
+ - **Observability:** Observability has been expanded across sync, forging, and storage.
+ - **Time and nonce handling:** Epoch, slot, and nonce handling better matches Cardano semantics and edge cases.
+ - **Docs:** Developer and operator documentation has been significantly expanded.
+
+ ### 🔧 Fixes
+
+ - **Chainsync stability:** Chainsync is more reliable around header/block fetching and coordination.
+ - **Concurrency safety:** Several concurrency and event-delivery deadlocks and goroutine-leak risks have been eliminated.
+ - **Security hardening:** Security hardening has been added for configuration, filesystem usage, and key material.
+ - **Protocol validation:** Cryptographic and protocol-validation correctness has been tightened across eras.
+ - **Object-store key encoding:** Storage key encoding issues in object stores have been resolved.
+
+ ### 📋 What You Need to Know
+
+ - **Event consumers:** If you rely on event type strings, update consumers to match the renamed event type strings.
+ - **API storage backfill:** If you enable API storage or new tiered storage modes, run a metadata backfill so queries return complete results.
+ - **Forging setup:** If you enable forging, double-check VRF/KES/OpCert key paths and permissions first.
+
+ ### 🙏 Thank You
+
+ Thank you for trying Dingo!
+
+ ---
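The CI note above (Node.js `24.x`, `docker/login-action@v4`) can be sketched as a workflow fragment. The action versions and registry come from the notes and configs in this package; the workflow name, trigger, and job layout are hypothetical:

```yaml
# Hypothetical publish workflow fragment: docker/login-action@v4 for
# registry login and a Node.js 24.x runtime, per the release notes.
name: publish
on:
  release:
    types: [published]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 24.x
      - uses: docker/login-action@v4
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
```

If your scripts pin an older login-action major version or a Node.js version below 24, these are the two lines to check.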
@@ -1,6 +1,6 @@
  #!/usr/bin/env bash
 
- # Copyright 2025 Blink Labs Software
+ # Copyright 2026 Blink Labs Software
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
@@ -21,11 +21,11 @@ export CARDANO_DEV_MODE=true
 
  DEBUG=${DEBUG:-false}
 
- conf=$(dirname $CARDANO_CONFIG)
+ conf="$(dirname "$CARDANO_CONFIG")"
  now=$(date -u +%s)
- echo setting start time in $conf to $now
- sed -i -e "s/startTime\": .*,/startTime\": $now,/" $conf/byron-genesis.json
- sed -i -e "s/systemStart\": .*,/systemStart\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ --date=@$now)\",/" $conf/shelley-genesis.json
+ echo "setting start time in $conf to $now"
+ sed -i -e "s/startTime\": .*,/startTime\": $now,/" "$conf/byron-genesis.json"
+ sed -i -e "s/systemStart\": .*,/systemStart\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ --date=@$now)\",/" "$conf/shelley-genesis.json"
 
  echo resetting .devnet
  rm -rf .devnet/*
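The quoting added in this hunk guards against paths containing spaces or glob characters. A minimal sketch of why it matters (the config path here is hypothetical):

```shell
# An unquoted $(dirname $CARDANO_CONFIG) would word-split this path into
# two arguments; the quoted form used in the script above preserves it
# as a single path.
CARDANO_CONFIG="/tmp/my configs/cardano/config.json"
conf="$(dirname "$CARDANO_CONFIG")"
echo "$conf"   # prints: /tmp/my configs/cardano
```

The same reasoning applies to the quoted `"$conf/byron-genesis.json"` arguments passed to `sed`.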
@@ -33,6 +33,8 @@ database:
  # Path prefix within the bucket
  prefix: ""
  s3:
+ # AWS Endpoint
+ endpoint: ""
  # AWS S3 bucket name
  bucket: ""
  # AWS region
@@ -46,12 +48,50 @@ database:
 
  # Metadata storage plugin configuration
  metadata:
- # Plugin to use for metadata storage (sqlite)
+ # Plugin to use for metadata storage (sqlite, postgres, mysql)
  plugin: "sqlite"
  # Configuration options for each plugin
  sqlite:
- # Data directory for SQLite database file
+ # Path to SQLite database file
  data-dir: ".dingo/metadata.db"
+ postgres:
+ # NOTE: These are example values for local development only.
+ # Do not use these credentials in production environments.
+ # Postgres host
+ host: "localhost"
+ # Postgres port
+ port: 5432
+ # Postgres user
+ user: "postgres"
+ # Postgres password (required - no default)
+ password: ""
+ # Postgres database name
+ database: "postgres"
+ # Postgres sslmode
+ ssl-mode: "disable"
+ # Postgres TimeZone
+ timezone: "UTC"
+ # Full Postgres DSN (overrides other options when set)
+ dsn: ""
+ mysql:
+ # NOTE: These are example values for local development only.
+ # Do not use these credentials in production environments.
+ # MySQL host
+ host: "localhost"
+ # MySQL port
+ port: 3306
+ # MySQL user
+ user: "root"
+ # MySQL password (required - no default)
+ password: ""
+ # MySQL database name
+ database: "mysql"
+ # MySQL TLS mode (mapped to tls= in DSN)
+ ssl-mode: ""
+ # MySQL time zone location
+ timezone: "UTC"
+ # Full MySQL DSN (overrides other options when set)
+ dsn: ""
 
  # Path to the UNIX domain socket file used by the server
  socketPath: "dingo.socket"
@@ -86,8 +126,64 @@ privatePort: 3002
  # Can be overridden with the port environment variable
  relayPort: 3001
 
- # TCP port to bind for listening for UTxO RPC
- utxorpcPort: 9090
+ # Storage mode controls how much data is persisted.
+ # - "core": Only consensus data (UTxOs, certs, pools, pparams). Suitable for
+ #   block producers and relay nodes with no APIs enabled.
+ # - "api": Core data plus full transaction metadata (witnesses, scripts, datums,
+ #   redeemers). Required when any API is enabled.
+ #   APIs will refuse to start if storage mode is not "api".
+ #
+ # Can be overridden with DINGO_STORAGE_MODE
+ storageMode: "core"
+
+ # API ports (0 = disabled, default)
+ # All APIs require storageMode: "api"
+ #
+ # UTxO RPC gRPC API port
+ # Can be overridden with DINGO_UTXORPC_PORT
+ utxorpcPort: 0
+ # Blockfrost-compatible REST API port
+ # Can be overridden with DINGO_BLOCKFROST_PORT
+ blockfrostPort: 0
+ # Mesh (Coinbase Rosetta) REST API port
+ # Can be overridden with DINGO_MESH_PORT
+ meshPort: 0
+
+ # Base URL of the bark archive server
+ barkBaseUrl: ""
+
+ # Number of slots from tip within which blocks
+ # will not be found in the bark archive
+ barkSecurityWindow: 10000
+
+ # TCP port to bind for listening for bark RPC calls
+ barkPort: 0
+
+ # ---
+ # Deployment pattern examples (uncomment one block):
+ #
+ # Relay node (core storage, no APIs):
+ #   storageMode: "core"
+ #   blockProducer: false
+ #
+ # Data node (API storage, one or more API ports enabled):
+ #   storageMode: "api"
+ #   utxorpcPort: 9090
+ #   blockfrostPort: 3100
+ #
+ # Validator / block producer (core storage, no APIs):
+ #   storageMode: "core"
+ #   blockProducer: true
+ #   shelleyVrfKey: "/keys/vrf.skey"
+ #   shelleyKesKey: "/keys/kes.skey"
+ #   shelleyOperationalCertificate: "/keys/opcert.cert"
+ #   # Optional forging tolerances (0 = defaults)
+ #   forgeSyncToleranceSlots: 0
+ #   forgeStaleGapThresholdSlots: 0
+ #
+ # Dev mode (isolated, forge blocks, no outbound):
+ #   runMode: "dev"
+ # ---
 
  # Ignore prior chain history and start from current tip (default: false)
  # This is experimental and may break — use with caution
@@ -98,9 +194,91 @@ intersectTip: false
  # Default: 1048576 (1 MB)
  mempoolCapacity: 1048576
 
- # Enable development mode which prevents outbound connections
- # Default: false
- devMode: false
+ # Forging tolerances (0 = defaults)
+ # forgeSyncToleranceSlots controls how far the local chain can lag the upstream
+ # tip before forging is skipped. forgeStaleGapThresholdSlots controls when to
+ # log an error if the chain tip is far ahead of the slot clock.
+ forgeSyncToleranceSlots: 0
+ forgeStaleGapThresholdSlots: 0
+
+ # Operational mode: "serve" (default), "load", or "dev"
+ # - serve: Full node with network connectivity (default)
+ # - load: Batch import from ImmutableDB (requires immutableDbPath)
+ # - dev: Development mode (forge blocks, disable outbound, skip topology)
+ # Note: CLI commands (serve, load) take priority over this setting
+ #
+ # Can be overridden with the DINGO_RUN_MODE environment variable
+ runMode: "serve"
+
+ # Path to ImmutableDB for batch import (used when runMode is "load")
+ # Can also be provided as argument to 'dingo load' command
+ #
+ # Can be overridden with the DINGO_IMMUTABLE_DB_PATH environment variable
+ immutableDbPath: ""
 
  # Validate historical blocks during ledger processing (default: false)
  validateHistorical: false
+
+ # Peer targets - target number of peers in each state
+ # These are goals the system works toward, not hard limits.
+ # Use 0 for default values, -1 for unlimited
+ # Default: known=150, established=50, active=20
+ #
+ # Target number of known (cold) peers
+ targetNumberOfKnownPeers: 0
+ # Target number of established (warm) peers
+ targetNumberOfEstablishedPeers: 0
+ # Target number of active (hot) peers
+ targetNumberOfActivePeers: 0
+
+ # Per-source quotas for active peers
+ # These specify how active peer slots are distributed by source.
+ # Use 0 for default values.
+ # Default: topology=3, gossip=12, ledger=5
+ #
+ # Active peer slots for topology sources (local + public roots)
+ activePeersTopologyQuota: 0
+ # Active peer slots for gossip sources
+ activePeersGossipQuota: 0
+ # Active peer slots for ledger sources
+ activePeersLedgerQuota: 0
+
+ # Cache configuration for tiered CBOR cache system
+ # This controls the in-memory caching of UTxO and transaction CBOR data
+ cache:
+ # Number of entries in the hot UTxO cache (LFU eviction)
+ # Default: 50000
+ hotUtxoEntries: 50000
+ # Number of entries in the hot transaction cache (LFU eviction)
+ # Default: 10000
+ hotTxEntries: 10000
+ # Maximum memory in bytes for hot transaction cache
+ # Default: 268435456 (256 MB)
+ hotTxMaxBytes: 268435456
+ # Number of blocks to keep in the block LRU cache
+ # Default: 500
+ blockLruEntries: 500
+ # Number of recent blocks to warm up on startup
+ # Default: 1000
+ warmupBlocks: 1000
+ # Wait for cache warmup to complete before serving requests
+ # Default: true
+ warmupSync: true
+
+ # Mithril snapshot bootstrap configuration
+ # Use 'dingo mithril sync' or 'dingo sync --mithril' to bootstrap from a snapshot
+ mithril:
+ # Enable Mithril integration (default: true)
+ enabled: true
+ # Override aggregator URL (auto-detected from network if empty)
+ # Supported networks: mainnet, preprod, preview
+ aggregatorUrl: ""
+ # Directory for downloading snapshot archives
+ # If empty, a randomized temporary directory is created automatically
+ # downloadDir: ""
+ # Remove temporary files after loading completes
+ # Default: true
+ cleanupAfterLoad: true
+ # Verify the Mithril certificate chain from snapshot back to genesis
+ # Default: true
+ verifyCertificates: true
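Per the comments in the config above, most of these settings can be overridden with environment variables. A small sketch using the documented override names (the values shown are illustrative, matching the "data node" pattern):

```shell
# Environment overrides documented in the config comments above.
# The echo merely displays the resulting settings; in a real deployment
# these would be exported before starting the node.
export DINGO_STORAGE_MODE=api      # APIs refuse to start unless storage mode is "api"
export DINGO_UTXORPC_PORT=9090
export DINGO_RUN_MODE=serve
echo "storageMode=$DINGO_STORAGE_MODE utxorpcPort=$DINGO_UTXORPC_PORT runMode=$DINGO_RUN_MODE"
```

Environment variables sit between command-line flags and the YAML file in precedence, so they are convenient for per-host tweaks without editing `dingo.yaml`.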
@@ -0,0 +1,51 @@
+ # Dingo Cardano Node - Docker Compose Configuration
+ #
+ # Usage:
+ #   docker compose up -d                          # Start with defaults (preview network)
+ #   CARDANO_NETWORK=mainnet docker compose up -d  # Start on mainnet
+ #
+ # With Mithril snapshot bootstrap (first run only):
+ #   RESTORE_SNAPSHOT=1 docker compose up -d
+
+ services:
+   # --------------------------------------------------------------------------
+   # Basic relay node
+   # --------------------------------------------------------------------------
+   dingo:
+     build:
+       context: .
+       dockerfile: Dockerfile
+     image: ghcr.io/blinklabs-io/dingo:latest
+     restart: unless-stopped
+     environment:
+       - CARDANO_NETWORK=${CARDANO_NETWORK:-preview}
+       - CARDANO_DATABASE_PATH=/data/db
+       - DINGO_SOCKET_PATH=/ipc/dingo.socket
+       # Uncomment to enable debug logging
+       # - DINGO_DEBUG=1
+       # Uncomment to bootstrap from Mithril snapshot on first run
+       # - RESTORE_SNAPSHOT=1
+     ports:
+       # Ouroboros Node-to-Node (relay port)
+       - "3001:3001"
+       # Prometheus metrics (localhost only by default)
+       - "127.0.0.1:12798:12798"
+       # UTxO RPC (gRPC, localhost only by default)
+       - "127.0.0.1:9090:9090"
+     volumes:
+       # Persistent database storage
+       - dingo-data:/data/db
+       # Unix socket for cardano-cli and other NtC clients
+       - dingo-ipc:/ipc
+     healthcheck:
+       test: ["CMD", "wget", "-qO-", "http://127.0.0.1:12798/metrics"]
+       interval: 30s
+       timeout: 5s
+       start_period: 60s
+       retries: 3
+
+ volumes:
+   dingo-data:
+     driver: local
+   dingo-ipc:
+     driver: local
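To layer one of the API deployment patterns from `dingo.yaml` onto this service, a compose override file is one option. A hedged sketch: the environment variable names and port value come from the shipped config above, while the override file itself is hypothetical:

```yaml
# docker-compose.override.yml (hypothetical): enable the Blockfrost-compatible
# REST API on the dingo service. Per the config comments, APIs require
# storage mode "api".
services:
  dingo:
    environment:
      - DINGO_STORAGE_MODE=api
      - DINGO_BLOCKFROST_PORT=3100
    ports:
      - "127.0.0.1:3100:3100"
```

Docker Compose merges an override file with the base `docker-compose.yml` automatically, so the base relay configuration stays untouched.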
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@blinklabs/dingo",
- "version": "0.20.0",
+ "version": "0.23.0",
  "description": "Dingo is a Cardano blockchain data node",
  "main": "index.js",
  "bin": {
@@ -1,306 +0,0 @@
- # Plugin Development Guide
-
- This guide explains how to develop plugins for Dingo's storage system.
-
- ## Overview
-
- Dingo supports pluggable storage backends through a registration-based plugin system. Plugins can extend the system with new blob storage (blocks, transactions) and metadata storage (indexes, state) implementations.
-
- ## Plugin Types
-
- ### Blob Storage Plugins
- Store blockchain data (blocks, transactions, etc.). Examples:
- - `badger` - Local BadgerDB key-value store
- - `gcs` - Google Cloud Storage
- - `s3` - AWS S3
-
- #### Iterator lifetime note
- Blob plugins expose iterators via `NewIterator(txn, opts)`. Items returned by the
- iterator's `Item()` must only be accessed while the transaction used to create
- the iterator is still active — implementations may validate transaction state at
- access time and will return errors if the transaction has been committed or
- rolled back. See `database/types/types.go` `BlobIterator` for details.
-
- ### Metadata Storage Plugins
- Store metadata and indexes. Examples:
- - `sqlite` - SQLite relational database
-
- ## Plugin Interface
-
- All plugins must implement the `plugin.Plugin` interface:
-
- ```go
- type Plugin interface {
-     Start() error
-     Stop() error
- }
- ```
-
- ## Plugin Registration
-
- Plugins register themselves during package initialization using the `plugin.Register()` function:
-
- ```go
- func init() {
-     plugin.Register(plugin.PluginEntry{
-         Type:               plugin.PluginTypeBlob, // or PluginTypeMetadata
-         Name:               "myplugin",
-         Description:        "My custom storage plugin",
-         NewFromOptionsFunc: NewFromCmdlineOptions,
-         Options: []plugin.PluginOption{
-             // Plugin-specific options
-         },
-     })
- }
- ```
-
- ## Plugin Options
-
- Plugins define configuration options using the `PluginOption` struct:
-
- ```go
- plugin.PluginOption{
-     Name:         "data-dir",                    // Option name
-     Type:         plugin.PluginOptionTypeString, // Data type
-     Description:  "Data directory path",         // Help text
-     DefaultValue: "/tmp/data",                   // Default value
-     Dest:         &cmdlineOptions.dataDir,       // Destination variable
- }
- ```
-
- Supported option types:
- - `PluginOptionTypeString`
- - `PluginOptionTypeBool`
- - `PluginOptionTypeInt`
- - `PluginOptionTypeUint`
-
- ## Environment Variables
-
- Plugins automatically support environment variables with the pattern:
- `DINGO_DATABASE_{TYPE}_{PLUGIN}_{OPTION}`
-
- Examples:
- - `DINGO_DATABASE_BLOB_BADGER_DATA_DIR=/data`
- - `DINGO_DATABASE_METADATA_SQLITE_DATA_DIR=/metadata.db`
-
- ## YAML Configuration
-
- Plugins can be configured in `dingo.yaml`:
-
- ```yaml
- database:
-   blob:
-     plugin: "myplugin"
-     myplugin:
-       option1: "value1"
-       option2: 42
-   metadata:
-     plugin: "sqlite"
-     sqlite:
-       data-dir: "/data/metadata.db"
- ```
-
- ## Configuration Precedence
-
- 1. Command-line flags (highest priority)
- 2. Environment variables
- 3. YAML configuration
- 4. Default values (lowest priority)
-
- ## Command Line Options
-
- Plugins support command-line flags with the pattern:
- `--{type}-{plugin}-{option}`
-
- Examples:
- - `--blob-badger-data-dir /data`
- - `--metadata-sqlite-data-dir /metadata.db`
-
- ## Plugin Development Steps
-
- ### 1. Create Plugin Structure
-
- ```text
- database/plugin/{type}/{name}/
- ├── plugin.go        # Registration and options
- ├── options.go       # Option functions
- ├── database.go      # Core implementation
- └── options_test.go  # Unit tests
- ```
-
- ### 2. Implement Core Plugin
-
- Create the main plugin struct that implements `plugin.Plugin`:
-
- ```go
- type MyPlugin struct {
-     // Fields
- }
-
- func (p *MyPlugin) Start() error {
-     // Initialize resources
-     return nil
- }
-
- func (p *MyPlugin) Stop() error {
-     // Clean up resources
-     return nil
- }
- ```
-
- ### 3. Define Options
-
- Create option functions following the pattern:
-
- ```go
- func WithOptionName(value Type) OptionFunc {
-     return func(p *MyPlugin) {
-         p.field = value
-     }
- }
- ```
-
- ### 4. Implement Constructors
-
- Provide both options-based and legacy constructors:
-
- ```go
- func NewWithOptions(opts ...OptionFunc) (*MyPlugin, error) {
-     p := &MyPlugin{}
-     for _, opt := range opts {
-         opt(p)
-     }
-     return p, nil
- }
-
- func New(legacyParam1 string, legacyParam2 int) (*MyPlugin, error) {
-     // For backward compatibility
-     return NewWithOptions(
-         WithOption1(legacyParam1),
-         WithOption2(legacyParam2),
-     )
- }
- ```
-
- ### 5. Register Plugin
-
- In `plugin.go`, register during initialization:
-
- ```go
- var cmdlineOptions struct {
-     option1 string
-     option2 int
- }
-
- func init() {
-     plugin.Register(plugin.PluginEntry{
-         Type:               plugin.PluginTypeBlob,
-         Name:               "myplugin",
-         Description:        "My custom plugin",
-         NewFromOptionsFunc: NewFromCmdlineOptions,
-         Options: []plugin.PluginOption{
-             {
-                 Name:         "option1",
-                 Type:         plugin.PluginOptionTypeString,
-                 Description:  "First option",
-                 DefaultValue: "default",
-                 Dest:         &cmdlineOptions.option1,
-             },
-             // More options...
-         },
-     })
- }
-
- func NewFromCmdlineOptions() plugin.Plugin {
-     p, err := NewWithOptions(
-         WithOption1(cmdlineOptions.option1),
-         WithOption2(cmdlineOptions.option2),
-     )
-     if err != nil {
-         panic(err)
-     }
-     return p
- }
- ```
-
- ### 6. Add Tests
-
- Create comprehensive tests:
-
- ```go
- func TestOptions(t *testing.T) {
-     // Test option functions
- }
-
- func TestLifecycle(t *testing.T) {
-     p, err := NewWithOptions(WithOption1("test"))
-     // Test Start/Stop
- }
- ```
-
- ### 7. Update Imports
-
- Add your plugin to the import list in the appropriate store file:
- - `database/plugin/blob/blob.go` for blob plugins
- - `database/plugin/metadata/metadata.go` for metadata plugins
-
- ## Example: Complete Plugin
-
- See the existing plugins for complete examples:
- - `database/plugin/blob/badger/` - BadgerDB implementation
- - `database/plugin/metadata/sqlite/` - SQLite implementation
- - `database/plugin/blob/gcs/` - Google Cloud Storage implementation
- - `database/plugin/blob/aws/` - AWS S3 implementation
-
- ## Best Practices
-
- 1. **Error Handling**: Always return descriptive errors
- 2. **Resource Management**: Properly implement Start/Stop for resource lifecycle
- 3. **Thread Safety**: Ensure plugins are safe for concurrent use
- 4. **Configuration Validation**: Validate configuration during construction
- 5. **Backward Compatibility**: Maintain compatibility with existing deployments
- 6. **Documentation**: Document all options and their effects
- 7. **Testing**: Provide comprehensive unit and integration tests
-
- ## Testing Your Plugin
-
- ### Unit Tests
- Test individual components and option functions.
-
- ### Integration Tests
- Test the complete plugin lifecycle and interaction with the plugin system.
-
- ### CLI Testing
- Use the CLI to test plugin listing and selection:
-
- ```bash
- ./dingo --blob list
- ./dingo --metadata list
- ```
-
- ### Configuration Testing
- Test environment variables and YAML configuration:
-
- ```bash
- DINGO_DATABASE_BLOB_MYPLUGIN_OPTION1=value ./dingo --blob myplugin
- ```
-
- ## Programmatic Option Overrides (for tests)
-
- When writing tests or programmatically constructing database instances, you can override plugin options
- without importing plugin implementation packages directly by using the plugin registry helper:
-
- ```go
- // Set data-dir for the blob plugin to a per-test temp directory
- plugin.SetPluginOption(plugin.PluginTypeBlob, "badger", "data-dir", t.TempDir())
-
- // Set data-dir for the metadata plugin
- plugin.SetPluginOption(plugin.PluginTypeMetadata, "sqlite", "data-dir", t.TempDir())
- ```
-
- The helper sets the plugin option's destination variable in the registry before plugin instantiation.
- If the requested option is not defined by the targeted plugin, the call is non-fatal and returns nil,
- allowing tests to run regardless of which plugin implementation is selected.
-
- Using `t.TempDir()` guarantees each test uses its own on-disk path and prevents concurrent tests from
- colliding on shared directories (for example the default `.dingo` Badger directory).