bulletin-deploy 0.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,170 @@
+ # bulletin-deploy
+
+ Deploy static sites and apps to the Polkadot Triangle network with decentralized storage and human-readable `.dot` domains.
+
+ ## Quick Start
+
+ ```bash
+ npm install -g github:paritytech/bulletin-deploy
+ # Build your app (e.g. npm run build)
+ bulletin-deploy ./dist my-app00.dot
+ ```
+
+ Your site is live at `https://my-app00.dot.li`.
+
+ ## Prerequisites
+
+ - **Node.js 22+**
+ - **IPFS Kubo** (for merkleizing directories)
+
+ ```bash
+ # macOS
+ brew install ipfs
+ ipfs init
+
+ # Linux
+ wget https://dist.ipfs.tech/kubo/v0.33.0/kubo_v0.33.0_linux-amd64.tar.gz
+ tar -xvzf kubo_v0.33.0_linux-amd64.tar.gz
+ sudo bash kubo/install.sh
+ ipfs init
+ ```
+
+ ## CLI Usage
+
+ ### Deploy an app
+
+ ```bash
+ bulletin-deploy <build-dir> <domain.dot>
+ ```
+
+ Examples:
+
+ ```bash
+ # Basic deploy
+ bulletin-deploy ./dist my-app00.dot
+
+ # With DotNS owner mnemonic
+ MNEMONIC="word1 word2 ..." bulletin-deploy ./dist my-app00.dot
+
+ # Custom RPC endpoint
+ bulletin-deploy --rpc wss://custom-bulletin.example.com ./dist my-app00.dot
+ ```
+
+ ### All options
+
+ ```
+ Options:
+   --mnemonic "..."   DotNS owner mnemonic (or set MNEMONIC env var)
+   --rpc wss://...    Bulletin RPC (or set BULLETIN_RPC env var)
+   --help             Show help
+ ```
+
+ ## GitHub Actions
+
+ 1. Copy `workflows/deploy-on-pr.yml` to your repo's `.github/workflows/` directory
+ 2. Set the `MNEMONIC` secret in your repo settings (the mnemonic that owns your `.dot` domain)
+ 3. Customize the **Build** step for your framework (Vite, Next.js, etc.)
+ 4. Push and watch the deploy
+
+ The template workflow:
+ - Deploys on push to main and on PRs
+ - Uses `nick-fields/retry@v3` for automatic retries on transient failures
+ - Posts a comment on PRs with the live URL
+ - Generates domain names as `<repo>-<branch>00.dot`
+
+ ## Programmatic API
+
+ ```javascript
+ import { deploy, DotNS } from "@paritytech/bulletin-deploy";
+
+ // Deploy a directory
+ const result = await deploy("./dist", "my-app00.dot");
+ console.log(result.cid, result.domainName);
+ ```
+
+ ## Environment Variables
+
+ | Variable | Default | Description |
+ |---|---|---|
+ | `BULLETIN_RPC` | `wss://paseo-bulletin-rpc.polkadot.io` | Bulletin chain WebSocket RPC |
+ | `BULLETIN_POOL_SIZE` | `10` | Number of pool accounts to derive |
+ | `BULLETIN_POOL_MNEMONIC` | Dev phrase (Alice) | Mnemonic for pool account derivation |
+ | `MNEMONIC` | Dev phrase | DotNS domain owner mnemonic |
+ | `BULLETIN_DEPLOY_TELEMETRY` | `1` (enabled) | Set to `0` to disable Sentry telemetry |
+ | `IPFS_CID` | _(none)_ | Skip storage, use pre-existing CID |
+ | `GITHUB_OUTPUT` | _(none)_ | GitHub Actions output file (set automatically in CI) |
+
+ ## How It Works
+
+ ```
+ Build output ──> IPFS merkleize ──> CAR file ──> Chunk upload ──> DotNS
+   ./dist           ipfs add          .car         Bulletin        Asset Hub
+                                                   Storage         Registry
+ ```
+
+ 1. **Merkleize** your build directory with IPFS to produce a content-addressed CAR file
+ 2. **Chunk and upload** the CAR file to Bulletin's TransactionStorage (1MB chunks, 2 per batch)
+ 3. **Store the DAG root** that links all chunks together under a single CID
+ 4. **Register or update** your `.dot` domain on Asset Hub with the new contenthash
+
+ Your site is immediately accessible at `https://your-domain.dot.li`.
+
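The chunking step above can be sketched in plain Node.js. This `chunk` helper is illustrative only (it is not the package's internal implementation); the 1 MB default mirrors the chunk size stated in step 2.

```javascript
// Illustrative sketch: split a CAR buffer into fixed-size pieces for upload.
// The 1 MB default matches the chunk size described in "How It Works".
function chunk(buffer, size = 1024 * 1024) {
  const pieces = [];
  for (let offset = 0; offset < buffer.length; offset += size) {
    // subarray() creates views, so no data is copied.
    pieces.push(buffer.subarray(offset, offset + size));
  }
  return pieces;
}

const car = Buffer.alloc(2_500_000); // stand-in for a ~2.4 MB CAR file
const pieces = chunk(car);
console.log(pieces.length); // 3 pieces: 1 MB + 1 MB + remainder
```

Each piece is then stored as its own transaction, and the root DAG node ties the piece CIDs back together (step 3).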
+ ## Resilience Features
+
+ ### Chunk-level retry
+
+ Each batch of chunks is submitted with `Promise.allSettled`. Failed chunks are retried up to 3 times with a fresh nonce, serialized to avoid nonce conflicts. If a chunk fails all retries, the deploy aborts with a clear error.
+
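A minimal sketch of that batch-with-retry pattern, assuming a hypothetical `submit` function that sends a single chunk. The real implementation also assigns a fresh nonce per attempt; this sketch only shows the control flow.

```javascript
// Illustrative sketch of batch submission with per-chunk retry.
// `submit` is a hypothetical stand-in for the real transaction call.
async function submitBatch(chunks, submit, maxRetries = 3) {
  // Submit the whole batch concurrently; allSettled never rejects,
  // so we can inspect each chunk's outcome individually.
  const results = await Promise.allSettled(chunks.map((c) => submit(c)));
  const stored = [];
  for (let i = 0; i < results.length; i++) {
    if (results[i].status === "fulfilled") {
      stored.push(results[i].value);
      continue;
    }
    // Retry failed chunks serially, so each attempt is isolated.
    let ok = false;
    for (let attempt = 1; attempt <= maxRetries && !ok; attempt++) {
      try {
        stored.push(await submit(chunks[i]));
        ok = true;
      } catch {
        // swallow and retry until maxRetries is exhausted
      }
    }
    if (!ok) throw new Error(`chunk ${i} failed after ${maxRetries} retries`);
  }
  return stored;
}
```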
+ ### Account pool
+
+ Instead of using a single account for all storage transactions, bulletin-deploy derives a pool of accounts from a mnemonic. Each deploy selects the account with the most remaining authorization capacity. This prevents nonce conflicts between concurrent deploys and distributes the storage authorization budget.
+
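The selection rule can be sketched as a simple reduce over the pool. The account shape here (`path`, `remainingBytes`) is an assumption for illustration, not the package's internal structure.

```javascript
// Illustrative sketch: pick the pool account with the most remaining
// authorization capacity. Field names are assumptions.
function selectAccount(pool) {
  return pool.reduce((best, acct) =>
    acct.remainingBytes > best.remainingBytes ? acct : best);
}

const pool = [
  { path: "//pool/0", remainingBytes: 12 * 1024 * 1024 },
  { path: "//pool/1", remainingBytes: 96 * 1024 * 1024 },
  { path: "//pool/2", remainingBytes: 40 * 1024 * 1024 },
];
console.log(selectAccount(pool).path); // "//pool/1"
```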
+ ### Auto-authorization
+
+ When a pool account's authorization drops below thresholds (50 transactions or 50MB), bulletin-deploy automatically tops it up by submitting an `authorize_account` transaction from Alice.
+
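That threshold check amounts to an either-or comparison against the two limits stated above; the field names are assumptions for illustration.

```javascript
// Illustrative sketch of the top-up decision: re-authorize when remaining
// capacity falls below either threshold (50 transactions or 50 MB).
const TX_THRESHOLD = 50;
const BYTES_THRESHOLD = 50 * 1024 * 1024;

function needsTopUp(acct) {
  return acct.remainingTxs < TX_THRESHOLD || acct.remainingBytes < BYTES_THRESHOLD;
}

// Plenty of transactions left, but under 50 MB of storage budget:
console.log(needsTopUp({ remainingTxs: 120, remainingBytes: 10 * 1024 * 1024 })); // true
```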
+ ## Telemetry
+
+ Sentry telemetry is enabled by default for deploy observability. Set `BULLETIN_DEPLOY_TELEMETRY=0` to disable it.
+
+ What's tracked:
+ - Deploy duration and success/failure
+ - Storage phase timing (merkleize, chunk upload, root node)
+ - DotNS phase timing (registration, contenthash update)
+ - Pool account selection
+ - Chunk retry counts
+ - Source metadata (repo, branch, PR number, CI vs local)
+
+ Dashboard: https://polkadot-community-foundation.sentry.io/dashboards/92523/
+
+ ## Troubleshooting
+
+ | Error | Solution |
+ |---|---|
+ | `Payment` or authorization error | Pool account needs storage authorization — auto-authorization should handle this |
+ | `Stale` or dropped from best chain | Bulletin chain reorg. Automatic retry handles this. |
+ | `IPFS CLI not installed` | Install Kubo: `brew install ipfs && ipfs init` |
+ | `CommitmentNotFound` | DotNS timing issue during registration. Retry the deploy. |
+ | `All pool accounts exhausted` | Auto-authorization will top up the best available account |
+ | `File exceeds 8MB limit` | File is automatically chunked. This shouldn't appear for directories. |
+ | `fetchNonce timed out` | Bulletin RPC may be down. Check endpoint or try a different one. |
+
+ ## Configuration for Different Chains
+
+ By default, bulletin-deploy targets the **Paseo testnet**:
+ - Bulletin: `wss://paseo-bulletin-rpc.polkadot.io`
+ - Asset Hub: `wss://asset-hub-paseo.dotters.network`
+
+ To point at a different chain, set the `BULLETIN_RPC` environment variable:
+
+ ```bash
+ # Local development
+ BULLETIN_RPC=ws://127.0.0.1:9944 bulletin-deploy ./dist my-app00.dot
+
+ # Custom endpoint
+ BULLETIN_RPC=wss://your-bulletin-rpc.example.com bulletin-deploy ./dist my-app00.dot
+ ```
+
+ The Asset Hub RPC endpoints are configured in `src/dotns.js` and support automatic failover across multiple providers.
package/benchmark.js ADDED
@@ -0,0 +1,163 @@
+ import * as fs from "fs";
+ import * as path from "path";
+ import { createClient as createPolkadotClient } from "polkadot-api";
+ import { Binary } from "@polkadot-api/substrate-bindings";
+ import { getPolkadotSigner } from "polkadot-api/signer";
+ import { getWsProvider } from "polkadot-api/ws-provider";
+ import { withPolkadotSdkCompat } from "polkadot-api/polkadot-sdk-compat";
+ import { sr25519CreateDerive } from "@polkadot-labs/hdkd";
+ import { DEV_PHRASE, entropyToMiniSecret, mnemonicToEntropy } from "@polkadot-labs/hdkd-helpers";
+ import * as dagPB from "@ipld/dag-pb";
+ import { UnixFS } from "ipfs-unixfs";
+ import { fetchNonce, TX_TIMEOUT_MS } from "./src/dotns.js";
+ import { createCID, merkleize, chunk } from "./src/deploy.js";
+
+ const BULLETIN_RPC = process.env.BULLETIN_RPC || "wss://paseo-bulletin-rpc.polkadot.io";
+ const ALICE_SS58 = "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY";
+
+ function toHashingEnum(mhCode) {
+   switch (mhCode) {
+     case 0x12: return { type: "Sha2_256", value: undefined };
+     case 0xb220: return { type: "Blake2b256", value: undefined };
+     default: throw new Error(`Unhandled: ${mhCode}`);
+   }
+ }
+
+ function createSigner() {
+   const entropy = mnemonicToEntropy(DEV_PHRASE);
+   const miniSecret = entropyToMiniSecret(entropy);
+   const derive = sr25519CreateDerive(miniSecret);
+   const keyPair = derive("//Alice");
+   return getPolkadotSigner(keyPair.publicKey, "Sr25519", keyPair.sign);
+ }
+
+ function watchTx(tx, signer, txOpts, label) {
+   return new Promise((resolve, reject) => {
+     let settled = false;
+     const settle = (fn) => (...args) => {
+       if (!settled) {
+         settled = true;
+         clearTimeout(timer);
+         try { sub.unsubscribe(); } catch {}
+         fn(...args);
+       }
+     };
+     const timer = setTimeout(() => settle(reject)(new Error(`${label} timed out`)), TX_TIMEOUT_MS);
+     const sub = tx.signSubmitAndWatch(signer, txOpts).subscribe({
+       next: (event) => {
+         if (event.type === "txBestBlocksState" && event.found) {
+           if (event.ok) settle(resolve)();
+           else settle(reject)(new Error(`${label} dispatch error`));
+         }
+       },
+       error: (e) => settle(reject)(new Error(`${label}: ${e?.message?.slice(0, 100) || e}`)),
+     });
+   });
+ }
+
+ async function benchmarkConfig(carBuffer, chunkSize, batchSize, label) {
+   const chunks = chunk(carBuffer, chunkSize);
+   const rounds = Math.ceil(chunks.length / batchSize);
+   const concurrentMB = (chunkSize * batchSize / 1024 / 1024).toFixed(1);
+
+   console.log(`\n--- ${label} | ${chunks.length} chunks | ${rounds} rounds | ${concurrentMB}MB/round ---`);
+
+   const client = createPolkadotClient(withPolkadotSdkCompat(getWsProvider(BULLETIN_RPC)));
+   const unsafeApi = client.getUnsafeApi();
+   const signer = createSigner();
+
+   const start = Date.now();
+   try {
+     const startNonce = await fetchNonce(BULLETIN_RPC, ALICE_SS58);
+     const stored = [];
+
+     for (let b = 0; b < chunks.length; b += batchSize) {
+       const batch = chunks.slice(b, b + batchSize);
+       const batchStart = Date.now();
+       const batchPromises = batch.map((chunkData, j) => {
+         const i = b + j;
+         const nonce = startNonce + i;
+         const hashCode = 0x12;
+         const cid = createCID(chunkData, 0x55, hashCode);
+         const tx = unsafeApi.tx.TransactionStorage.store_with_cid_config({
+           cid: { codec: BigInt(0x55), hashing: toHashingEnum(hashCode) },
+           data: Binary.fromBytes(chunkData),
+         });
+         return watchTx(tx, signer, { mortality: { mortal: true, period: 256 }, nonce }, `chunk-${i}`)
+           .then(() => ({ cid, len: chunkData.length }));
+       });
+       const batchResults = await Promise.all(batchPromises);
+       stored.push(...batchResults);
+       const batchElapsed = ((Date.now() - batchStart) / 1000).toFixed(1);
+       console.log(`  Batch ${Math.floor(b / batchSize) + 1}/${rounds}: ${batch.length} chunks in ${batchElapsed}s`);
+     }
+
+     // Store root DAG node
+     const fileData = new UnixFS({ type: "file", blockSizes: stored.map((c) => BigInt(c.len)) });
+     const dagNode = dagPB.prepare({ Data: fileData.marshal(), Links: stored.map((c) => ({ Name: "", Tsize: c.len, Hash: c.cid })) });
+     const dagBytes = dagPB.encode(dagNode);
+     const rootCid = createCID(dagBytes, 0x70, 0x12);
+     const rootNonce = startNonce + chunks.length;
+     const rootTx = unsafeApi.tx.TransactionStorage.store_with_cid_config({
+       cid: { codec: BigInt(0x70), hashing: toHashingEnum(0x12) },
+       data: Binary.fromBytes(dagBytes),
+     });
+     await watchTx(rootTx, signer, { mortality: { mortal: true, period: 256 }, nonce: rootNonce }, "root");
+
+     const elapsed = ((Date.now() - start) / 1000).toFixed(1);
+     console.log(`  ROOT OK | Total: ${elapsed}s`);
+     client.destroy();
+     return { label, chunks: chunks.length, rounds, concurrentMB, elapsed, success: true };
+   } catch (e) {
+     const elapsed = ((Date.now() - start) / 1000).toFixed(1);
+     const error = e.message?.slice(0, 80);
+     console.log(`  FAILED (${elapsed}s): ${error}`);
+     client.destroy();
+     return { label, chunks: chunks.length, rounds, concurrentMB, elapsed, success: false, error };
+   }
+ }
+
+ // --- Main ---
+ const buildDir = process.argv[2];
+ if (!buildDir || !fs.existsSync(buildDir)) {
+   console.error("Usage: node benchmark.js <build-dir>");
+   process.exit(1);
+ }
+
+ console.log("Preparing CAR file...");
+ const carPath = path.join(path.dirname(buildDir), `${path.basename(buildDir)}.car`);
+ const { cid: ipfsCid } = await merkleize(buildDir, carPath);
+ const carBuffer = fs.readFileSync(carPath);
+ console.log(`CAR: ${(carBuffer.length / 1024 / 1024).toFixed(2)} MB | CID: ${ipfsCid}\n`);
+
+ const configs = [
+   [2 * 1024 * 1024, 1, "2MB x1"],
+   [1 * 1024 * 1024, 2, "1MB x2"],
+   [1 * 1024 * 1024, 1, "1MB x1"],
+   [512 * 1024, 3, "512K x3"],
+   [512 * 1024, 4, "512K x4"],
+   [256 * 1024, 4, "256K x4"],
+   [256 * 1024, 6, "256K x6"],
+ ];
+
+ const results = [];
+ for (const [chunkSize, batchSize, label] of configs) {
+   const r = await benchmarkConfig(carBuffer, chunkSize, batchSize, label);
+   results.push(r);
+   // Wait between runs
+   if (label !== configs[configs.length - 1][2]) {
+     console.log("  Cooling 10s...");
+     await new Promise((resolve) => setTimeout(resolve, 10000));
+   }
+ }
+
+ console.log(`\n\n${"=".repeat(70)}`);
+ console.log("RESULTS");
+ console.log(`${"=".repeat(70)}`);
+ console.log(`| Config     | Chunks | Rounds | MB/round | Time    | Result |`);
+ console.log(`|------------|--------|--------|----------|---------|--------|`);
+ for (const r of results) {
+   const status = r.success ? "OK" : "FAIL";
+   console.log(`| ${r.label.padEnd(10)} | ${String(r.chunks).padEnd(6)} | ${String(r.rounds).padEnd(6)} | ${r.concurrentMB.padEnd(8)} | ${(r.elapsed + "s").padEnd(7)} | ${status.padEnd(6)} |`);
+ }
+ if (results.some((r) => !r.success)) {
+   console.log("\nFailed configs:");
+   for (const r of results.filter((r) => !r.success)) {
+     console.log(`  ${r.label}: ${r.error}`);
+   }
+ }
+
+ process.exit(0);
package/bin/bulletin-deploy ADDED
@@ -0,0 +1,62 @@
+ #!/usr/bin/env node
+
+ import { deploy } from "../src/deploy.js";
+ import { bootstrapPool } from "../src/pool.js";
+ import * as fs from "fs";
+
+ const args = process.argv.slice(2);
+
+ const flags = {};
+ const positional = [];
+ for (let i = 0; i < args.length; i++) {
+   if (args[i] === "--bootstrap") { flags.bootstrap = true; }
+   else if (args[i] === "--pool-size") { flags.poolSize = parseInt(args[++i], 10); }
+   else if (args[i] === "--mnemonic") { flags.mnemonic = args[++i]; }
+   else if (args[i] === "--rpc") { flags.rpc = args[++i]; }
+   else if (args[i] === "--help" || args[i] === "-h") { flags.help = true; }
+   else { positional.push(args[i]); }
+ }
+
+ if (flags.help || (positional.length === 0 && !flags.bootstrap)) {
+   console.log(`Usage:
+   bulletin-deploy <build-dir> <domain.dot>   Deploy an app
+   bulletin-deploy --bootstrap                Initialize pool accounts
+
+ Options:
+   --mnemonic "..."   DotNS owner mnemonic (or set MNEMONIC env var)
+   --rpc wss://...    Bulletin RPC (or set BULLETIN_RPC env var)
+   --pool-size N      Number of pool accounts (default: 10)
+   --help             Show this help`);
+   process.exit(0);
+ }
+
+ if (flags.rpc) process.env.BULLETIN_RPC = flags.rpc;
+ if (flags.poolSize) process.env.BULLETIN_POOL_SIZE = String(flags.poolSize);
+ if (flags.mnemonic) process.env.MNEMONIC = flags.mnemonic;
+
+ try {
+   if (flags.bootstrap) {
+     const rpc = process.env.BULLETIN_RPC || "wss://paseo-bulletin-rpc.polkadot.io";
+     const poolSize = parseInt(process.env.BULLETIN_POOL_SIZE || "10", 10);
+     await bootstrapPool(rpc, poolSize);
+   } else {
+     const [buildDir, domain] = positional;
+     if (!buildDir) { console.error("Error: build directory required"); process.exit(1); }
+     if (!domain) { console.error("Error: domain required (e.g. my-app.dot)"); process.exit(1); }
+     if (!fs.existsSync(buildDir)) { console.error(`Error: ${buildDir} does not exist`); process.exit(1); }
+
+     const result = await deploy(buildDir, domain);
+
+     const output = process.env.GITHUB_OUTPUT;
+     if (output) {
+       fs.appendFileSync(output, `cid=${result.cid}\n`);
+       fs.appendFileSync(output, `domain=${result.domainName}\n`);
+     }
+
+     console.log(`CID: ${result.cid}`);
+     console.log(`Domain: ${result.domainName}`);
+   }
+   process.exit(0);
+ } catch (error) {
+   console.error("Deployment failed:", error.message);
+   process.exit(1);
+ }
package/package.json ADDED
@@ -0,0 +1,41 @@
+ {
+   "name": "bulletin-deploy",
+   "version": "0.4.1",
+   "private": false,
+   "repository": {
+     "type": "git",
+     "url": "https://github.com/paritytech/bulletin-deploy.git"
+   },
+   "publishConfig": {
+     "registry": "https://registry.npmjs.org",
+     "access": "public"
+   },
+   "type": "module",
+   "bin": {
+     "bulletin-deploy": "./bin/bulletin-deploy"
+   },
+   "exports": {
+     ".": "./src/index.js"
+   },
+   "scripts": {
+     "test": "node --test test/test.js",
+     "benchmark": "node benchmark.js"
+   },
+   "dependencies": {
+     "@ipld/dag-pb": "^4.1.3",
+     "@sentry/node": "^9.14.0",
+     "@noble/hashes": "^1.7.2",
+     "@polkadot-api/substrate-bindings": "^0.16.5",
+     "@polkadot-labs/hdkd": "^0.0.25",
+     "@polkadot-labs/hdkd-helpers": "^0.0.26",
+     "@polkadot/keyring": "^13.0.0",
+     "@polkadot/util-crypto": "^13.0.0",
+     "ipfs-unixfs": "^11.2.0",
+     "multiformats": "^13.4.1",
+     "polkadot-api": "^1.23.1",
+     "viem": "^2.30.5"
+   },
+   "engines": {
+     "node": ">=22"
+   }
+ }