@aztec/archiver 4.0.0-nightly.20250907 → 4.0.0-nightly.20260108
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +27 -6
- package/dest/archiver/archiver.d.ts +127 -84
- package/dest/archiver/archiver.d.ts.map +1 -1
- package/dest/archiver/archiver.js +1150 -382
- package/dest/archiver/archiver_store.d.ts +122 -45
- package/dest/archiver/archiver_store.d.ts.map +1 -1
- package/dest/archiver/archiver_store_test_suite.d.ts +1 -1
- package/dest/archiver/archiver_store_test_suite.d.ts.map +1 -1
- package/dest/archiver/archiver_store_test_suite.js +2013 -343
- package/dest/archiver/config.d.ts +7 -20
- package/dest/archiver/config.d.ts.map +1 -1
- package/dest/archiver/config.js +21 -5
- package/dest/archiver/errors.d.ts +25 -1
- package/dest/archiver/errors.d.ts.map +1 -1
- package/dest/archiver/errors.js +37 -0
- package/dest/archiver/index.d.ts +2 -2
- package/dest/archiver/index.d.ts.map +1 -1
- package/dest/archiver/instrumentation.d.ts +5 -3
- package/dest/archiver/instrumentation.d.ts.map +1 -1
- package/dest/archiver/instrumentation.js +14 -0
- package/dest/archiver/kv_archiver_store/block_store.d.ts +83 -15
- package/dest/archiver/kv_archiver_store/block_store.d.ts.map +1 -1
- package/dest/archiver/kv_archiver_store/block_store.js +396 -73
- package/dest/archiver/kv_archiver_store/contract_class_store.d.ts +2 -2
- package/dest/archiver/kv_archiver_store/contract_class_store.d.ts.map +1 -1
- package/dest/archiver/kv_archiver_store/contract_class_store.js +1 -1
- package/dest/archiver/kv_archiver_store/contract_instance_store.d.ts +2 -2
- package/dest/archiver/kv_archiver_store/contract_instance_store.d.ts.map +1 -1
- package/dest/archiver/kv_archiver_store/kv_archiver_store.d.ts +51 -55
- package/dest/archiver/kv_archiver_store/kv_archiver_store.d.ts.map +1 -1
- package/dest/archiver/kv_archiver_store/kv_archiver_store.js +82 -46
- package/dest/archiver/kv_archiver_store/log_store.d.ts +12 -16
- package/dest/archiver/kv_archiver_store/log_store.d.ts.map +1 -1
- package/dest/archiver/kv_archiver_store/log_store.js +149 -84
- package/dest/archiver/kv_archiver_store/message_store.d.ts +6 -5
- package/dest/archiver/kv_archiver_store/message_store.d.ts.map +1 -1
- package/dest/archiver/kv_archiver_store/message_store.js +15 -14
- package/dest/archiver/l1/bin/retrieve-calldata.d.ts +3 -0
- package/dest/archiver/l1/bin/retrieve-calldata.d.ts.map +1 -0
- package/dest/archiver/l1/bin/retrieve-calldata.js +149 -0
- package/dest/archiver/l1/calldata_retriever.d.ts +112 -0
- package/dest/archiver/l1/calldata_retriever.d.ts.map +1 -0
- package/dest/archiver/l1/calldata_retriever.js +471 -0
- package/dest/archiver/l1/data_retrieval.d.ts +90 -0
- package/dest/archiver/l1/data_retrieval.d.ts.map +1 -0
- package/dest/archiver/l1/data_retrieval.js +331 -0
- package/dest/archiver/l1/debug_tx.d.ts +19 -0
- package/dest/archiver/l1/debug_tx.d.ts.map +1 -0
- package/dest/archiver/l1/debug_tx.js +73 -0
- package/dest/archiver/l1/spire_proposer.d.ts +70 -0
- package/dest/archiver/l1/spire_proposer.d.ts.map +1 -0
- package/dest/archiver/l1/spire_proposer.js +157 -0
- package/dest/archiver/l1/trace_tx.d.ts +97 -0
- package/dest/archiver/l1/trace_tx.d.ts.map +1 -0
- package/dest/archiver/l1/trace_tx.js +91 -0
- package/dest/archiver/l1/types.d.ts +12 -0
- package/dest/archiver/l1/types.d.ts.map +1 -0
- package/dest/archiver/l1/types.js +3 -0
- package/dest/archiver/l1/validate_trace.d.ts +29 -0
- package/dest/archiver/l1/validate_trace.d.ts.map +1 -0
- package/dest/archiver/l1/validate_trace.js +150 -0
- package/dest/archiver/structs/data_retrieval.d.ts +1 -1
- package/dest/archiver/structs/inbox_message.d.ts +4 -4
- package/dest/archiver/structs/inbox_message.d.ts.map +1 -1
- package/dest/archiver/structs/inbox_message.js +6 -5
- package/dest/archiver/structs/published.d.ts +2 -2
- package/dest/archiver/structs/published.d.ts.map +1 -1
- package/dest/archiver/validation.d.ts +10 -4
- package/dest/archiver/validation.d.ts.map +1 -1
- package/dest/archiver/validation.js +66 -44
- package/dest/factory.d.ts +4 -6
- package/dest/factory.d.ts.map +1 -1
- package/dest/factory.js +5 -4
- package/dest/index.d.ts +2 -2
- package/dest/index.d.ts.map +1 -1
- package/dest/index.js +1 -1
- package/dest/rpc/index.d.ts +2 -2
- package/dest/test/index.d.ts +1 -1
- package/dest/test/mock_archiver.d.ts +16 -8
- package/dest/test/mock_archiver.d.ts.map +1 -1
- package/dest/test/mock_archiver.js +19 -14
- package/dest/test/mock_l1_to_l2_message_source.d.ts +7 -6
- package/dest/test/mock_l1_to_l2_message_source.d.ts.map +1 -1
- package/dest/test/mock_l1_to_l2_message_source.js +10 -9
- package/dest/test/mock_l2_block_source.d.ts +31 -20
- package/dest/test/mock_l2_block_source.d.ts.map +1 -1
- package/dest/test/mock_l2_block_source.js +85 -18
- package/dest/test/mock_structs.d.ts +3 -2
- package/dest/test/mock_structs.d.ts.map +1 -1
- package/dest/test/mock_structs.js +9 -8
- package/package.json +18 -17
- package/src/archiver/archiver.ts +990 -481
- package/src/archiver/archiver_store.ts +141 -44
- package/src/archiver/archiver_store_test_suite.ts +2114 -331
- package/src/archiver/config.ts +30 -35
- package/src/archiver/errors.ts +64 -0
- package/src/archiver/index.ts +1 -1
- package/src/archiver/instrumentation.ts +19 -2
- package/src/archiver/kv_archiver_store/block_store.ts +541 -83
- package/src/archiver/kv_archiver_store/contract_class_store.ts +1 -1
- package/src/archiver/kv_archiver_store/contract_instance_store.ts +1 -1
- package/src/archiver/kv_archiver_store/kv_archiver_store.ts +107 -67
- package/src/archiver/kv_archiver_store/log_store.ts +209 -99
- package/src/archiver/kv_archiver_store/message_store.ts +21 -18
- package/src/archiver/l1/README.md +98 -0
- package/src/archiver/l1/bin/retrieve-calldata.ts +182 -0
- package/src/archiver/l1/calldata_retriever.ts +641 -0
- package/src/archiver/l1/data_retrieval.ts +512 -0
- package/src/archiver/l1/debug_tx.ts +99 -0
- package/src/archiver/l1/spire_proposer.ts +160 -0
- package/src/archiver/l1/trace_tx.ts +128 -0
- package/src/archiver/l1/types.ts +13 -0
- package/src/archiver/l1/validate_trace.ts +211 -0
- package/src/archiver/structs/inbox_message.ts +8 -8
- package/src/archiver/structs/published.ts +1 -1
- package/src/archiver/validation.ts +86 -32
- package/src/factory.ts +6 -7
- package/src/index.ts +1 -1
- package/src/test/fixtures/debug_traceTransaction-multicall3.json +88 -0
- package/src/test/fixtures/debug_traceTransaction-multiplePropose.json +153 -0
- package/src/test/fixtures/debug_traceTransaction-proxied.json +122 -0
- package/src/test/fixtures/trace_transaction-multicall3.json +65 -0
- package/src/test/fixtures/trace_transaction-multiplePropose.json +319 -0
- package/src/test/fixtures/trace_transaction-proxied.json +128 -0
- package/src/test/fixtures/trace_transaction-randomRevert.json +216 -0
- package/src/test/mock_archiver.ts +22 -16
- package/src/test/mock_l1_to_l2_message_source.ts +10 -9
- package/src/test/mock_l2_block_source.ts +114 -27
- package/src/test/mock_structs.ts +10 -9
- package/dest/archiver/data_retrieval.d.ts +0 -78
- package/dest/archiver/data_retrieval.d.ts.map +0 -1
- package/dest/archiver/data_retrieval.js +0 -354
- package/src/archiver/data_retrieval.ts +0 -535
package/src/archiver/kv_archiver_store/log_store.ts
@@ -1,9 +1,11 @@
-import { INITIAL_L2_BLOCK_NUM
-import
+import { INITIAL_L2_BLOCK_NUM } from '@aztec/constants';
+import { BlockNumber } from '@aztec/foundation/branded-types';
+import { Fr } from '@aztec/foundation/curves/bn254';
 import { createLogger } from '@aztec/foundation/log';
 import { BufferReader, numToUInt32BE } from '@aztec/foundation/serialize';
 import type { AztecAsyncKVStore, AztecAsyncMap } from '@aztec/kv-store';
-import type {
+import type { AztecAddress } from '@aztec/stdlib/aztec-address';
+import { L2BlockHash, L2BlockNew } from '@aztec/stdlib/block';
 import type { GetContractClassLogsResponse, GetPublicLogsResponse } from '@aztec/stdlib/interfaces/client';
 import {
   ContractClassLog,
@@ -11,8 +13,9 @@ import {
   ExtendedPublicLog,
   type LogFilter,
   LogId,
-  PrivateLog,
   PublicLog,
+  type SiloedTag,
+  Tag,
   TxScopedL2Log,
 } from '@aztec/stdlib/logs';
 
@@ -22,9 +25,12 @@ import type { BlockStore } from './block_store.js';
  * A store for logs
  */
 export class LogStore {
-
-  #
-
+  // `tag` --> private logs
+  #privateLogsByTag: AztecAsyncMap<string, Buffer[]>;
+  // `{contractAddress}_${tag}` --> public logs
+  #publicLogsByContractAndTag: AztecAsyncMap<string, Buffer[]>;
+  #privateLogKeysByBlock: AztecAsyncMap<number, string[]>;
+  #publicLogKeysByBlock: AztecAsyncMap<number, string[]>;
   #publicLogsByBlock: AztecAsyncMap<number, Buffer>;
   #contractClassLogsByBlock: AztecAsyncMap<number, Buffer>;
   #logsMaxPageSize: number;
@@ -35,43 +41,107 @@ export class LogStore {
     private blockStore: BlockStore,
     logsMaxPageSize: number = 1000,
   ) {
-    this.#
-    this.#
-    this.#
+    this.#privateLogsByTag = db.openMap('archiver_private_tagged_logs_by_tag');
+    this.#publicLogsByContractAndTag = db.openMap('archiver_public_tagged_logs_by_tag');
+    this.#privateLogKeysByBlock = db.openMap('archiver_private_log_keys_by_block');
+    this.#publicLogKeysByBlock = db.openMap('archiver_public_log_keys_by_block');
     this.#publicLogsByBlock = db.openMap('archiver_public_logs_by_block');
     this.#contractClassLogsByBlock = db.openMap('archiver_contract_class_logs_by_block');
 
     this.#logsMaxPageSize = logsMaxPageSize;
   }
 
-
-
-
-
-
-
+  /**
+   * Extracts tagged logs from a single block, grouping them into private and public maps.
+   *
+   * @param block - The L2 block to extract logs from.
+   * @returns An object containing the private and public tagged logs for the block.
+   */
+  #extractTaggedLogsFromBlock(block: L2BlockNew) {
+    // SiloedTag (as string) -> array of log buffers.
+    const privateTaggedLogs = new Map<string, Buffer[]>();
+    // "{contractAddress}_{tag}" (as string) -> array of log buffers.
+    const publicTaggedLogs = new Map<string, Buffer[]>();
+
+    block.body.txEffects.forEach(txEffect => {
       const txHash = txEffect.txHash;
-      const dataStartIndexForTx = dataStartIndexForBlock + txIndex * MAX_NOTE_HASHES_PER_TX;
 
-      txEffect.privateLogs.forEach(
+      txEffect.privateLogs.forEach(log => {
+        // Private logs use SiloedTag (already siloed by kernel)
         const tag = log.fields[0];
         this.#log.debug(`Found private log with tag ${tag.toString()} in block ${block.number}`);
 
-        const currentLogs =
-        currentLogs.push(
-
+        const currentLogs = privateTaggedLogs.get(tag.toString()) ?? [];
+        currentLogs.push(
+          new TxScopedL2Log(
+            txHash,
+            block.number,
+            block.timestamp,
+            log.getEmittedFields(),
+            txEffect.noteHashes,
+            txEffect.nullifiers[0],
+          ).toBuffer(),
+        );
+        privateTaggedLogs.set(tag.toString(), currentLogs);
       });
 
-      txEffect.publicLogs.forEach(
+      txEffect.publicLogs.forEach(log => {
+        // Public logs use Tag directly (not siloed) and are stored with contract address
         const tag = log.fields[0];
-
-
-
-
-
+        const contractAddress = log.contractAddress;
+        const key = `${contractAddress.toString()}_${tag.toString()}`;
+        this.#log.debug(
+          `Found public log with tag ${tag.toString()} from contract ${contractAddress.toString()} in block ${block.number}`,
+        );
+
+        const currentLogs = publicTaggedLogs.get(key) ?? [];
+        currentLogs.push(
+          new TxScopedL2Log(
+            txHash,
+            block.number,
+            block.timestamp,
+            log.getEmittedFields(),
+            txEffect.noteHashes,
+            txEffect.nullifiers[0],
+          ).toBuffer(),
+        );
+        publicTaggedLogs.set(key, currentLogs);
       });
     });
-
+
+    return { privateTaggedLogs, publicTaggedLogs };
+  }
+
+  /**
+   * Extracts and aggregates tagged logs from a list of blocks.
+   * @param blocks - The blocks to extract logs from.
+   * @returns A map from tag (as string) to an array of serialized private logs belonging to that tag, and a map from
+   * "{contractAddress}_{tag}" (as string) to an array of serialized public logs belonging to that key.
+   */
+  #extractTaggedLogs(blocks: L2BlockNew[]): {
+    privateTaggedLogs: Map<string, Buffer[]>;
+    publicTaggedLogs: Map<string, Buffer[]>;
+  } {
+    const taggedLogsInBlocks = blocks.map(block => this.#extractTaggedLogsFromBlock(block));
+
+    // Now we merge the maps from each block into a single map.
+    const privateTaggedLogs = taggedLogsInBlocks.reduce((acc, { privateTaggedLogs }) => {
+      for (const [tag, logs] of privateTaggedLogs.entries()) {
+        const currentLogs = acc.get(tag) ?? [];
+        acc.set(tag, currentLogs.concat(logs));
+      }
+      return acc;
+    }, new Map<string, Buffer[]>());
+
+    const publicTaggedLogs = taggedLogsInBlocks.reduce((acc, { publicTaggedLogs }) => {
+      for (const [key, logs] of publicTaggedLogs.entries()) {
+        const currentLogs = acc.get(key) ?? [];
+        acc.set(key, currentLogs.concat(logs));
+      }
+      return acc;
+    }, new Map<string, Buffer[]>());
+
+    return { privateTaggedLogs, publicTaggedLogs };
   }
 
   /**
@@ -79,43 +149,59 @@ export class LogStore {
   * @param blocks - The blocks for which to add the logs.
   * @returns True if the operation is successful.
   */
-  addLogs(blocks:
-    const
-
-
-
-        const currentLogs = acc.get(tag) ?? [];
-        acc.set(tag, currentLogs.concat(logs));
-      }
-      return acc;
-    }, new Map());
-    const tagsToUpdate = Array.from(taggedLogsToAdd.keys());
+  addLogs(blocks: L2BlockNew[]): Promise<boolean> {
+    const { privateTaggedLogs, publicTaggedLogs } = this.#extractTaggedLogs(blocks);
+
+    const keysOfPrivateLogsToUpdate = Array.from(privateTaggedLogs.keys());
+    const keysOfPublicLogsToUpdate = Array.from(publicTaggedLogs.keys());
 
     return this.db.transactionAsync(async () => {
-      const
-
+      const currentPrivateTaggedLogs = await Promise.all(
+        keysOfPrivateLogsToUpdate.map(async key => ({
+          tag: key,
+          logBuffers: await this.#privateLogsByTag.getAsync(key),
+        })),
       );
-
+      currentPrivateTaggedLogs.forEach(taggedLogBuffer => {
        if (taggedLogBuffer.logBuffers && taggedLogBuffer.logBuffers.length > 0) {
-
+          privateTaggedLogs.set(
            taggedLogBuffer.tag,
-            taggedLogBuffer.logBuffers!.concat(
+            taggedLogBuffer.logBuffers!.concat(privateTaggedLogs.get(taggedLogBuffer.tag)!),
          );
        }
      });
+
+      const currentPublicTaggedLogs = await Promise.all(
+        keysOfPublicLogsToUpdate.map(async key => ({
+          key,
+          logBuffers: await this.#publicLogsByContractAndTag.getAsync(key),
+        })),
+      );
+      currentPublicTaggedLogs.forEach(taggedLogBuffer => {
+        if (taggedLogBuffer.logBuffers && taggedLogBuffer.logBuffers.length > 0) {
+          publicTaggedLogs.set(
+            taggedLogBuffer.key,
+            taggedLogBuffer.logBuffers!.concat(publicTaggedLogs.get(taggedLogBuffer.key)!),
+          );
+        }
+      });
+
      for (const block of blocks) {
-        const
-
-
-
+        const blockHash = await block.hash();
+
+        const privateTagsInBlock: string[] = [];
+        for (const [tag, logs] of privateTaggedLogs.entries()) {
+          await this.#privateLogsByTag.set(tag, logs);
+          privateTagsInBlock.push(tag);
        }
-        await this.#
+        await this.#privateLogKeysByBlock.set(block.number, privateTagsInBlock);
 
-        const
-
-          .
-          .
-
+        const publicKeysInBlock: string[] = [];
+        for (const [key, logs] of publicTaggedLogs.entries()) {
+          await this.#publicLogsByContractAndTag.set(key, logs);
+          publicKeysInBlock.push(key);
+        }
+        await this.#publicLogKeysByBlock.set(block.number, publicKeysInBlock);
 
        const publicLogsInBlock = block.body.txEffects
          .map((txEffect, txIndex) =>
@@ -137,72 +223,82 @@ export class LogStore {
          )
          .flat();
 
-        await this.#publicLogsByBlock.set(block.number,
-        await this.#contractClassLogsByBlock.set(
+        await this.#publicLogsByBlock.set(block.number, this.#packWithBlockHash(blockHash, publicLogsInBlock));
+        await this.#contractClassLogsByBlock.set(
+          block.number,
+          this.#packWithBlockHash(blockHash, contractClassLogsInBlock),
+        );
      }
 
      return true;
    });
  }
 
-
+  #packWithBlockHash(blockHash: Fr, data: Buffer<ArrayBufferLike>[]): Buffer<ArrayBufferLike> {
+    return Buffer.concat([blockHash.toBuffer(), ...data]);
+  }
+
+  #unpackBlockHash(reader: BufferReader): L2BlockHash {
+    const blockHash = reader.remainingBytes() > 0 ? reader.readObject(Fr) : undefined;
+
+    if (!blockHash) {
+      throw new Error('Failed to read block hash from log entry buffer');
+    }
+
+    return L2BlockHash.fromField(blockHash);
+  }
+
+  deleteLogs(blocks: L2BlockNew[]): Promise<boolean> {
    return this.db.transactionAsync(async () => {
-
-
-
-
-
-
-
-
+      await Promise.all(
+        blocks.map(async block => {
+          // Delete private logs
+          const privateKeys = (await this.#privateLogKeysByBlock.getAsync(block.number)) ?? [];
+          await Promise.all(privateKeys.map(tag => this.#privateLogsByTag.delete(tag)));
+
+          // Delete public logs
+          const publicKeys = (await this.#publicLogKeysByBlock.getAsync(block.number)) ?? [];
+          await Promise.all(publicKeys.map(key => this.#publicLogsByContractAndTag.delete(key)));
+        }),
+      );
 
      await Promise.all(
        blocks.map(block =>
          Promise.all([
-            this.#privateLogsByBlock.delete(block.number),
            this.#publicLogsByBlock.delete(block.number),
-            this.#
+            this.#privateLogKeysByBlock.delete(block.number),
+            this.#publicLogKeysByBlock.delete(block.number),
            this.#contractClassLogsByBlock.delete(block.number),
          ]),
        ),
      );
 
-      await Promise.all(tagsToDelete.map(tag => this.#logsByTag.delete(tag.toString())));
      return true;
    });
  }
 
  /**
-   *
-   *
-   * @param limit - The maximum number of blocks to retrieve logs from.
-   * @returns An array of private logs from the specified range of blocks.
+   * Gets all private logs that match any of the `tags`. For each tag, an array of matching logs is returned. An empty
+   * array implies no logs match that tag.
   */
-  async
-    const logs =
-
-
-    while (reader.remainingBytes() > 0) {
-      logs.push(reader.readObject(PrivateLog));
-    }
-  }
-    return logs;
+  async getPrivateLogsByTags(tags: SiloedTag[]): Promise<TxScopedL2Log[][]> {
+    const logs = await Promise.all(tags.map(tag => this.#privateLogsByTag.getAsync(tag.toString())));
+
+    return logs.map(logBuffers => logBuffers?.map(logBuffer => TxScopedL2Log.fromBuffer(logBuffer)) ?? []);
  }
 
  /**
-   * Gets all logs that match any of the
-   *
-   * @returns For each received tag, an array of matching logs is returned. An empty array implies no logs match
-   * that tag.
+   * Gets all public logs that match any of the `tags` from the specified contract. For each tag, an array of matching
+   * logs is returned. An empty array implies no logs match that tag.
   */
-  async
-
-
-
-
-      logBuffers => logBuffers?.slice(0, limitPerTag).map(logBuffer => TxScopedL2Log.fromBuffer(logBuffer)) ?? [],
+  async getPublicLogsByTagsFromContract(contractAddress: AztecAddress, tags: Tag[]): Promise<TxScopedL2Log[][]> {
+    const logs = await Promise.all(
+      tags.map(tag => {
+        const key = `${contractAddress.toString()}_${tag.value.toString()}`;
+        return this.#publicLogsByContractAndTag.getAsync(key);
+      }),
    );
+    return logs.map(logBuffers => logBuffers?.map(logBuffer => TxScopedL2Log.fromBuffer(logBuffer)) ?? []);
  }
 
  /**
@@ -233,6 +329,9 @@ export class LogStore {
    const buffer = (await this.#publicLogsByBlock.getAsync(blockNumber)) ?? Buffer.alloc(0);
    const publicLogsInBlock: [PublicLog[]] = [[]];
    const reader = new BufferReader(buffer);
+
+    const blockHash = this.#unpackBlockHash(reader);
+
    while (reader.remainingBytes() > 0) {
      const indexOfTx = reader.readNumber();
      const numLogsInTx = reader.readNumber();
@@ -245,7 +344,7 @@ export class LogStore {
    const txLogs = publicLogsInBlock[txIndex];
 
    const logs: ExtendedPublicLog[] = [];
-    const maxLogsHit = this.#accumulateLogs(logs, blockNumber, txIndex, txLogs, filter);
+    const maxLogsHit = this.#accumulateLogs(logs, blockNumber, blockHash, txIndex, txLogs, filter);
 
    return { logs, maxLogsHit };
  }
@@ -268,6 +367,9 @@ export class LogStore {
    loopOverBlocks: for await (const [blockNumber, logBuffer] of this.#publicLogsByBlock.entriesAsync({ start, end })) {
      const publicLogsInBlock: [PublicLog[]] = [[]];
      const reader = new BufferReader(logBuffer);
+
+      const blockHash = this.#unpackBlockHash(reader);
+
      while (reader.remainingBytes() > 0) {
        const indexOfTx = reader.readNumber();
        const numLogsInTx = reader.readNumber();
@@ -278,7 +380,7 @@ export class LogStore {
      }
      for (let txIndex = filter.afterLog?.txIndex ?? 0; txIndex < publicLogsInBlock.length; txIndex++) {
        const txLogs = publicLogsInBlock[txIndex];
-        maxLogsHit = this.#accumulateLogs(logs, blockNumber, txIndex, txLogs, filter);
+        maxLogsHit = this.#accumulateLogs(logs, blockNumber, blockHash, txIndex, txLogs, filter);
        if (maxLogsHit) {
          this.#log.debug(`Max logs hit at block ${blockNumber}`);
          break loopOverBlocks;
@@ -317,6 +419,8 @@ export class LogStore {
    const contractClassLogsInBlock: [ContractClassLog[]] = [[]];
 
    const reader = new BufferReader(contractClassLogsBuffer);
+    const blockHash = this.#unpackBlockHash(reader);
+
    while (reader.remainingBytes() > 0) {
      const indexOfTx = reader.readNumber();
      const numLogsInTx = reader.readNumber();
@@ -329,7 +433,7 @@ export class LogStore {
    const txLogs = contractClassLogsInBlock[txIndex];
 
    const logs: ExtendedContractClassLog[] = [];
-    const maxLogsHit = this.#accumulateLogs(logs, blockNumber, txIndex, txLogs, filter);
+    const maxLogsHit = this.#accumulateLogs(logs, blockNumber, blockHash, txIndex, txLogs, filter);
 
    return { logs, maxLogsHit };
  }
@@ -355,6 +459,7 @@ export class LogStore {
    })) {
      const contractClassLogsInBlock: [ContractClassLog[]] = [[]];
      const reader = new BufferReader(logBuffer);
+      const blockHash = this.#unpackBlockHash(reader);
      while (reader.remainingBytes() > 0) {
        const indexOfTx = reader.readNumber();
        const numLogsInTx = reader.readNumber();
@@ -365,7 +470,7 @@ export class LogStore {
      }
      for (let txIndex = filter.afterLog?.txIndex ?? 0; txIndex < contractClassLogsInBlock.length; txIndex++) {
        const txLogs = contractClassLogsInBlock[txIndex];
-        maxLogsHit = this.#accumulateLogs(logs, blockNumber, txIndex, txLogs, filter);
+        maxLogsHit = this.#accumulateLogs(logs, blockNumber, blockHash, txIndex, txLogs, filter);
        if (maxLogsHit) {
          this.#log.debug(`Max logs hit at block ${blockNumber}`);
          break loopOverBlocks;
@@ -379,9 +484,10 @@ export class LogStore {
  #accumulateLogs(
    results: (ExtendedContractClassLog | ExtendedPublicLog)[],
    blockNumber: number,
+    blockHash: L2BlockHash,
    txIndex: number,
    txLogs: (ContractClassLog | PublicLog)[],
-    filter: LogFilter,
+    filter: LogFilter = {},
  ): boolean {
    let maxLogsHit = false;
    let logIndex = typeof filter.afterLog?.logIndex === 'number' ? filter.afterLog.logIndex + 1 : 0;
@@ -389,9 +495,13 @@ export class LogStore {
      const log = txLogs[logIndex];
      if (!filter.contractAddress || log.contractAddress.equals(filter.contractAddress)) {
        if (log instanceof ContractClassLog) {
-          results.push(
+          results.push(
+            new ExtendedContractClassLog(new LogId(BlockNumber(blockNumber), blockHash, txIndex, logIndex), log),
+          );
+        } else if (log instanceof PublicLog) {
+          results.push(new ExtendedPublicLog(new LogId(BlockNumber(blockNumber), blockHash, txIndex, logIndex), log));
        } else {
-
+          throw new Error('Unknown log type');
        }
 
        if (results.length >= this.#logsMaxPageSize) {
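The `#packWithBlockHash`/`#unpackBlockHash` pair in the hunk above prefixes each per-block log buffer with a serialized block hash, so readers can recover the hash for `LogId` construction without a second store lookup. A minimal standalone sketch of the same framing, using plain Node `Buffer`s and hypothetical helper names in place of the `Fr`/`BufferReader` utilities from `@aztec/foundation`:

```typescript
// Sketch of the block-hash framing used by LogStore. Helper names are
// illustrative; the real code serializes an Fr field via BufferReader.
function packWithBlockHash(blockHash: Buffer, data: Buffer[]): Buffer {
  if (blockHash.length !== 32) {
    throw new Error('Block hash must be 32 bytes');
  }
  // Layout: [32-byte block hash][log buffers...]
  return Buffer.concat([blockHash, ...data]);
}

function unpackBlockHash(packed: Buffer): { blockHash: Buffer; rest: Buffer } {
  if (packed.length < 32) {
    throw new Error('Failed to read block hash from log entry buffer');
  }
  return { blockHash: packed.subarray(0, 32), rest: packed.subarray(32) };
}

const hash = Buffer.alloc(32, 1);
const packed = packWithBlockHash(hash, [Buffer.from('log1'), Buffer.from('log2')]);
const { blockHash, rest } = unpackBlockHash(packed);
// blockHash equals the original hash; rest is the concatenated log payload.
```

Note that, as in the real store, the hash is written once per block rather than once per log, so the overhead is a constant 32 bytes per block entry.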
@@ -1,6 +1,7 @@
|
|
|
1
|
-
import type { L1BlockId } from '@aztec/ethereum';
|
|
1
|
+
import type { L1BlockId } from '@aztec/ethereum/l1-types';
|
|
2
|
+
import { CheckpointNumber } from '@aztec/foundation/branded-types';
|
|
2
3
|
import { Buffer16, Buffer32 } from '@aztec/foundation/buffer';
|
|
3
|
-
import { Fr } from '@aztec/foundation/
|
|
4
|
+
import { Fr } from '@aztec/foundation/curves/bn254';
|
|
4
5
|
import { toArray } from '@aztec/foundation/iterable';
|
|
5
6
|
import { createLogger } from '@aztec/foundation/log';
|
|
6
7
|
import { BufferReader, serializeToBuffer } from '@aztec/foundation/serialize';
|
|
@@ -113,20 +114,20 @@ export class MessageStore {
|
|
|
113
114
|
);
|
|
114
115
|
}
|
|
115
116
|
|
|
116
|
-
// Check index corresponds to the
|
|
117
|
-
const [expectedStart, expectedEnd] = InboxLeaf.
|
|
117
|
+
// Check index corresponds to the checkpoint number.
|
|
118
|
+
const [expectedStart, expectedEnd] = InboxLeaf.indexRangeForCheckpoint(message.checkpointNumber);
|
|
118
119
|
if (message.index < expectedStart || message.index >= expectedEnd) {
|
|
119
120
|
throw new MessageStoreError(
|
|
120
121
|
`Invalid index ${message.index} for incoming L1 to L2 message ${message.leaf.toString()} ` +
|
|
121
|
-
`at
|
|
122
|
+
`at checkpoint ${message.checkpointNumber} (expected value in range [${expectedStart}, ${expectedEnd}))`,
|
|
122
123
|
message,
|
|
123
124
|
);
|
|
124
125
|
}
|
|
125
126
|
|
|
126
|
-
// Check there are no gaps in the indices within the same
|
|
127
|
+
// Check there are no gaps in the indices within the same checkpoint.
|
|
127
128
|
if (
|
|
128
129
|
lastMessage &&
|
|
129
|
-
message.
|
|
130
|
+
message.checkpointNumber === lastMessage.checkpointNumber &&
|
|
130
131
|
message.index !== lastMessage.index + 1n
|
|
131
132
|
) {
|
|
132
133
|
throw new MessageStoreError(
|
|
@@ -138,12 +139,12 @@ export class MessageStore {
|
|
|
138
139
|
|
|
139
140
|
// Check the first message in a block has the correct index.
|
|
140
141
|
if (
|
|
141
|
-
(!lastMessage || message.
|
|
142
|
-
message.index !==
|
|
142
|
+
(!lastMessage || message.checkpointNumber > lastMessage.checkpointNumber) &&
|
|
143
|
+
message.index !== expectedStart
|
|
143
144
|
) {
|
|
144
145
|
throw new MessageStoreError(
|
|
145
|
-
`Message ${message.leaf.toString()} for
|
|
146
|
-
`${message.index} (expected ${
|
|
146
|
+
`Message ${message.leaf.toString()} for checkpoint ${message.checkpointNumber} has wrong index ` +
|
|
147
|
+
`${message.index} (expected ${expectedStart})`,
|
|
147
148
|
message,
|
|
148
149
|
);
|
|
149
150
|
}
|
|
@@ -184,10 +185,10 @@ export class MessageStore {
|
|
|
184
185
|
return msg ? deserializeInboxMessage(msg) : undefined;
|
|
185
186
|
}
|
|
186
187
|
|
|
187
|
-
public async getL1ToL2Messages(
|
|
188
|
+
public async getL1ToL2Messages(checkpointNumber: CheckpointNumber): Promise<Fr[]> {
|
|
188
189
|
const messages: Fr[] = [];
|
|
189
190
|
|
|
190
|
-
const [startIndex, endIndex] = InboxLeaf.
|
|
191
|
+
const [startIndex, endIndex] = InboxLeaf.indexRangeForCheckpoint(checkpointNumber);
|
|
191
192
|
let lastIndex = startIndex - 1n;
|
|
192
193
|
|
|
193
194
|
for await (const msgBuffer of this.#l1ToL2Messages.valuesAsync({
|
|
@@ -195,8 +196,10 @@ export class MessageStore {
       end: this.indexToKey(endIndex),
     })) {
       const msg = deserializeInboxMessage(msgBuffer);
-      if (msg.
-        throw new Error(
+      if (msg.checkpointNumber !== checkpointNumber) {
+        throw new Error(
+          `L1 to L2 message with index ${msg.index} has invalid checkpoint number ${msg.checkpointNumber}`,
+        );
       } else if (msg.index !== lastIndex + 1n) {
         throw new Error(`Expected L1 to L2 message with index ${lastIndex + 1n} but got ${msg.index}`);
       }
@@ -232,9 +235,9 @@ export class MessageStore {
     });
   }
 
-  public
-  this.#log.debug(`Deleting L1 to L2 messages up to target
-  const startIndex = InboxLeaf.
+  public rollbackL1ToL2MessagesToCheckpoint(targetCheckpointNumber: CheckpointNumber): Promise<void> {
+    this.#log.debug(`Deleting L1 to L2 messages up to target checkpoint ${targetCheckpointNumber}`);
+    const startIndex = InboxLeaf.smallestIndexForCheckpoint(CheckpointNumber(targetCheckpointNumber + 1));
     return this.removeL1ToL2Messages(startIndex);
   }
 
@@ -0,0 +1,98 @@
+# Archiver L1 Data Retrieval
+
+Modules and classes to handle data retrieval from L1 for the archiver.
+
+## Calldata Retriever
+
+The sequencer publisher bundles multiple operations into a single multicall3 transaction for gas
+efficiency. A typical transaction includes:
+
+1. Attestation invalidations (if needed): `invalidateBadAttestation`, `invalidateInsufficientAttestations`
+2. Block proposal: `propose` (exactly one per transaction to the rollup contract)
+3. Governance and slashing (if needed): votes, payload creation/execution
+
+The archiver needs to extract the `propose` calldata from these bundled transactions to reconstruct
+L2 blocks. This class needs to handle the case where the transaction was submitted via multicall3,
+as well as alternative ways of submitting the `propose` call that other clients might use.
+
+### Multicall3 Validation and Decoding
+
+First, attempt to decode the transaction as a multicall3 `aggregate3` call, with validation:
+
+- Check whether the transaction is to the multicall3 address (`0xcA11bde05977b3631167028862bE2a173976CA11`)
+- Decode as `aggregate3(Call3[] calldata calls)`
+- Allow calls only to known addresses and methods (rollup, governance, slashing contracts, etc.)
+- Find the single `propose` call to the rollup contract
+- Verify that exactly one `propose` call exists
+- Extract and return the propose calldata
+
+This step handles the common case efficiently without requiring expensive trace or debug RPC calls.
+Any validation failure triggers a fallback to the next step.
+
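The decode-and-extract step above can be sketched as follows. The `Call3` shape mirrors multicall3's `aggregate3` tuple, but the rollup address and `propose` selector used below are illustrative placeholders, not the package's actual constants:

```typescript
// Sketch: given the decoded aggregate3 calls, find the single `propose` call
// addressed to the rollup contract and return its calldata. The selector and
// address values passed in are assumptions for illustration.
interface Call3 {
  target: string;
  allowFailure: boolean;
  callData: string; // 0x-prefixed hex, starting with the 4-byte selector
}

function extractProposeCalldata(
  calls: Call3[],
  rollupAddress: string,
  proposeSelector: string,
): string {
  const proposeCalls = calls.filter(
    c =>
      c.target.toLowerCase() === rollupAddress.toLowerCase() &&
      c.callData.toLowerCase().startsWith(proposeSelector.toLowerCase()),
  );
  // Exactly one propose call per transaction is expected; anything else means
  // we cannot trust this decoding path and must fall back to the next strategy.
  if (proposeCalls.length !== 1) {
    throw new Error(`Expected exactly one propose call, found ${proposeCalls.length}`);
  }
  return proposeCalls[0].callData;
}
```

Throwing on zero or multiple matches is what triggers the fallback behavior described above.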
+### Direct Propose Call
+
+Second, attempt to decode the transaction as a direct `propose` call to the rollup contract:
+
+- Check whether the transaction is to the rollup address
+- Decode as a `propose` function call
+- Verify that the function is indeed `propose`
+- Return the transaction input as the propose calldata
+
+This handles scenarios where clients submit transactions directly to the rollup contract without
+using multicall3 for bundling. Any validation failure triggers a fallback to the next step.
+
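This check can be sketched roughly as below; the transaction shape and selector handling are illustrative assumptions, not the package's actual types:

```typescript
// Sketch: treat the transaction input as a direct propose call when the tx is
// addressed to the rollup and its 4-byte selector matches `propose`.
// Returning undefined signals a fallback to the next decoding strategy.
interface L1Tx {
  to: string | null; // null for contract-creation transactions
  input: string;     // 0x-prefixed calldata
}

function tryDirectPropose(
  tx: L1Tx,
  rollupAddress: string,
  proposeSelector: string,
): string | undefined {
  if (tx.to === null || tx.to.toLowerCase() !== rollupAddress.toLowerCase()) {
    return undefined; // not addressed to the rollup
  }
  if (!tx.input.toLowerCase().startsWith(proposeSelector.toLowerCase())) {
    return undefined; // some other rollup function
  }
  return tx.input; // the whole input is the propose calldata
}
```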
+### Spire Proposer Call
+
+Given existing attempts to route the call via the Spire proposer, we also check whether the tx is sent
+`to` the proposer's known address, and if so, we try decoding it as either a multicall3 call or a direct
+call to the rollup contract.
+
+As with the multicall3 check, we verify that there are no other calls in the Spire proposer payload, so
+we are absolutely sure that the only call is the successful one to the rollup. Any extraneous call would
+imply an unexpected path to calling `propose` in the rollup contract; since we cannot verify that the
+calldata arguments we extracted are the correct ones (see the section below), we would not know for sure
+which call succeeded, and therefore which calldata to process.
+
+Furthermore, since the Spire proposer is upgradeable, we check that its implementation has not changed
+before decoding. As usual, any validation failure triggers a fallback to the next step.
+
+### Verifying Multicall3 Arguments
+
+**This is NOT implemented, for simplicity's sake.**
+
+If the checks above don't hold, such as when there are multiple calls to `propose`, then we cannot
+reliably extract the `propose` calldata from the multicall3 arguments alone. We can attempt a best-effort
+approach where we try every `propose` call we see and validate it against on-chain data. Note that these
+same strategies apply if we were to obtain the calldata from another source.
+
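The best-effort approach could be sketched as trying each candidate calldata against a validation predicate; `validate` is a hypothetical callback standing in for any of the on-chain checks described in the subsections below:

```typescript
// Sketch: when several candidate propose calls exist, probe each one against
// an on-chain validation predicate and keep the first that checks out.
// `validate` is a hypothetical async callback (e.g. a TempBlockLog or archive
// root comparison); nothing here is the package's actual API.
async function pickValidPropose(
  candidates: string[],
  validate: (calldata: string) => Promise<boolean>,
): Promise<string | undefined> {
  for (const calldata of candidates) {
    if (await validate(calldata)) {
      return calldata;
    }
  }
  return undefined; // none validated; fall through to the trace-based fallback
}
```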
+#### TempBlockLog Verification
+
+Read the stored `TempBlockLog` for the L2 block number from L1 and verify that it matches our decoded header hash,
+since the `TempBlockLog` stores the hash of the proposed block header, the payload commitment, and the attestations.
+
+However, the `TempBlockLog` is only stored temporarily and deleted once the block is proven, so this method only
+works for recent blocks, not for historical data syncing.
+
+#### Archive Verification
+
+Verify that the archive root in the decoded propose call is correct with regard to the block header. This requires
+hashing the block header we have retrieved, inserting it into the archive tree, and checking the resulting root
+against the one we got from L1.
+
+However, this requires the archiver to keep a reference to world state, which is not the case in the current
+system.
+
+#### Emit Commitments in Rollup Contract
+
+Modify the rollup contract to emit commitments to the block header in the `L2BlockProposed` event, allowing us to
+easily verify the calldata we obtained against the emitted event.
+
+However, modifying the rollup contract is out of scope for this change; we can implement this approach in `v2`.
+
+### Debug and Trace Transaction Fallback
+
+Lastly, we use the L1 node's trace/debug RPC methods to definitively identify the single successful `propose` call
+within the tx. We can then extract the exact calldata that reached the `propose` function in the rollup contract.
+
+This approach requires access to a debug-enabled L1 node and may be more resource-intensive, so we only use it as
+a fallback when the previous steps fail, which should be rare in practice.
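Walking the call tree returned by `debug_traceTransaction` with Geth's `callTracer` could look roughly like this; the frame fields follow the callTracer JSON output, while the traversal and the address/selector arguments are a sketch:

```typescript
// Sketch: recursively search a Geth callTracer frame tree for the successful
// call into the rollup contract whose input starts with the propose selector.
// Frames with an `error` field reverted and are skipped (though their children
// are still visited, since sibling sub-calls may have succeeded).
interface CallFrame {
  type: string;        // CALL, DELEGATECALL, STATICCALL, ...
  to?: string;
  input: string;
  error?: string;      // present if this frame reverted
  calls?: CallFrame[]; // sub-calls, if any
}

function findProposeFrame(
  frame: CallFrame,
  rollupAddress: string,
  proposeSelector: string,
): CallFrame | undefined {
  const isMatch =
    frame.error === undefined &&
    frame.to?.toLowerCase() === rollupAddress.toLowerCase() &&
    frame.input.toLowerCase().startsWith(proposeSelector.toLowerCase());
  if (isMatch) {
    return frame;
  }
  for (const child of frame.calls ?? []) {
    const found = findProposeFrame(child, rollupAddress, proposeSelector);
    if (found) {
      return found;
    }
  }
  return undefined;
}
```

The matched frame's `input` is the exact calldata that reached `propose`, regardless of how the call was routed.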