@aztec/archiver 0.0.1-commit.d3ec352c → 0.0.1-commit.fcb71a6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (122)
  1. package/dest/archiver/archiver.d.ts +84 -70
  2. package/dest/archiver/archiver.d.ts.map +1 -1
  3. package/dest/archiver/archiver.js +439 -228
  4. package/dest/archiver/archiver_store.d.ts +95 -43
  5. package/dest/archiver/archiver_store.d.ts.map +1 -1
  6. package/dest/archiver/archiver_store_test_suite.d.ts +1 -1
  7. package/dest/archiver/archiver_store_test_suite.d.ts.map +1 -1
  8. package/dest/archiver/archiver_store_test_suite.js +1847 -366
  9. package/dest/archiver/config.d.ts +5 -4
  10. package/dest/archiver/config.d.ts.map +1 -1
  11. package/dest/archiver/config.js +10 -3
  12. package/dest/archiver/errors.d.ts +25 -1
  13. package/dest/archiver/errors.d.ts.map +1 -1
  14. package/dest/archiver/errors.js +37 -0
  15. package/dest/archiver/index.d.ts +2 -2
  16. package/dest/archiver/index.d.ts.map +1 -1
  17. package/dest/archiver/instrumentation.d.ts +3 -1
  18. package/dest/archiver/instrumentation.d.ts.map +1 -1
  19. package/dest/archiver/instrumentation.js +11 -0
  20. package/dest/archiver/kv_archiver_store/block_store.d.ts +50 -18
  21. package/dest/archiver/kv_archiver_store/block_store.d.ts.map +1 -1
  22. package/dest/archiver/kv_archiver_store/block_store.js +320 -84
  23. package/dest/archiver/kv_archiver_store/contract_class_store.d.ts +2 -2
  24. package/dest/archiver/kv_archiver_store/contract_class_store.d.ts.map +1 -1
  25. package/dest/archiver/kv_archiver_store/contract_class_store.js +1 -1
  26. package/dest/archiver/kv_archiver_store/contract_instance_store.d.ts +2 -2
  27. package/dest/archiver/kv_archiver_store/contract_instance_store.d.ts.map +1 -1
  28. package/dest/archiver/kv_archiver_store/kv_archiver_store.d.ts +40 -51
  29. package/dest/archiver/kv_archiver_store/kv_archiver_store.d.ts.map +1 -1
  30. package/dest/archiver/kv_archiver_store/kv_archiver_store.js +65 -48
  31. package/dest/archiver/kv_archiver_store/log_store.d.ts +12 -16
  32. package/dest/archiver/kv_archiver_store/log_store.d.ts.map +1 -1
  33. package/dest/archiver/kv_archiver_store/log_store.js +148 -84
  34. package/dest/archiver/kv_archiver_store/message_store.d.ts +6 -5
  35. package/dest/archiver/kv_archiver_store/message_store.d.ts.map +1 -1
  36. package/dest/archiver/kv_archiver_store/message_store.js +15 -14
  37. package/dest/archiver/l1/bin/retrieve-calldata.d.ts +3 -0
  38. package/dest/archiver/l1/bin/retrieve-calldata.d.ts.map +1 -0
  39. package/dest/archiver/l1/bin/retrieve-calldata.js +149 -0
  40. package/dest/archiver/l1/calldata_retriever.d.ts +112 -0
  41. package/dest/archiver/l1/calldata_retriever.d.ts.map +1 -0
  42. package/dest/archiver/l1/calldata_retriever.js +471 -0
  43. package/dest/archiver/l1/data_retrieval.d.ts +90 -0
  44. package/dest/archiver/l1/data_retrieval.d.ts.map +1 -0
  45. package/dest/archiver/{data_retrieval.js → l1/data_retrieval.js} +50 -106
  46. package/dest/archiver/l1/debug_tx.d.ts +19 -0
  47. package/dest/archiver/l1/debug_tx.d.ts.map +1 -0
  48. package/dest/archiver/l1/debug_tx.js +73 -0
  49. package/dest/archiver/l1/spire_proposer.d.ts +70 -0
  50. package/dest/archiver/l1/spire_proposer.d.ts.map +1 -0
  51. package/dest/archiver/l1/spire_proposer.js +157 -0
  52. package/dest/archiver/l1/trace_tx.d.ts +97 -0
  53. package/dest/archiver/l1/trace_tx.d.ts.map +1 -0
  54. package/dest/archiver/l1/trace_tx.js +91 -0
  55. package/dest/archiver/l1/types.d.ts +12 -0
  56. package/dest/archiver/l1/types.d.ts.map +1 -0
  57. package/dest/archiver/l1/types.js +3 -0
  58. package/dest/archiver/l1/validate_trace.d.ts +29 -0
  59. package/dest/archiver/l1/validate_trace.d.ts.map +1 -0
  60. package/dest/archiver/l1/validate_trace.js +150 -0
  61. package/dest/archiver/structs/inbox_message.d.ts +4 -4
  62. package/dest/archiver/structs/inbox_message.d.ts.map +1 -1
  63. package/dest/archiver/structs/inbox_message.js +6 -6
  64. package/dest/archiver/structs/published.d.ts +1 -2
  65. package/dest/archiver/structs/published.d.ts.map +1 -1
  66. package/dest/factory.d.ts +1 -1
  67. package/dest/factory.js +1 -1
  68. package/dest/index.d.ts +2 -2
  69. package/dest/index.d.ts.map +1 -1
  70. package/dest/index.js +1 -1
  71. package/dest/test/mock_archiver.d.ts +4 -5
  72. package/dest/test/mock_archiver.d.ts.map +1 -1
  73. package/dest/test/mock_archiver.js +5 -9
  74. package/dest/test/mock_l1_to_l2_message_source.d.ts +5 -6
  75. package/dest/test/mock_l1_to_l2_message_source.d.ts.map +1 -1
  76. package/dest/test/mock_l1_to_l2_message_source.js +7 -11
  77. package/dest/test/mock_l2_block_source.d.ts +11 -4
  78. package/dest/test/mock_l2_block_source.d.ts.map +1 -1
  79. package/dest/test/mock_l2_block_source.js +18 -17
  80. package/dest/test/mock_structs.d.ts +3 -2
  81. package/dest/test/mock_structs.d.ts.map +1 -1
  82. package/dest/test/mock_structs.js +9 -9
  83. package/package.json +15 -14
  84. package/src/archiver/archiver.ts +567 -290
  85. package/src/archiver/archiver_store.ts +104 -42
  86. package/src/archiver/archiver_store_test_suite.ts +1895 -347
  87. package/src/archiver/config.ts +15 -10
  88. package/src/archiver/errors.ts +64 -0
  89. package/src/archiver/index.ts +1 -1
  90. package/src/archiver/instrumentation.ts +14 -0
  91. package/src/archiver/kv_archiver_store/block_store.ts +435 -95
  92. package/src/archiver/kv_archiver_store/contract_class_store.ts +1 -1
  93. package/src/archiver/kv_archiver_store/contract_instance_store.ts +1 -1
  94. package/src/archiver/kv_archiver_store/kv_archiver_store.ts +81 -66
  95. package/src/archiver/kv_archiver_store/log_store.ts +208 -99
  96. package/src/archiver/kv_archiver_store/message_store.ts +21 -18
  97. package/src/archiver/l1/README.md +98 -0
  98. package/src/archiver/l1/bin/retrieve-calldata.ts +182 -0
  99. package/src/archiver/l1/calldata_retriever.ts +641 -0
  100. package/src/archiver/{data_retrieval.ts → l1/data_retrieval.ts} +96 -161
  101. package/src/archiver/l1/debug_tx.ts +99 -0
  102. package/src/archiver/l1/spire_proposer.ts +160 -0
  103. package/src/archiver/l1/trace_tx.ts +128 -0
  104. package/src/archiver/l1/types.ts +13 -0
  105. package/src/archiver/l1/validate_trace.ts +211 -0
  106. package/src/archiver/structs/inbox_message.ts +7 -8
  107. package/src/archiver/structs/published.ts +0 -1
  108. package/src/factory.ts +1 -1
  109. package/src/index.ts +1 -1
  110. package/src/test/fixtures/debug_traceTransaction-multicall3.json +88 -0
  111. package/src/test/fixtures/debug_traceTransaction-multiplePropose.json +153 -0
  112. package/src/test/fixtures/debug_traceTransaction-proxied.json +122 -0
  113. package/src/test/fixtures/trace_transaction-multicall3.json +65 -0
  114. package/src/test/fixtures/trace_transaction-multiplePropose.json +319 -0
  115. package/src/test/fixtures/trace_transaction-proxied.json +128 -0
  116. package/src/test/fixtures/trace_transaction-randomRevert.json +216 -0
  117. package/src/test/mock_archiver.ts +6 -11
  118. package/src/test/mock_l1_to_l2_message_source.ts +6 -11
  119. package/src/test/mock_l2_block_source.ts +22 -18
  120. package/src/test/mock_structs.ts +10 -10
  121. package/dest/archiver/data_retrieval.d.ts +0 -80
  122. package/dest/archiver/data_retrieval.d.ts.map +0 -1
package/src/archiver/kv_archiver_store/log_store.ts

@@ -1,10 +1,11 @@
-import { INITIAL_L2_BLOCK_NUM, MAX_NOTE_HASHES_PER_TX } from '@aztec/constants';
+import { INITIAL_L2_BLOCK_NUM } from '@aztec/constants';
 import { BlockNumber } from '@aztec/foundation/branded-types';
-import type { Fr } from '@aztec/foundation/fields';
+import { Fr } from '@aztec/foundation/curves/bn254';
 import { createLogger } from '@aztec/foundation/log';
 import { BufferReader, numToUInt32BE } from '@aztec/foundation/serialize';
 import type { AztecAsyncKVStore, AztecAsyncMap } from '@aztec/kv-store';
-import type { L2Block } from '@aztec/stdlib/block';
+import type { AztecAddress } from '@aztec/stdlib/aztec-address';
+import { L2BlockHash, L2BlockNew } from '@aztec/stdlib/block';
 import type { GetContractClassLogsResponse, GetPublicLogsResponse } from '@aztec/stdlib/interfaces/client';
 import {
   ContractClassLog,
@@ -12,8 +13,9 @@ import {
   ExtendedPublicLog,
   type LogFilter,
   LogId,
-  PrivateLog,
   PublicLog,
+  type SiloedTag,
+  Tag,
   TxScopedL2Log,
 } from '@aztec/stdlib/logs';
 
@@ -23,9 +25,12 @@ import type { BlockStore } from './block_store.js';
  * A store for logs
  */
 export class LogStore {
-  #logsByTag: AztecAsyncMap<string, Buffer[]>;
-  #logTagsByBlock: AztecAsyncMap<number, string[]>;
-  #privateLogsByBlock: AztecAsyncMap<number, Buffer>;
+  // `tag` --> private logs
+  #privateLogsByTag: AztecAsyncMap<string, Buffer[]>;
+  // `{contractAddress}_${tag}` --> public logs
+  #publicLogsByContractAndTag: AztecAsyncMap<string, Buffer[]>;
+  #privateLogKeysByBlock: AztecAsyncMap<number, string[]>;
+  #publicLogKeysByBlock: AztecAsyncMap<number, string[]>;
   #publicLogsByBlock: AztecAsyncMap<number, Buffer>;
   #contractClassLogsByBlock: AztecAsyncMap<number, Buffer>;
   #logsMaxPageSize: number;
@@ -36,43 +41,107 @@ export class LogStore {
     private blockStore: BlockStore,
     logsMaxPageSize: number = 1000,
   ) {
-    this.#logsByTag = db.openMap('archiver_tagged_logs_by_tag');
-    this.#logTagsByBlock = db.openMap('archiver_log_tags_by_block');
-    this.#privateLogsByBlock = db.openMap('archiver_private_logs_by_block');
+    this.#privateLogsByTag = db.openMap('archiver_private_tagged_logs_by_tag');
+    this.#publicLogsByContractAndTag = db.openMap('archiver_public_tagged_logs_by_tag');
+    this.#privateLogKeysByBlock = db.openMap('archiver_private_log_keys_by_block');
+    this.#publicLogKeysByBlock = db.openMap('archiver_public_log_keys_by_block');
     this.#publicLogsByBlock = db.openMap('archiver_public_logs_by_block');
     this.#contractClassLogsByBlock = db.openMap('archiver_contract_class_logs_by_block');
 
     this.#logsMaxPageSize = logsMaxPageSize;
   }
 
-  #extractTaggedLogs(block: L2Block) {
-    const taggedLogs = new Map<string, Buffer[]>();
-    const dataStartIndexForBlock =
-      block.header.state.partial.noteHashTree.nextAvailableLeafIndex -
-      block.body.txEffects.length * MAX_NOTE_HASHES_PER_TX;
-    block.body.txEffects.forEach((txEffect, txIndex) => {
+  /**
+   * Extracts tagged logs from a single block, grouping them into private and public maps.
+   *
+   * @param block - The L2 block to extract logs from.
+   * @returns An object containing the private and public tagged logs for the block.
+   */
+  #extractTaggedLogsFromBlock(block: L2BlockNew) {
+    // SiloedTag (as string) -> array of log buffers.
+    const privateTaggedLogs = new Map<string, Buffer[]>();
+    // "{contractAddress}_{tag}" (as string) -> array of log buffers.
+    const publicTaggedLogs = new Map<string, Buffer[]>();
+
+    block.body.txEffects.forEach(txEffect => {
       const txHash = txEffect.txHash;
-      const dataStartIndexForTx = dataStartIndexForBlock + txIndex * MAX_NOTE_HASHES_PER_TX;
 
-      txEffect.privateLogs.forEach((log, logIndex) => {
+      txEffect.privateLogs.forEach(log => {
+        // Private logs use SiloedTag (already siloed by kernel)
        const tag = log.fields[0];
        this.#log.debug(`Found private log with tag ${tag.toString()} in block ${block.number}`);
 
-        const currentLogs = taggedLogs.get(tag.toString()) ?? [];
-        currentLogs.push(new TxScopedL2Log(txHash, dataStartIndexForTx, logIndex, block.number, log).toBuffer());
-        taggedLogs.set(tag.toString(), currentLogs);
+        const currentLogs = privateTaggedLogs.get(tag.toString()) ?? [];
+        currentLogs.push(
+          new TxScopedL2Log(
+            txHash,
+            block.number,
+            block.timestamp,
+            log.getEmittedFields(),
+            txEffect.noteHashes,
+            txEffect.nullifiers[0],
+          ).toBuffer(),
+        );
+        privateTaggedLogs.set(tag.toString(), currentLogs);
      });
 
-      txEffect.publicLogs.forEach((log, logIndex) => {
+      txEffect.publicLogs.forEach(log => {
+        // Public logs use Tag directly (not siloed) and are stored with contract address
        const tag = log.fields[0];
-        this.#log.debug(`Found public log with tag ${tag.toString()} in block ${block.number}`);
-
-        const currentLogs = taggedLogs.get(tag.toString()) ?? [];
-        currentLogs.push(new TxScopedL2Log(txHash, dataStartIndexForTx, logIndex, block.number, log).toBuffer());
-        taggedLogs.set(tag.toString(), currentLogs);
+        const contractAddress = log.contractAddress;
+        const key = `${contractAddress.toString()}_${tag.toString()}`;
+        this.#log.debug(
+          `Found public log with tag ${tag.toString()} from contract ${contractAddress.toString()} in block ${block.number}`,
+        );
+
+        const currentLogs = publicTaggedLogs.get(key) ?? [];
+        currentLogs.push(
+          new TxScopedL2Log(
+            txHash,
+            block.number,
+            block.timestamp,
+            log.getEmittedFields(),
+            txEffect.noteHashes,
+            txEffect.nullifiers[0],
+          ).toBuffer(),
+        );
+        publicTaggedLogs.set(key, currentLogs);
      });
    });
-    return taggedLogs;
+
+    return { privateTaggedLogs, publicTaggedLogs };
+  }
+
+  /**
+   * Extracts and aggregates tagged logs from a list of blocks.
+   * @param blocks - The blocks to extract logs from.
+   * @returns A map from tag (as string) to an array of serialized private logs belonging to that tag, and a map from
+   * "{contractAddress}_{tag}" (as string) to an array of serialized public logs belonging to that key.
+   */
+  #extractTaggedLogs(blocks: L2BlockNew[]): {
+    privateTaggedLogs: Map<string, Buffer[]>;
+    publicTaggedLogs: Map<string, Buffer[]>;
+  } {
+    const taggedLogsInBlocks = blocks.map(block => this.#extractTaggedLogsFromBlock(block));
+
+    // Now we merge the maps from each block into a single map.
+    const privateTaggedLogs = taggedLogsInBlocks.reduce((acc, { privateTaggedLogs }) => {
+      for (const [tag, logs] of privateTaggedLogs.entries()) {
+        const currentLogs = acc.get(tag) ?? [];
+        acc.set(tag, currentLogs.concat(logs));
+      }
+      return acc;
+    }, new Map<string, Buffer[]>());
+
+    const publicTaggedLogs = taggedLogsInBlocks.reduce((acc, { publicTaggedLogs }) => {
+      for (const [key, logs] of publicTaggedLogs.entries()) {
+        const currentLogs = acc.get(key) ?? [];
+        acc.set(key, currentLogs.concat(logs));
+      }
+      return acc;
    }, new Map<string, Buffer[]>());
+
+    return { privateTaggedLogs, publicTaggedLogs };
  }
 
  /**
@@ -80,43 +149,59 @@ export class LogStore {
   * @param blocks - The blocks for which to add the logs.
   * @returns True if the operation is successful.
   */
-  addLogs(blocks: L2Block[]): Promise<boolean> {
-    const taggedLogsToAdd = blocks
-      .map(block => this.#extractTaggedLogs(block))
-      .reduce((acc, val) => {
-        for (const [tag, logs] of val.entries()) {
-          const currentLogs = acc.get(tag) ?? [];
-          acc.set(tag, currentLogs.concat(logs));
-        }
-        return acc;
-      }, new Map());
-    const tagsToUpdate = Array.from(taggedLogsToAdd.keys());
+  addLogs(blocks: L2BlockNew[]): Promise<boolean> {
+    const { privateTaggedLogs, publicTaggedLogs } = this.#extractTaggedLogs(blocks);
+
+    const keysOfPrivateLogsToUpdate = Array.from(privateTaggedLogs.keys());
+    const keysOfPublicLogsToUpdate = Array.from(publicTaggedLogs.keys());
 
    return this.db.transactionAsync(async () => {
-      const currentTaggedLogs = await Promise.all(
-        tagsToUpdate.map(async tag => ({ tag, logBuffers: await this.#logsByTag.getAsync(tag) })),
+      const currentPrivateTaggedLogs = await Promise.all(
+        keysOfPrivateLogsToUpdate.map(async key => ({
+          tag: key,
+          logBuffers: await this.#privateLogsByTag.getAsync(key),
+        })),
      );
-      currentTaggedLogs.forEach(taggedLogBuffer => {
+      currentPrivateTaggedLogs.forEach(taggedLogBuffer => {
        if (taggedLogBuffer.logBuffers && taggedLogBuffer.logBuffers.length > 0) {
-          taggedLogsToAdd.set(
+          privateTaggedLogs.set(
            taggedLogBuffer.tag,
-            taggedLogBuffer.logBuffers!.concat(taggedLogsToAdd.get(taggedLogBuffer.tag)!),
+            taggedLogBuffer.logBuffers!.concat(privateTaggedLogs.get(taggedLogBuffer.tag)!),
          );
        }
      });
+
+      const currentPublicTaggedLogs = await Promise.all(
+        keysOfPublicLogsToUpdate.map(async key => ({
+          key,
+          logBuffers: await this.#publicLogsByContractAndTag.getAsync(key),
+        })),
+      );
+      currentPublicTaggedLogs.forEach(taggedLogBuffer => {
+        if (taggedLogBuffer.logBuffers && taggedLogBuffer.logBuffers.length > 0) {
+          publicTaggedLogs.set(
+            taggedLogBuffer.key,
+            taggedLogBuffer.logBuffers!.concat(publicTaggedLogs.get(taggedLogBuffer.key)!),
+          );
+        }
+      });
+
      for (const block of blocks) {
-        const tagsInBlock = [];
-        for (const [tag, logs] of taggedLogsToAdd.entries()) {
-          await this.#logsByTag.set(tag, logs);
-          tagsInBlock.push(tag);
+        const blockHash = await block.hash();
+
+        const privateTagsInBlock: string[] = [];
+        for (const [tag, logs] of privateTaggedLogs.entries()) {
+          await this.#privateLogsByTag.set(tag, logs);
+          privateTagsInBlock.push(tag);
        }
-        await this.#logTagsByBlock.set(block.number, tagsInBlock);
+        await this.#privateLogKeysByBlock.set(block.number, privateTagsInBlock);
 
-        const privateLogsInBlock = block.body.txEffects
-          .map(txEffect => txEffect.privateLogs)
-          .flat()
-          .map(log => log.toBuffer());
-        await this.#privateLogsByBlock.set(block.number, Buffer.concat(privateLogsInBlock));
+        const publicKeysInBlock: string[] = [];
+        for (const [key, logs] of publicTaggedLogs.entries()) {
+          await this.#publicLogsByContractAndTag.set(key, logs);
+          publicKeysInBlock.push(key);
+        }
+        await this.#publicLogKeysByBlock.set(block.number, publicKeysInBlock);
 
        const publicLogsInBlock = block.body.txEffects
          .map((txEffect, txIndex) =>
@@ -138,72 +223,82 @@ export class LogStore {
          )
          .flat();
 
-        await this.#publicLogsByBlock.set(block.number, Buffer.concat(publicLogsInBlock));
-        await this.#contractClassLogsByBlock.set(block.number, Buffer.concat(contractClassLogsInBlock));
+        await this.#publicLogsByBlock.set(block.number, this.#packWithBlockHash(blockHash, publicLogsInBlock));
+        await this.#contractClassLogsByBlock.set(
+          block.number,
+          this.#packWithBlockHash(blockHash, contractClassLogsInBlock),
+        );
      }
 
      return true;
    });
  }
 
-  deleteLogs(blocks: L2Block[]): Promise<boolean> {
+  #packWithBlockHash(blockHash: Fr, data: Buffer<ArrayBufferLike>[]): Buffer<ArrayBufferLike> {
+    return Buffer.concat([blockHash.toBuffer(), ...data]);
+  }
+
+  #unpackBlockHash(reader: BufferReader): L2BlockHash {
+    const blockHash = reader.remainingBytes() > 0 ? reader.readObject(Fr) : undefined;
+
+    if (!blockHash) {
+      throw new Error('Failed to read block hash from log entry buffer');
+    }
+
+    return L2BlockHash.fromField(blockHash);
+  }
+
+  deleteLogs(blocks: L2BlockNew[]): Promise<boolean> {
    return this.db.transactionAsync(async () => {
-      const tagsToDelete = (
-        await Promise.all(
-          blocks.map(async block => {
-            const tags = await this.#logTagsByBlock.getAsync(block.number);
-            return tags ?? [];
-          }),
-        )
-      ).flat();
+      await Promise.all(
+        blocks.map(async block => {
+          // Delete private logs
+          const privateKeys = (await this.#privateLogKeysByBlock.getAsync(block.number)) ?? [];
+          await Promise.all(privateKeys.map(tag => this.#privateLogsByTag.delete(tag)));
+
+          // Delete public logs
+          const publicKeys = (await this.#publicLogKeysByBlock.getAsync(block.number)) ?? [];
+          await Promise.all(publicKeys.map(key => this.#publicLogsByContractAndTag.delete(key)));
+        }),
+      );
 
      await Promise.all(
        blocks.map(block =>
          Promise.all([
-            this.#privateLogsByBlock.delete(block.number),
            this.#publicLogsByBlock.delete(block.number),
-            this.#logTagsByBlock.delete(block.number),
+            this.#privateLogKeysByBlock.delete(block.number),
+            this.#publicLogKeysByBlock.delete(block.number),
            this.#contractClassLogsByBlock.delete(block.number),
          ]),
        ),
      );
 
-      await Promise.all(tagsToDelete.map(tag => this.#logsByTag.delete(tag.toString())));
      return true;
    });
  }
 
  /**
-   * Retrieves all private logs from up to `limit` blocks, starting from the block number `start`.
-   * @param start - The block number from which to begin retrieving logs.
-   * @param limit - The maximum number of blocks to retrieve logs from.
-   * @returns An array of private logs from the specified range of blocks.
+   * Gets all private logs that match any of the `tags`. For each tag, an array of matching logs is returned. An empty
+   * array implies no logs match that tag.
   */
-  async getPrivateLogs(start: number, limit: number): Promise<PrivateLog[]> {
-    const logs = [];
-    for await (const buffer of this.#privateLogsByBlock.valuesAsync({ start, limit })) {
-      const reader = new BufferReader(buffer);
-      while (reader.remainingBytes() > 0) {
-        logs.push(reader.readObject(PrivateLog));
-      }
-    }
-    return logs;
+  async getPrivateLogsByTags(tags: SiloedTag[]): Promise<TxScopedL2Log[][]> {
+    const logs = await Promise.all(tags.map(tag => this.#privateLogsByTag.getAsync(tag.toString())));
+
+    return logs.map(logBuffers => logBuffers?.map(logBuffer => TxScopedL2Log.fromBuffer(logBuffer)) ?? []);
  }
 
  /**
-   * Gets all logs that match any of the received tags (i.e. logs with their first field equal to a tag).
-   * @param tags - The tags to filter the logs by.
-   * @returns For each received tag, an array of matching logs is returned. An empty array implies no logs match
-   * that tag.
+   * Gets all public logs that match any of the `tags` from the specified contract. For each tag, an array of matching
+   * logs is returned. An empty array implies no logs match that tag.
   */
-  async getLogsByTags(tags: Fr[], limitPerTag?: number): Promise<TxScopedL2Log[][]> {
-    if (limitPerTag !== undefined && limitPerTag <= 0) {
-      throw new TypeError('limitPerTag needs to be greater than 0');
-    }
-    const logs = await Promise.all(tags.map(tag => this.#logsByTag.getAsync(tag.toString())));
-    return logs.map(
-      logBuffers => logBuffers?.slice(0, limitPerTag).map(logBuffer => TxScopedL2Log.fromBuffer(logBuffer)) ?? [],
+  async getPublicLogsByTagsFromContract(contractAddress: AztecAddress, tags: Tag[]): Promise<TxScopedL2Log[][]> {
+    const logs = await Promise.all(
+      tags.map(tag => {
+        const key = `${contractAddress.toString()}_${tag.value.toString()}`;
+        return this.#publicLogsByContractAndTag.getAsync(key);
+      }),
    );
+    return logs.map(logBuffers => logBuffers?.map(logBuffer => TxScopedL2Log.fromBuffer(logBuffer)) ?? []);
  }
 
  /**
@@ -234,6 +329,9 @@ export class LogStore {
    const buffer = (await this.#publicLogsByBlock.getAsync(blockNumber)) ?? Buffer.alloc(0);
    const publicLogsInBlock: [PublicLog[]] = [[]];
    const reader = new BufferReader(buffer);
+
+    const blockHash = this.#unpackBlockHash(reader);
+
    while (reader.remainingBytes() > 0) {
      const indexOfTx = reader.readNumber();
      const numLogsInTx = reader.readNumber();
@@ -246,7 +344,7 @@ export class LogStore {
      const txLogs = publicLogsInBlock[txIndex];
 
      const logs: ExtendedPublicLog[] = [];
-      const maxLogsHit = this.#accumulateLogs(logs, blockNumber, txIndex, txLogs, filter);
+      const maxLogsHit = this.#accumulateLogs(logs, blockNumber, blockHash, txIndex, txLogs, filter);
 
      return { logs, maxLogsHit };
    }
@@ -269,6 +367,9 @@ export class LogStore {
    loopOverBlocks: for await (const [blockNumber, logBuffer] of this.#publicLogsByBlock.entriesAsync({ start, end })) {
      const publicLogsInBlock: [PublicLog[]] = [[]];
      const reader = new BufferReader(logBuffer);
+
+      const blockHash = this.#unpackBlockHash(reader);
+
      while (reader.remainingBytes() > 0) {
        const indexOfTx = reader.readNumber();
        const numLogsInTx = reader.readNumber();
@@ -279,7 +380,7 @@ export class LogStore {
      }
      for (let txIndex = filter.afterLog?.txIndex ?? 0; txIndex < publicLogsInBlock.length; txIndex++) {
        const txLogs = publicLogsInBlock[txIndex];
-        maxLogsHit = this.#accumulateLogs(logs, blockNumber, txIndex, txLogs, filter);
+        maxLogsHit = this.#accumulateLogs(logs, blockNumber, blockHash, txIndex, txLogs, filter);
        if (maxLogsHit) {
          this.#log.debug(`Max logs hit at block ${blockNumber}`);
          break loopOverBlocks;
@@ -318,6 +419,8 @@ export class LogStore {
      const contractClassLogsInBlock: [ContractClassLog[]] = [[]];
 
      const reader = new BufferReader(contractClassLogsBuffer);
+      const blockHash = this.#unpackBlockHash(reader);
+
      while (reader.remainingBytes() > 0) {
        const indexOfTx = reader.readNumber();
        const numLogsInTx = reader.readNumber();
@@ -330,7 +433,7 @@ export class LogStore {
      const txLogs = contractClassLogsInBlock[txIndex];
 
      const logs: ExtendedContractClassLog[] = [];
-      const maxLogsHit = this.#accumulateLogs(logs, blockNumber, txIndex, txLogs, filter);
+      const maxLogsHit = this.#accumulateLogs(logs, blockNumber, blockHash, txIndex, txLogs, filter);
 
      return { logs, maxLogsHit };
    }
@@ -356,6 +459,7 @@ export class LogStore {
    })) {
      const contractClassLogsInBlock: [ContractClassLog[]] = [[]];
      const reader = new BufferReader(logBuffer);
+      const blockHash = this.#unpackBlockHash(reader);
      while (reader.remainingBytes() > 0) {
        const indexOfTx = reader.readNumber();
        const numLogsInTx = reader.readNumber();
@@ -366,7 +470,7 @@ export class LogStore {
      }
      for (let txIndex = filter.afterLog?.txIndex ?? 0; txIndex < contractClassLogsInBlock.length; txIndex++) {
        const txLogs = contractClassLogsInBlock[txIndex];
-        maxLogsHit = this.#accumulateLogs(logs, blockNumber, txIndex, txLogs, filter);
+        maxLogsHit = this.#accumulateLogs(logs, blockNumber, blockHash, txIndex, txLogs, filter);
        if (maxLogsHit) {
          this.#log.debug(`Max logs hit at block ${blockNumber}`);
          break loopOverBlocks;
@@ -380,9 +484,10 @@ export class LogStore {
  #accumulateLogs(
    results: (ExtendedContractClassLog | ExtendedPublicLog)[],
    blockNumber: number,
+    blockHash: L2BlockHash,
    txIndex: number,
    txLogs: (ContractClassLog | PublicLog)[],
-    filter: LogFilter,
+    filter: LogFilter = {},
  ): boolean {
    let maxLogsHit = false;
    let logIndex = typeof filter.afterLog?.logIndex === 'number' ? filter.afterLog.logIndex + 1 : 0;
@@ -390,9 +495,13 @@ export class LogStore {
      const log = txLogs[logIndex];
      if (!filter.contractAddress || log.contractAddress.equals(filter.contractAddress)) {
        if (log instanceof ContractClassLog) {
-          results.push(new ExtendedContractClassLog(new LogId(BlockNumber(blockNumber), txIndex, logIndex), log));
+          results.push(
+            new ExtendedContractClassLog(new LogId(BlockNumber(blockNumber), blockHash, txIndex, logIndex), log),
+          );
+        } else if (log instanceof PublicLog) {
+          results.push(new ExtendedPublicLog(new LogId(BlockNumber(blockNumber), blockHash, txIndex, logIndex), log));
        } else {
-          results.push(new ExtendedPublicLog(new LogId(BlockNumber(blockNumber), txIndex, logIndex), log));
+          throw new Error('Unknown log type');
        }
 
        if (results.length >= this.#logsMaxPageSize) {
package/src/archiver/kv_archiver_store/message_store.ts

@@ -1,6 +1,7 @@
-import type { L1BlockId } from '@aztec/ethereum';
+import type { L1BlockId } from '@aztec/ethereum/l1-types';
+import { CheckpointNumber } from '@aztec/foundation/branded-types';
 import { Buffer16, Buffer32 } from '@aztec/foundation/buffer';
-import { Fr } from '@aztec/foundation/fields';
+import { Fr } from '@aztec/foundation/curves/bn254';
 import { toArray } from '@aztec/foundation/iterable';
 import { createLogger } from '@aztec/foundation/log';
 import { BufferReader, serializeToBuffer } from '@aztec/foundation/serialize';
@@ -113,20 +114,20 @@ export class MessageStore {
      );
    }
 
-    // Check index corresponds to the L2 block number.
-    const [expectedStart, expectedEnd] = InboxLeaf.indexRangeFromL2Block(message.l2BlockNumber);
+    // Check index corresponds to the checkpoint number.
+    const [expectedStart, expectedEnd] = InboxLeaf.indexRangeForCheckpoint(message.checkpointNumber);
    if (message.index < expectedStart || message.index >= expectedEnd) {
      throw new MessageStoreError(
        `Invalid index ${message.index} for incoming L1 to L2 message ${message.leaf.toString()} ` +
-          `at block ${message.l2BlockNumber} (expected value in range [${expectedStart}, ${expectedEnd}))`,
+          `at checkpoint ${message.checkpointNumber} (expected value in range [${expectedStart}, ${expectedEnd}))`,
        message,
      );
    }
 
-    // Check there are no gaps in the indices within the same block.
+    // Check there are no gaps in the indices within the same checkpoint.
    if (
      lastMessage &&
-      message.l2BlockNumber === lastMessage.l2BlockNumber &&
+      message.checkpointNumber === lastMessage.checkpointNumber &&
      message.index !== lastMessage.index + 1n
    ) {
      throw new MessageStoreError(
@@ -138,12 +139,12 @@ export class MessageStore {
 
    // Check the first message in a block has the correct index.
    if (
-      (!lastMessage || message.l2BlockNumber > lastMessage.l2BlockNumber) &&
-      message.index !== InboxLeaf.smallestIndexFromL2Block(message.l2BlockNumber)
+      (!lastMessage || message.checkpointNumber > lastMessage.checkpointNumber) &&
+      message.index !== expectedStart
    ) {
      throw new MessageStoreError(
-        `Message ${message.leaf.toString()} for L2 block ${message.l2BlockNumber} has wrong index ` +
-          `${message.index} (expected ${InboxLeaf.smallestIndexFromL2Block(message.l2BlockNumber)})`,
+        `Message ${message.leaf.toString()} for checkpoint ${message.checkpointNumber} has wrong index ` +
+          `${message.index} (expected ${expectedStart})`,
        message,
      );
    }
@@ -184,10 +185,10 @@ export class MessageStore {
    return msg ? deserializeInboxMessage(msg) : undefined;
  }
 
-  public async getL1ToL2Messages(blockNumber: number): Promise<Fr[]> {
+  public async getL1ToL2Messages(checkpointNumber: CheckpointNumber): Promise<Fr[]> {
    const messages: Fr[] = [];
 
-    const [startIndex, endIndex] = InboxLeaf.indexRangeFromL2Block(blockNumber);
+    const [startIndex, endIndex] = InboxLeaf.indexRangeForCheckpoint(checkpointNumber);
    let lastIndex = startIndex - 1n;
 
    for await (const msgBuffer of this.#l1ToL2Messages.valuesAsync({
@@ -195,8 +196,10 @@ export class MessageStore {
      end: this.indexToKey(endIndex),
    })) {
      const msg = deserializeInboxMessage(msgBuffer);
-      if (msg.l2BlockNumber !== blockNumber) {
-        throw new Error(`L1 to L2 message with index ${msg.index} has invalid block number ${msg.l2BlockNumber}`);
+      if (msg.checkpointNumber !== checkpointNumber) {
+        throw new Error(
+          `L1 to L2 message with index ${msg.index} has invalid checkpoint number ${msg.checkpointNumber}`,
+        );
      } else if (msg.index !== lastIndex + 1n) {
        throw new Error(`Expected L1 to L2 message with index ${lastIndex + 1n} but got ${msg.index}`);
      }
@@ -232,9 +235,9 @@ export class MessageStore {
    });
  }
 
-  public rollbackL1ToL2MessagesToL2Block(targetBlockNumber: number): Promise<void> {
-    this.#log.debug(`Deleting L1 to L2 messages up to target L2 block ${targetBlockNumber}`);
-    const startIndex = InboxLeaf.smallestIndexFromL2Block(targetBlockNumber + 1);
+  public rollbackL1ToL2MessagesToCheckpoint(targetCheckpointNumber: CheckpointNumber): Promise<void> {
+    this.#log.debug(`Deleting L1 to L2 messages up to target checkpoint ${targetCheckpointNumber}`);
+    const startIndex = InboxLeaf.smallestIndexForCheckpoint(CheckpointNumber(targetCheckpointNumber + 1));
    return this.removeL1ToL2Messages(startIndex);
  }
 
@@ -0,0 +1,98 @@
# Archiver L1 Data Retrieval

Modules and classes that handle data retrieval from L1 for the archiver.

## Calldata Retriever

The sequencer publisher bundles multiple operations into a single multicall3 transaction for gas
efficiency. A typical transaction includes:

1. Attestation invalidations (if needed): `invalidateBadAttestation`, `invalidateInsufficientAttestations`
2. Block proposal: `propose` (exactly one per transaction to the rollup contract)
3. Governance and slashing (if needed): votes, payload creation/execution

The archiver needs to extract the `propose` calldata from these bundled transactions in order to
reconstruct L2 blocks. This class handles transactions submitted via multicall3, as well as
alternative ways of submitting the `propose` call that other clients might use.

### Multicall3 Validation and Decoding

First, we attempt to decode the transaction as a multicall3 `aggregate3` call, with validation:

- Check that the transaction is to the multicall3 address (`0xcA11bde05977b3631167028862bE2a173976CA11`)
- Decode it as `aggregate3(Call3[] calldata calls)`
- Allow only calls to known addresses and methods (rollup, governance, slashing contracts, etc.)
- Find the single `propose` call to the rollup contract
- Verify that exactly one `propose` call exists
- Extract and return the propose calldata

This step handles the common case efficiently, without requiring expensive trace or debug RPC calls.
Any validation failure triggers fallback to the next step.

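The validation above can be sketched as follows. All addresses, selectors, and types here are illustrative placeholders, not the real contract values; the actual retriever decodes `aggregate3` using the contract ABIs.

```typescript
// Hypothetical shape of an already-decoded multicall3 `aggregate3` argument.
interface Call3 {
  target: string; // address the inner call is made to
  callData: string; // hex-encoded calldata; the first 4 bytes are the selector
}

const ROLLUP_ADDRESS = '0xrollup'; // hypothetical
const KNOWN_TARGETS = new Set([ROLLUP_ADDRESS, '0xgovernance', '0xslashing']); // hypothetical
const PROPOSE_SELECTOR = '0x11111111'; // hypothetical 4-byte selector of `propose`

// Returns the propose calldata if the bundle passes validation, or undefined
// to signal fallback to the next decoding strategy.
function extractProposeFromAggregate3(calls: Call3[]): string | undefined {
  // Reject bundles that touch unknown contracts: we cannot reason about them.
  if (!calls.every(c => KNOWN_TARGETS.has(c.target))) {
    return undefined;
  }
  // There must be exactly one `propose` call to the rollup contract.
  const proposes = calls.filter(
    c => c.target === ROLLUP_ADDRESS && c.callData.startsWith(PROPOSE_SELECTOR),
  );
  return proposes.length === 1 ? proposes[0].callData : undefined;
}
```

Returning `undefined` rather than throwing keeps the strategy composable: the caller simply moves on to the next, more expensive decoding attempt.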
### Direct Propose Call

Second, we attempt to decode the transaction as a direct `propose` call to the rollup contract:

- Check that the transaction is to the rollup address
- Decode it as a `propose` function call
- Verify that the function is indeed `propose`
- Return the transaction input as the propose calldata

This handles scenarios where clients submit transactions directly to the rollup contract without
using multicall3 for bundling. Any validation failure triggers fallback to the next step.

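The direct-call check reduces to a target and selector comparison. The address and selector below are placeholders; the real check compares against the configured rollup address and the selector derived from the rollup ABI.

```typescript
// Hypothetical minimal view of an L1 transaction for this check.
interface L1Tx {
  to: string; // address the transaction was sent to
  input: string; // hex calldata; the first 4 bytes select the function
}

const ROLLUP_ADDRESS = '0xrollup'; // hypothetical
const PROPOSE_SELECTOR = '0x11111111'; // hypothetical 4-byte selector of `propose`

// Returns the full transaction input as the propose calldata, or undefined
// to trigger fallback to the next decoding strategy.
function extractProposeFromDirectCall(tx: L1Tx): string | undefined {
  if (tx.to.toLowerCase() !== ROLLUP_ADDRESS) {
    return undefined; // not addressed to the rollup contract
  }
  if (!tx.input.startsWith(PROPOSE_SELECTOR)) {
    return undefined; // some other rollup function was called
  }
  return tx.input;
}
```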
### Spire Proposer Call

Given existing efforts to route the call via the Spire proposer, we also check whether the transaction
is `to` the known proposer address, and if so, we try decoding it as either a multicall3 call or a
direct call to the rollup contract.

As with the multicall3 check, we verify that there are no other calls in the Spire proposer payload, so
we can be certain that the only call is the successful one to the rollup. Any extraneous call would
imply an unexpected path to calling `propose` in the rollup contract, and since we cannot verify whether
the calldata arguments we extracted are the correct ones (see the section below), we cannot know for
sure which call succeeded, and hence which calldata to process.

Furthermore, since the Spire proposer is upgradeable, we check that its implementation has not changed
before decoding. As usual, any validation failure triggers fallback to the next step.

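Putting the strategies together, the retriever can be organized as an ordered fallback chain, cheapest first. This is a sketch: the `Strategy` type and error message are illustrative, and the real retriever wires in the actual decoders described above.

```typescript
// Each decoding strategy returns the propose calldata, or undefined when
// its validation fails and we should fall through to the next strategy.
type Strategy = (txHash: string) => Promise<string | undefined>;

async function retrieveProposeCalldata(
  txHash: string,
  strategies: Strategy[], // ordered cheapest first, e.g. multicall3, direct, Spire, trace
): Promise<string> {
  for (const strategy of strategies) {
    const calldata = await strategy(txHash);
    if (calldata !== undefined) {
      return calldata;
    }
  }
  throw new Error(`Could not extract propose calldata for tx ${txHash}`);
}
```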
### Verifying Multicall3 Arguments

**This is NOT implemented, for simplicity's sake.**

If the checks above don't hold, such as when there are multiple calls to `propose`, then we cannot
reliably extract the `propose` calldata from the multicall3 arguments alone. We could make a
best-effort attempt by trying every `propose` call we see and validating it against on-chain data.
Note that the same strategies would apply if we obtained the calldata from another source.

#### TempBlockLog Verification

Read the stored `TempBlockLog` for the L2 block number from L1 and verify that it matches our decoded
header hash, since the `TempBlockLog` stores the hash of the proposed block header, the payload
commitment, and the attestations.

However, the `TempBlockLog` is only stored temporarily and is deleted once the block is proven, so this
method only works for recent blocks, not for syncing historical data.

#### Archive Verification

Verify that the archive root in the decoded propose call is correct with respect to the block header.
This requires hashing the block header we retrieved, inserting it into the archive tree, and checking
the resulting root against the one we got from L1.

However, this requires the archiver to keep a reference to world-state, which is not the case in the
current system.

#### Emit Commitments in Rollup Contract

Modify the rollup contract to emit commitments to the block header in the `L2BlockProposed` event,
allowing us to easily verify the calldata we obtained against the emitted event.

However, modifying the rollup contract is out of scope for this change; we can implement this approach
in `v2`.

### Debug and Trace Transaction Fallback

Last, we use the L1 node's trace/debug RPC methods to definitively identify the single successful
`propose` call within the transaction. We can then extract the exact calldata that hit the `propose`
function in the rollup contract.

This approach requires access to a debug-enabled L1 node and is more resource-intensive, so we only
use it as a fallback when the previous steps fail, which should be rare in practice.
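
Assuming a geth-style `callTracer` result from `debug_traceTransaction`, identifying the successful `propose` call is a walk over the call tree. The frame shape here is simplified (real frames also carry gas, value, etc.), and the address and selector are placeholders.

```typescript
// Simplified view of a callTracer frame from debug_traceTransaction.
interface CallFrame {
  to: string;
  input: string;
  error?: string; // set when this frame reverted
  calls?: CallFrame[]; // nested sub-calls
}

const ROLLUP_ADDRESS = '0xrollup'; // hypothetical
const PROPOSE_SELECTOR = '0x11111111'; // hypothetical 4-byte selector of `propose`

// Walk the call tree and collect the calldata of every successful `propose`
// call into the rollup contract, at any nesting depth.
function findSuccessfulProposeCalls(frame: CallFrame): string[] {
  const found: string[] = [];
  if (
    frame.to === ROLLUP_ADDRESS &&
    frame.input.startsWith(PROPOSE_SELECTOR) &&
    frame.error === undefined
  ) {
    found.push(frame.input);
  }
  for (const child of frame.calls ?? []) {
    found.push(...findSuccessfulProposeCalls(child));
  }
  return found;
}
```

In the happy path exactly one frame matches; zero or multiple matches mean the calldata still cannot be attributed unambiguously and the transaction is not processed.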