feat: chain orchestrator #185
Conversation
Pull Request Overview

This PR implements a new ChainOrchestrator to replace the previous indexer, integrates it throughout the node, watcher, network, and engine, and updates tests and database migrations accordingly.

- Introduces ChainOrchestrator in place of Indexer and refactors RollupNodeManager to consume orchestrator events instead of indexer events (see the sketch after this list).
- Adds Synced notifications to L1Watcher and updates the engine driver to handle optimistic sync via ChainOrchestrator.
- Refactors configuration (ScrollRollupNodeConfig), the network manager, and database migrations; adjusts tests to cover the new orchestrator flows.
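As a rough sketch of what consuming the orchestrator as an event stream can look like (the stub event type and variant payloads below are illustrative assumptions, not the exact API introduced in this PR):

```rust
use futures::StreamExt;

// Stub event type standing in for the PR's `ChainOrchestratorEvent`.
#[derive(Debug)]
enum ChainOrchestratorEvent {
    BatchCommitIndexed { batch_index: u64, safe_head: Option<u64> },
    ChainUnwound { l1_block_number: u64 },
    BlockAlreadyKnown,
}

// The manager polls the orchestrator as a stream and reacts to each event.
async fn run_manager(mut events: impl futures::Stream<Item = ChainOrchestratorEvent> + Unpin) {
    while let Some(event) = events.next().await {
        match event {
            ChainOrchestratorEvent::BatchCommitIndexed { batch_index, safe_head } => {
                // A `Some(safe_head)` signals that the commit caused a batch revert.
                println!("batch {batch_index} indexed, new safe head: {safe_head:?}");
            }
            ChainOrchestratorEvent::ChainUnwound { l1_block_number } => {
                // An L1 reorg unwound state derived from L1 blocks above this number.
                println!("chain unwound to L1 block {l1_block_number}");
            }
            ChainOrchestratorEvent::BlockAlreadyKnown => {
                // Duplicate block announcement: nothing to import.
            }
        }
    }
}
```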
Reviewed Changes
Copilot reviewed 40 out of 41 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| crates/indexer/src/lib.rs | Rename Indexer to ChainOrchestrator and overhaul API flows |
| crates/manager/src/manager/mod.rs | Replace indexer usage with ChainOrchestrator in node manager |
| crates/node/src/args.rs | Instantiate ChainOrchestrator in ScrollRollupNodeConfig |
| crates/watcher/src/lib.rs | Add Synced variant and is_synced flag to L1Watcher |
| crates/scroll-wire/src/protocol/proto.rs | Adjust doc comment for NewBlock::new |
| crates/node/tests/e2e.rs | Add/revise reorg and sync end-to-end tests |
| crates/watcher/tests/reorg.rs | Update tests to skip Synced notifications |
| crates/database/db/src/operations.rs | Extend DB ops with L1MessageStart and block-and-batch queries |
| crates/database/migration/src/migration_info.rs | Add genesis_hash() to migrations and insert genesis blocks |
| crates/network/src/manager.rs | Wire up eth-wire listener and dispatch chain-orchestrator events |
| crates/engine/src/driver.rs | Support ChainImport and OptimisticSync futures in engine driver |
Comments suppressed due to low confidence (2)
crates/scroll-wire/src/protocol/proto.rs:33

- The doc comment uses "blocks" (plural) but the constructor takes a single block; change to "block" for accuracy.

```rust
/// Returns a [`NewBlock`] instance with the provided signature and blocks.
```

crates/node/tests/e2e.rs:95

- The follower_can_reorg test has no assertions; either add meaningful checks or remove the empty test to maintain coverage.

```rust
async fn follower_can_reorg() -> eyre::Result<()> {
```
A couple of comments and some small nits, plus leftover code to clean up.
```rust
            if self.is_synced() {
            if self.is_synced {
                tokio::time::sleep(SLOW_SYNC_INTERVAL).await;
            } else if self.current_block_number == self.l1_state.head {
                // if we have synced to the head of the L1, notify the channel and set the
                // `is_synced`` flag.
                if let Err(L1WatcherError::SendError(_)) = self.notify(L1Notification::Synced).await
                {
                    tracing::warn!(target: "scroll::watcher", "L1 watcher channel closed, stopping the watcher");
                    break;
                }
                self.is_synced = true;
```
The current logic suggests the watcher can never transition from `is_synced = true` to `false`. Is this expected?
Good question. In the context of the RN, `Synced` should mean that we have synced all L1 messages required to validate messages included in unsafe L2 blocks. Given that we only include L1 messages after the corresponding L1 block has been finalized, I think this should be fine: provided the watcher doesn't start to lag more than 2 epochs behind the safe tip, the `Synced` status should still remain valid. What do you think about this?
Hmm, but then if we lose a provider for 12 minutes we might enter an edge case we can't exit from?
Good point. Given that we have had recent experiences of the L1 provider being down for longer than 12 minutes, I think we should cover this case.
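One possible shape for that follow-up (a sketch only; the lag check and the `SYNC_LAG_THRESHOLD` constant are assumptions on my side, not code from this PR):

```rust
/// Roughly two L1 epochs worth of blocks; a made-up threshold for illustration.
const SYNC_LAG_THRESHOLD: u64 = 64;

/// Sketch: allow the watcher to drop back out of the synced state when it has
/// fallen too far behind the L1 head (e.g. after losing the provider).
fn update_sync_state(is_synced: &mut bool, current_block_number: u64, l1_head: u64) {
    if *is_synced && l1_head.saturating_sub(current_block_number) > SYNC_LAG_THRESHOLD {
        // The previously emitted `Synced` notification is no longer meaningful,
        // so clear the flag and let the fast-sync path catch up again.
        *is_synced = false;
    }
}
```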
Should we do this in another PR?
Agreed, let's address this in another PR.
@jonastheis is it fair to say this is captured in this issue #252?
A couple of extra comments and questions. Also, I see we have a lot more `unwrap`s in the code; are all of these safe to keep?
```rust
        // Reverse the new chain headers to have them in the correct order.
        received_chain_headers.reverse();
```
Personally, I think we would gain code clarity if `received_chain_headers` and `current_chain_headers` were ordered in the same way.
I would tend to agree, but I think we should attempt to merge as-is for now and then refactor this at a later date.
Works for me. Should we raise an issue with all the improvements listed?
```rust
        // Purge all pending block imports.
        self.chain_imports.clear();
```
why do we purge all pending block imports?
Because the optimistic sync supersedes any previous chain imports. Does this seem reasonable to you?
should we also clear the pending futures then?
I presume you are referring to `engine_future`? If so, we could, but this engine future may be related to an L1 consolidation, in which case we wouldn't want to clear it. I think it's fine to leave as-is, unless I'm missing something?
Reviewing the l2geth code, I noticed we might be missing checks on L1 messages.
```rust
async fn validate_l1_messages(
    blocks: &[ScrollBlock],
    database: Arc<Database>,
) -> Result<(), ChainOrchestratorError> {
```
I think we might be missing a couple of checks in this function (see https://github.com/scroll-tech/go-ethereum/blob/develop/core/block_validator.go#L104-L109):
- Post-EuclidV2: check that the L1 message queue indexes are continuous, that the messages sit at the start of the block, and that they start at the last seen L1 message index.
- Pre-EuclidV2: the queue index can't decrease, and skipped L1 messages must be present in the DB.

A sketch of the post-EuclidV2 continuity check is included after this list.
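A minimal sketch of the post-EuclidV2 continuity check, operating on the queue indexes extracted from a block's L1 message transactions (the function and parameter names here are illustrative, not the PR's actual API):

```rust
/// Sketch: verify that the L1 message queue indexes carried by a block are
/// continuous and start right after the last L1 message we have already seen.
fn validate_post_euclid_v2_l1_messages(
    queue_indexes: &[u64],
    next_expected: u64,
) -> Result<(), String> {
    let mut expected = next_expected;
    for (pos, &index) in queue_indexes.iter().enumerate() {
        if index != expected {
            return Err(format!(
                "non-continuous L1 message index at position {pos}: expected {expected}, got {index}"
            ));
        }
        expected += 1;
    }
    Ok(())
}
```

Checking that L1 messages only appear at the start of the block would additionally require walking the full transaction list and rejecting any L1 message that shows up after the first non-L1-message transaction.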
I think we can omit the check on queue indexes not decreasing for now, as we will (most likely?) never sync pre-EuclidV2 blocks via live sync. I will open an issue to track this, and we can implement it in the future to support pre-EuclidV2 sync.

Regarding the start index, how about we introduce an `l1_message_index: Arc<Mutex<IndexType>>` in memory which is used to determine the cursor start? I think continuity is already covered by the fact that we assert consistency with the db. Regarding the position in the block, I will implement a check for that as well.
Or are you suggesting we do this in the consensus checks on reth (as I believe you already have in the open PR)?
> Or are you suggesting we do this in the consensus checks on reth (as I believe you already have in the open PR)?

I'm actually not sure if we should have these checks in Reth and/or the rollup node. What do you think?

> Regarding the start index, how about we introduce an `l1_message_index: Arc<Mutex<IndexType>>` in memory which is used to determine the cursor start?

Won't we have an issue with L1 reorgs with this?
> I'm actually not sure if we should have these checks in Reth and/or the rollup node. What do you think?

I think it makes sense to have the L1 message checks in the rollup node.

> Won't we have an issue with L1 reorgs with this?

I wouldn't have thought so, because L1 messages are only included when they are finalized, right?
> I think it makes sense to have the L1 message checks in the rollup node.

Works for me. I think I will still leave the current ones in Reth for historical sync, in order to still have a minimal check there, wdyt?

> I wouldn't have thought so, because L1 messages are only included when they are finalized, right?

As long as the sequencer uses `L1MessageInclusionMode::Finalized` I think that's fine, but if we switch to `BlockDepth` we might start to have issues.
We should replicate the l2geth implementation as closely as possible. This also means checking the order within the block, etc. We should open a separate issue to revisit this after the PR is merged.
Just need to resolve the pending comments and open one issue related to L1 watcher sync.
```rust
    /// A `BatchCommit` event has been indexed returning the batch info and the L2 block info to
    /// revert to due to a batch revert.
```
I don't understand this comment
```rust
    /// The chain has been unwound, returning the L1 block number of the new L1 head,
    /// the L1 message queue index of the new L1 head, and optionally the L2 head and safe block
    /// info if the unwind resulted in a new L2 head or safe block.
    ChainUnwound {
```
Not sure what this means? Is it a reorg?
Naming of the events seems a bit inconsistent after just reading the event names and descriptions in this file.
```diff
@@ -18,6 +20,11 @@ impl MigrationInfo for () {
     fn data_hash() -> Option<B256> {
         None
     }
+
+    fn genesis_hash() -> B256 {
+        // Todo: Update
```
Update to which value?
I think this blanket implementation (`impl MigrationInfo for ()`) is improper. I will replace it with `impl MigrationInfo for ScrollDevMigrationInfo`, and then this is no longer relevant. When we want to support custom chain specs, we will have to revisit this issue, as the `genesis_hash` will no longer be static.
```diff
 /// The network arguments.
 #[derive(Debug, Clone, clap::Args)]
 pub struct NetworkArgs {
     /// A bool to represent if new blocks should be bridged from the eth wire protocol to the
     /// scroll wire protocol.
-    #[arg(long = "network.bridge", default_value_t = true)]
+    #[arg(long = "network.bridge")]
```
Why remove the default value here?
Currently, there is no way to set this to false via the CLI. With booleans, the absence of the argument represents false. However, because we are setting `default_value_t = true`, this remains true even in the absence of the CLI argument.
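For what it's worth, another option (a sketch, not what this PR does) is to keep the default and make the flag take an explicit value, which clap supports via `ArgAction::Set`, so `--network.bridge=false` becomes possible:

```rust
use clap::Parser;

#[derive(Debug, Parser)]
struct NetworkArgs {
    /// Defaults to true, but can be disabled with `--network.bridge=false`.
    #[arg(long = "network.bridge", default_value_t = true, action = clap::ArgAction::Set)]
    enable_eth_scroll_wire_bridge: bool,
}

fn main() {
    let args = NetworkArgs::parse();
    println!("bridge enabled: {}", args.enable_eth_scroll_wire_bridge);
}
```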
```diff
     pub enable_eth_scroll_wire_bridge: bool,
     /// A bool that represents if the scroll wire protocol should be enabled.
-    #[arg(long = "network.scroll-wire", default_value_t = true)]
+    #[arg(long = "network.scroll-wire")]
```
Why remove the default value here?
Same as above: currently, there is no way to set this to false via the CLI. With booleans, the absence of the argument represents false. However, because we are setting `default_value_t = true`, this remains true even in the absence of the CLI argument.
```diff
@@ -422,11 +469,19 @@ pub struct NetworkArgs {
         value_name = "NETWORK_SEQUENCER_URL"
     )]
     pub sequencer_url: Option<String>,
+    /// A bool that represents if blocks should be gossiped over the eth-wire protocol.
+    #[arg(long = "network.eth-wire-gossip")]
```
How does this interact with `enable_eth_scroll_wire_bridge`?
- Is `enable_eth_scroll_wire_bridge` for incoming blocks?
- Is `eth-wire-gossip` for outgoing? Shouldn't this also be enabled by default until we deprecate l2geth?
Your description of the interaction is correct.
The intention of this flag was to mitigate duplication of block announcements. However, I think a more robust solution would be to handle it on the receiving end as described in your issue #251. I will revert this change.
```rust
        {
            Some(NetworkManagerEvent::NewBlock(NewBlockWithPeer { peer_id, block, signature }))
        } else {
            tracing::warn!(target: "scroll::bridge::import", peer_id = %peer_id, "Failed to extract signature from block extra data");
```
Shouldn't we penalize the peer here that sent us the block without the signature over eth-wire? This is an invalid message for both l2geth and reth, no?
Good catch, yes we should. Will update.
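For context, a rough sketch of what that could look like, assuming the network handle exposes reth's `Peers::reputation_change` (the import paths and the choice of `ReputationChangeKind::BadMessage` are assumptions here):

```rust
use reth_network_api::{Peers, ReputationChangeKind};
use reth_network_peers::PeerId;

/// Sketch: penalize a peer that announced a block without a valid signature in
/// its extra data, instead of only logging a warning.
fn penalize_invalid_block_sender<N: Peers>(network: &N, peer_id: PeerId) {
    tracing::warn!(target: "scroll::bridge::import", %peer_id, "failed to extract signature from block extra data, penalizing peer");
    network.reputation_change(peer_id, ReputationChangeKind::BadMessage);
}
```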
```rust
                outcome.chain.iter().map(|b| b.into()).collect(),
            );
        }
        self.network.handle().block_import_outcome(outcome.outcome);
```
Does this mean we might announce these blocks even if they are relatively old?
If it is an old block (i.e. a block we have already seen) then we will not attempt a block import, and the chain orchestrator will return the `BlockAlreadyKnown` event. We will only attempt to import blocks that we have not seen before (i.e. new blocks).
```rust
            Err(err) => {
                error!(target: "scroll::node::manager", ?err, "Error occurred at indexer level")
                match &err {
```
wouldn't this be better handled in a function as well?
```diff
@@ -9,21 +9,17 @@ pub struct Model {
     #[sea_orm(primary_key)]
     block_number: i64,
     block_hash: Vec<u8>,
-    batch_index: Option<i64>,
-    batch_hash: Option<Vec<u8>>,
+    batch_index: i64,
```
Why remove the option here? What's this value for unsafe L2 blocks?
We don't persist unsafe L2 blocks in the database anymore. They are already persisted in the execution node.
```diff
@@ -30,7 +30,7 @@ pub struct NewBlock {
 }

 impl NewBlock {
-    /// Returns a [`NewBlock`] instance with the provided signature and block.
+    /// Returns a [`NewBlock`] instance with the provided signature and blocks.
```
?
```rust
        self.syncing = true
    }

    pub fn handle_chain_import(&mut self, chain_import: ChainImport) {
        tracing::trace!(target: "scroll::engine", head = %chain_import.chain.last().unwrap().hash_slow(), "new block import request received");
```
Suggested change:

```diff
-        tracing::trace!(target: "scroll::engine", head = %chain_import.chain.last().unwrap().hash_slow(), "new block import request received");
+        tracing::trace!(target: "scroll::engine", head = %chain_import.chain.last().unwrap().hash_slow(), "new chain import request received");
```

Also update the comment of the function.
```rust
                let parent_hash =
                    optimistic_headers.first().expect("chain can not be empty").parent_hash;
                let header = network_client
                    .get_header(BlockHashOrNumber::Hash(parent_hash))
```
This will request up to 2000 headers (by default == `CHAIN_BUFFER_SIZE`) one by one from other nodes. Is this efficient? Shouldn't we use `GetBlockHeaders` for this? Or why request the headers at all? We could also wait for the EN to finish the sync and then load from our EN?
> Is this efficient? Shouldn't we use GetBlockHeaders for this?

Yes, a more efficient means would be to use `GetBlockHeaders`.

> We could also wait for the EN to finish the sync and then load from our EN?

I think this is the best solution. I will add both of these points to the refactoring issue that I will create shortly.
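As a reference for that refactor, a sketch of a batched backward header fetch (the `HeaderFetcher` trait below is a stand-in for whatever client ends up wrapping the eth-wire `GetBlockHeaders` request; it is not an existing reth API):

```rust
use alloy_primitives::B256;

#[derive(Clone)]
struct Header {
    hash: B256,
    parent_hash: B256,
    number: u64,
}

/// Stand-in for a client backed by `GetBlockHeaders`.
trait HeaderFetcher {
    /// Request up to `limit` headers, walking backwards from the header with hash `start`.
    fn headers_backwards(&self, start: B256, limit: u64) -> Vec<Header>;
}

/// Extend `chain` (ordered newest -> oldest) backwards until it reaches
/// `target_number`, fetching headers in batches instead of one by one.
fn backfill<F: HeaderFetcher>(client: &F, chain: &mut Vec<Header>, target_number: u64, batch: u64) {
    while chain.last().is_some_and(|h| h.number > target_number) {
        let start = chain.last().unwrap().parent_hash;
        let headers = client.headers_backwards(start, batch);
        if headers.is_empty() {
            // The peer had nothing for us; let the caller decide how to recover.
            return;
        }
        for header in headers {
            // Reject responses that do not link up with what we already have.
            if header.hash != chain.last().unwrap().parent_hash {
                return;
            }
            chain.push(header);
        }
    }
}
```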
```rust
            // Check if we have already have this block in memory.
            if received_block.number <= max_block_number &&
                received_block.number >= min_block_number &&
                current_chain_headers.iter().any(|h| h == &received_block.header)
```
How expensive is this? Do we expect this to become a bottleneck with high throughput?
I think we can improve the implementation by leveraging the EN. We can completely eliminate the in-memory buffer, which should make this solution much more efficient. I will write this up in the issue.
```rust
        }
        tracing::trace!(target: "scroll::chain_orchestrator", number = ?(current_chain_headers.front().expect("chain can not be empty").number - 1), "fetching block for current chain");
        if let Some(block) = l2_client
            .get_block_by_hash(
```
What's the timeout for this operation? What if none of our neighbors supplies this block?
If neighbours do not supply the block, we will raise an error and reject the block we are trying to import.
```rust
pub(crate) const BLOCK_GAP_TRIGGER: u64 = 100_000;

/// The number of block headers to keep in the in-memory chain buffer in the chain orchestrator.
pub(crate) const CHAIN_BUFFER_SIZE: usize = 2000;
```
Wondering if 2000 is enough? How much memory would this occupy?
I think we can increase this considerably if required. The general idea is that we should keep enough blocks in memory to support reorgs. I was thinking that a reorg of > 2000 blocks is unlikely.
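As a rough back-of-the-envelope figure (assuming an in-memory header is on the order of ~700 bytes, which is an assumption rather than a measurement): 2000 headers × ~700 B ≈ 1.4 MB, so even increasing the buffer by an order of magnitude should stay in the low tens of megabytes.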
```rust
                h.hash_slow() ==
                    received_chain_headers.last().expect("chain can not be empty").parent_hash
            }) {
                // If the received fork is older than the current chain, we return an event
```
How do we know here that this is a fork/reorg?
Because the received chain does not extend the current chain head (`pos < current_chain_headers.len() - 1`), it extends from a deeper block.
```rust

        // If the current header block number is less than the latest safe block number then
        // we should error.
        if received_chain_headers.last().expect("chain can not be empty").number <=
```
Shouldn't we do this quick check earlier?
yes, I've moved the check higher.
```rust
            return Err(ChainOrchestratorError::L2SafeBlockReorgDetected);
        }

        tracing::trace!(target: "scroll::chain_orchestrator", number = ?(received_chain_headers.last().expect("chain can not be empty").number - 1), "fetching block");
```
Can't we store `received_chain_headers.last()` in a variable somewhere to avoid calling the expect dozens of times?
```rust
                // If the received fork is older than the current chain, we return an event
                // indicating that we have received an old fork.
                if (pos < current_chain_headers.len() - 1) &&
                    current_chain_headers.get(pos + 1).expect("chain can not be empty").timestamp >
```
This seems quite important: we shouldn't decide whether this is a reorg based on the timestamp of a block. What if the sequencer's time is faulty?
Agree that this is important. What do you think the rule should be?
```rust

    /// Consolidates the chain by reconciling the in-memory chain with the L2 client and database.
    /// This is used to ensure that the in-memory chain is consistent with the L2 chain.
    async fn consolidate_chain<P: Provider<Scroll> + 'static>(
```
Why is it enough to consolidate only back to the safe head? Is it because the safe block is derived from L1 and the L1 messages until that point are verified through the L1 derivation pipeline?
correct
```rust

        let queue_hash = if chain_spec
            .scroll_fork_activation(ScrollHardfork::EuclidV2)
            .active_at_timestamp_or_number(block_timestamp, l1_block_number) &&
```
Why pass `l1_block_number` here? Isn't this mixing L2 fork rules with L1 block info? We might need to check this based on `L1MessageQueueV2DeploymentBlock`, which is an L1 block height. But even that I'm not sure about, as the actual transition on L1 happened after that.
```rust
        let event = ChainOrchestratorEvent::BatchCommitIndexed {
            batch_info: BatchInfo::new(batch.index, batch.hash),
            l1_block_number: batch.block_number,
            safe_head: new_safe_head,
```
Why is the `safe_head` only part of this event in the case of a batch revert? Couldn't it always be part of the event, with a separate field indicating whether it was indeed a batch revert (if we even need that information)?
```rust
    }

    /// Returns the highest finalized block for the provided batch hash. Will return [`None`] if the
    /// block number has already been seen by the indexer.
```
Let's also update the comments where `indexer` -> `chain orchestrator`.
closes: #182
closes scroll-tech/reth#243