update celestia client, and fetch chain id from network #40
Conversation
Pull request overview
This PR updates the Celestia client dependencies (from v0.8.0/v0.9.0 to v0.16.2/v0.20.0) and implements the ability to fetch the chain ID from the network instead of hardcoding it. The changes enable Celestia DA support for persistent mode by storing state updates and submitting them as blobs to Celestia.
Changes:
- Added state_updates table to SQLite storage for persisting Starknet state updates
- Updated Celestia client dependencies and adapted code to new API (Blob creation, TxConfig location, commitment handling)
- Modified Piltover settlement to support optional DA layer info using cainome for proper Cairo serialization
- Implemented dynamic chain ID fetching from the network for sovereign mode
- Added Celestia configuration options to persistent mode CLI
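The dynamic chain-ID change replaces a hardcoded constant with a value resolved from the connected network at startup. A minimal, self-contained sketch of the pattern, where `rpc_chain_id` is a hypothetical stand-in for a provider call such as starknet's `chain_id()` and the returned value is purely illustrative:

```rust
// Hypothetical sketch: resolve the chain id from the network at startup
// instead of hardcoding it. `rpc_chain_id` stands in for the real RPC call;
// the returned value is illustrative, not what any network reports.
fn rpc_chain_id() -> Result<String, String> {
    Ok("SN_SEPOLIA".to_string())
}

fn main() {
    // Previously a compile-time constant; now fetched once and reused.
    let chain_id = match rpc_chain_id() {
        Ok(id) => id,
        Err(err) => {
            eprintln!("failed to fetch chain id: {err}");
            return;
        }
    };
    println!("using chain id {chain_id}");
}
```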
Reviewed changes
Copilot reviewed 14 out of 18 changed files in this pull request and generated 14 comments.
| File | Description |
|---|---|
| saya/core/src/storage/sql_lite/utils.rs | Added validation for new state_updates table structure |
| saya/core/src/storage/sql_lite/storage.rs | Implemented add_state_update and get_state_update methods |
| saya/core/src/storage/sql_lite/mod.rs | Added state_updates table creation |
| saya/core/src/storage/mod.rs | Added state_update trait methods to PersistantStorage |
| saya/core/src/settlement/piltover.rs | Updated to use cainome serialization and support optional DA layer info |
| saya/core/src/prover/mod.rs | Modified BlockInfo DataAvailabilityPayload to require state_update |
| saya/core/src/prover/mock/layout_bridge.rs | Updated to fetch and include state_update in BlockInfo |
| saya/core/src/prover/atlantic/layout_bridge.rs | Updated to fetch and include state_update in BlockInfo |
| saya/core/src/data_availability/mod.rs | Added state_update field to PersistentPacket |
| saya/core/src/data_availability/celestia.rs | Updated for new Celestia client API with proper commitment handling |
| saya/core/src/block_ingestor/polling.rs | Added state_update fetching and storage during block ingestion |
| saya/core/src/block_ingestor/mod.rs | Added state_update field to BlockInfo and BlobPointer struct |
| bin/saya/src/sovereign.rs | Implemented dynamic chain ID fetching from network |
| bin/saya/src/persistent.rs | Added Celestia configuration options and conditional DA backend selection |
| bin/saya/src/any.rs | Added AnyDataAvailabilityLayer enum to support multiple DA backends |
| saya/core/Cargo.toml | Added piltover and cainome dependencies |
| Cargo.toml | Updated Celestia dependencies and added piltover/cainome |
| Cargo.lock | Reflected all dependency updates |
Comments suppressed due to low confidence (1)
saya/core/src/data_availability/celestia.rs:94
Multiple `.unwrap()` calls are used throughout the Celestia DA backend without proper error handling:
- Line 52: `new_proof.unwrap()` - should handle the None case gracefully
- Line 56-58: `Client::new(...).await.unwrap()` - network/auth errors should be handled
- Line 68: `ciborium::into_writer(...).unwrap()` - serialization errors should be handled
- Line 71-77: `Blob::new(...).unwrap()` - blob creation errors should be handled
- Line 94: `client.blob_submit(...).await.unwrap()` - submission errors should be handled

Consider replacing these with proper error handling that logs the error and potentially retries or continues processing, rather than panicking the entire service.
```rust
let new_proof = new_proof.unwrap();
debug!("Received new proof");

// TODO: error handling
let client = Client::new(self.rpc_url.as_ref(), Some(&self.auth_token), None, None)
    .await
    .unwrap();

let packet = new_proof
    .clone()
    .into_packet(DataAvailabilityPacketContext {
        prev: self.last_pointer,
    });

// TODO: error handling
let mut serialized_packet: Vec<u8> = Vec::new();
ciborium::into_writer(&packet, &mut serialized_packet).unwrap();

// TODO: error handling
let blob = Blob::new(
    self.namespace,
    serialized_packet,
    None,
    AppVersion::latest(),
)
.unwrap();

let commitment = blob.clone().commitment;
let commitment = commitment.hash();

let tx_config = TxConfig {
    key_name: self.key_name.clone(),
    ..Default::default()
};

debug!(
    block_number = new_proof.block_number(),
    commitment:? = hex::encode(commitment),
    blob_bytes_size = blob.data.len();
    "Submitting Celestia DA blob.",
);

// TODO: error handling
let celestia_block = client.blob_submit(&[blob], tx_config).await.unwrap();
```
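The recoverable-error pattern the review asks for can be sketched in a self-contained way. Here `submit_blob` is a hypothetical stand-in for the Celestia client calls (connect, serialize, submit); the point is logging and skipping a failed packet instead of panicking the whole service:

```rust
// Hypothetical sketch: log and skip on failure instead of `.unwrap()`.
// `submit_blob` stands in for the real Celestia submission path.
fn submit_blob(data: &[u8]) -> Result<u64, String> {
    if data.is_empty() {
        Err("empty blob".to_string())
    } else {
        // Stand-in "inclusion height": just the payload length here.
        Ok(data.len() as u64)
    }
}

fn main() {
    let packets: Vec<&[u8]> = vec![b"", b"proof-bytes"];
    for packet in packets {
        match submit_blob(packet) {
            Ok(height) => println!("submitted, included at height {height}"),
            Err(err) => {
                // Keep the service alive; this packet is skipped, not fatal.
                eprintln!("blob submission failed: {err}; skipping packet");
                continue;
            }
        }
    }
}
```

Whether a failed submission should be skipped, retried, or re-queued depends on the durability guarantees Saya needs; the sketch only shows the non-panicking control flow.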
Pull request overview
Copilot reviewed 14 out of 18 changed files in this pull request and generated 7 comments.
```rust
let state_update = self
    .db
    .get_state_update(new_snos_proof.block_number.try_into().unwrap())
    .await
    .unwrap();
```
Using unwrap() on get_state_update will panic if the state update is not found in the database. Consider proper error handling with map_err or match to gracefully handle missing state updates and avoid process crashes.
Suggested change:

```rust
let state_update = match self
    .db
    .get_state_update(new_snos_proof.block_number.try_into().unwrap())
    .await
{
    Ok(state_update) => state_update,
    Err(err) => {
        debug!(
            "Failed to get state update for block #{}: {}",
            new_snos_proof.block_number,
            err
        );
        continue;
    }
};
```
```rust
let state_update = &JsonRpcClient::new(HttpTransport::new(rpc_url.clone()))
    .get_state_update(BlockId::Number(block_number))
    .await
    .unwrap();
let state_update = match state_update {
    starknet::core::types::MaybePreConfirmedStateUpdate::Update(state_update) => {
        state_update
    }
    // TODO: handle this case properly
    starknet::core::types::MaybePreConfirmedStateUpdate::PreConfirmedUpdate(_) => {
        panic!("PreConfirmedStateUpdate not supported")
    }
};

db.add_state_update(block_number.try_into().unwrap(), state_update.clone())
    .await
    .unwrap();
```
Multiple unwrap() calls in the worker function will cause panics on errors. The RPC call to get_state_update (line 115), the add_state_update database operation (line 128), and the pattern matching (line 116) all use unwrap(). Consider proper error handling to prevent process crashes on network or database errors.
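For transient network or database errors, a bounded retry is one non-panicking option. A minimal, self-contained sketch, where the closure is a hypothetical stand-in for calls like `get_state_update` or `add_state_update` (the real code would be async, which this sketch omits):

```rust
// Hypothetical bounded-retry helper; the closure stands in for a fallible
// RPC or database call. Retries up to `max_attempts` times, logging each
// failure, and returns the last error if all attempts fail.
fn with_retries<T, E: std::fmt::Display>(
    max_attempts: u32,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt + 1 < max_attempts => {
                attempt += 1;
                eprintln!("attempt {attempt} failed: {e}; retrying");
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // Simulate a call that fails twice, then succeeds on the third attempt.
    let mut calls = 0;
    let result = with_retries(3, || {
        calls += 1;
        if calls < 3 { Err("transient RPC error") } else { Ok(42u64) }
    });
    println!("result: {result:?} after {calls} calls");
}
```

A production version would add backoff between attempts and distinguish retryable errors (timeouts) from permanent ones (missing rows).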
```rust
let commitment = blob.clone().commitment;
let commitment = commitment.hash();
```

The blob is cloned before accessing the commitment field. Consider accessing commitment directly from blob without cloning, as cloning the entire blob (which may contain large data) just to read the commitment is inefficient.

Suggested change:

```rust
let commitment = blob.commitment.hash();
```
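The cost being avoided can be shown with a self-contained stand-in. This `Blob` struct is hypothetical (the real celestia-types `Blob` differs, and its commitment type has a `hash()` method rather than being a bare array); the point is only that reading a small `Copy` field directly avoids duplicating the large `data` buffer:

```rust
// Hypothetical stand-in for a blob: a large payload plus a small commitment.
#[derive(Clone)]
struct Blob {
    data: Vec<u8>,       // potentially megabytes
    commitment: [u8; 4], // small and Copy
}

fn main() {
    let blob = Blob {
        data: vec![0u8; 1024],
        commitment: [1, 2, 3, 4],
    };
    // `blob.clone().commitment` would copy `data` too; reading the field
    // directly copies only the 4-byte array, and `blob` stays usable.
    let commitment = blob.commitment;
    println!(
        "commitment {:02x?}, blob data still available: {} bytes",
        commitment,
        blob.data.len()
    );
}
```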
```diff
@@ -68,8 +68,9 @@ where
 ciborium::into_writer(&packet, &mut serialized_packet).unwrap();

 // TODO: error handling
```

The third parameter (None) passed to Blob::new appears to be undocumented in the diff. Consider adding a comment explaining what this parameter represents in the updated Celestia API, as it wasn't present in the previous version and its purpose is unclear.

Suggested change:

```rust
// TODO: error handling
// The third argument is the NMT share version (Option<u8>); `None` means
// "use the default share version" as defined by the Celestia node/API.
```
fixes this: #32