ZK-EVM Bench

zkEVM Benchmarking Workload

This workspace contains code for benchmarking guest programs within different zkVMs. Although several guest programs are supported, the main use case is benchmarking the Ethereum state transition function (STF) by running benchmarks from the execution spec tests.

Workspace Structure

  • crates/metrics: Defines common data structures (BenchmarkRun<Metadata>) for storing and serializing benchmark results with generic metadata support.
  • crates/witness-generator: A library for generating the benchmark fixture files (BlockAndWitness: individual block + witness pairs) required for stateless block validation, sourced from standard Ethereum test fixtures, RPC endpoints, or pre-collected raw input files.
  • crates/witness-generator-cli: A standalone binary that uses the witness-generator library to generate fixture files. These are saved in the zkevm-fixtures-input folder.
  • crates/ere-hosts: A standalone binary that runs benchmarks across different zkVM platforms using pre-generated fixture files from zkevm-fixtures-input.
  • crates/benchmark-runner: Provides a unified framework for running benchmarks across different zkVM implementations, including guest program input generation and execution orchestration.
  • scripts/: Contains helper scripts (e.g., fetching fixtures).

Guest programs are maintained in the eth-act/ere-guests repository and downloaded automatically during benchmark runs.

Workflow Overview

The benchmarking process is decoupled into two distinct phases:

  1. Fixture Generation (witness-generator-cli): Processes Ethereum benchmark fixtures (EEST), RPC data, or raw input files to generate individual BlockAndWitness fixtures as JSON files saved in zkevm-fixtures-input/.
  2. Benchmark Execution (ere-hosts): Reads from zkevm-fixtures-input/ and runs performance benchmarks across different zkVM platforms.

This decoupling provides several benefits:

  • Independent fixture generation and benchmark execution
  • Reuse of generated fixtures across multiple benchmark runs

Prerequisites

  1. Rust Toolchain: A standard Rust installation managed by rustup.
  2. Docker: All zkVMs run via EreDockerized, so you don't need to install zkVM-specific toolchains locally; Docker provides the compilation and execution environments.
  3. Git: Required for cloning the repository.
  4. Common Shell Utilities: The scripts require a bash-compatible shell and standard utilities like curl, jq, and tar.

Setup

  1. Clone the Repository:

    git clone https://github.com/eth-act/zkevm-benchmark-workload.git
    cd zkevm-benchmark-workload
  2. Generate Benchmark Input Files (required for stateless-validator guest program):

    # Generate from released EEST test fixtures, filtered by name
    cargo run --release -- tests --include 10M --include Prague
    
    # Or generate from local EEST fixtures
    cargo run --release -- tests --eest-fixtures-path /path/to/local/eest/fixtures
    
    # Or generate from RPC
    cargo run --release -- rpc --last-n-blocks 2 --rpc-url <your-rpc-url>
    
    # Or listen for new blocks continuously
    cargo run --release -- rpc --follow --rpc-url <your-rpc-url>
    
    # Or generate from pre-collected raw input files
    cargo run --release -- raw-input --input-folder /path/to/raw/inputs

    This creates individual .json files in the zkevm-fixtures-input/ directory that will be consumed by the benchmark runner.

  3. Run Benchmarks:

    Run benchmarks using the generated fixture files. All zkVMs are dockerized, so no additional setup is required:

    cd crates/ere-hosts
    
    # Run Ethereum stateless validator benchmarks with Reth execution client
    cargo run --release -- --zkvms risc0 stateless-validator --execution-client reth
    
    # Run Ethereum stateless validator benchmarks with Ethrex execution client
    cargo run --release -- --zkvms sp1 stateless-validator --execution-client ethrex
    
    # Run empty program benchmarks (for measuring zkVM overhead)
    cargo run --release -- empty-program
    
    # Run block encoding length benchmarks
    cargo run --release -- block-encoding-length --loop-count 100 --format rlp
    
    # Run block encoding length benchmarks (with SSZ encoding format)
    cargo run --release -- block-encoding-length --loop-count 100 --format ssz
    
    # Use custom input folder for stateless validator benchmarks
    cargo run --release -- stateless-validator --execution-client reth --input-folder my-fixtures
    
    # Dump raw input files used in benchmarks (opt-in)
    cargo run --release -- --zkvms sp1 --dump-inputs my-inputs stateless-validator --execution-client reth

    See the respective README files in each crate for detailed usage instructions.

Dumping Input Files

The --dump-inputs flag allows you to save the raw serialized input bytes used for each benchmark run. This is useful for:

  • Debugging guest programs independently
  • Analyzing input data characteristics
  • Replaying specific test cases outside the benchmark framework

When specified, input files are saved to the designated folder with the following structure:

{dump-folder}/
  └── {sub-folder}/       # e.g., "reth" for stateless-validator, empty for others
      └── {name}.bin      # Input files (one per benchmark)

Example usage:

cd crates/ere-hosts

# Dump inputs for stateless validator with Reth
cargo run --release -- --zkvms sp1 --dump-inputs debug-inputs stateless-validator --execution-client reth

# This creates files like:
# debug-inputs/reth/block-12345.bin
# debug-inputs/reth/block-12346.bin

Note: Input files are zkVM-independent (the same input is used across all zkVMs), so they're only written once even when benchmarking multiple zkVMs.

Proof Generation & Verification

The benchmark runner supports a decoupled prove/verify workflow using the --action flag. This allows generating proofs on one machine and verifying them on another.

Actions:

  • --action execute (default): Only execute the zkVM, no proof generation.
  • --action prove: Execute and generate a zkVM proof, with optional proof persistence via --save-proofs.
  • --action verify: Verify pre-generated proofs loaded from disk or a remote URL.

Step 1: Generate and save proofs

cd crates/ere-hosts

# Prove and save proof artifacts to a folder
cargo run --release -- --zkvms sp1 --action prove --save-proofs my-proofs \
    stateless-validator --execution-client reth

This creates proof files in the following structure:

my-proofs/
└── reth-v1.10.2/
    └── sp1-v4.0.0/
        ├── fixture1.proof
        └── fixture2.proof

Step 2: Verify proofs

From a local folder:

cargo run --release -- --zkvms sp1 --action verify --proofs-folder my-proofs \
    stateless-validator --execution-client reth

From a remote .tar.gz archive (e.g., hosted on GitHub releases or S3):

cargo run --release -- --zkvms sp1 --action verify \
    --proofs-url https://example.com/proofs.tar.gz \
    stateless-validator --execution-client reth

When using --proofs-url, the archive is downloaded and extracted to a temporary directory that is cleaned up after verification completes. The .tar.gz should contain the same folder structure as --save-proofs produces.

License

Licensed under either of

  • Apache License, Version 2.0 (LICENSE-APACHE)
  • MIT license (LICENSE-MIT)

at your option.
