feat: engine_newPayloadV3: validate, execute & store block #222
Merged
Conversation
github-merge-queue bot pushed a commit that referenced this pull request on Aug 6, 2024

…idation (#220)

**Motivation**
Fetch the cancun time from the DB when validating the payload v3 timestamp.

**Description**
* Store `cancun_time` in the DB
* Use the stored `cancun_time` when validating the payload timestamp in `engine_newPayloadV3`
* Replace the update methods for chain data in `Store` with `set_chain_config`

Bonus:
* Move `NewPayloadV3Request` to its corresponding module and update its parsing to match the rest of the codebase

Closes None, but is part of #51
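The fork-timestamp check described above can be sketched as follows. This is a minimal illustration, not ethrex's actual API: `ChainConfig`, `validate_v3_timestamp`, and the error variant are assumed names.

```rust
// A V3 (Cancun) payload is only acceptable if its timestamp is at or past
// the chain's stored cancun activation time.
#[derive(Debug, PartialEq)]
enum PayloadValidationError {
    UnsupportedFork,
}

struct ChainConfig {
    // Stored in the DB per this commit; None if Cancun is not scheduled.
    cancun_time: Option<u64>,
}

fn validate_v3_timestamp(
    config: &ChainConfig,
    timestamp: u64,
) -> Result<(), PayloadValidationError> {
    match config.cancun_time {
        Some(cancun_time) if timestamp >= cancun_time => Ok(()),
        _ => Err(PayloadValidationError::UnsupportedFork),
    }
}

fn main() {
    let config = ChainConfig { cancun_time: Some(1_710_000_000) };
    assert!(validate_v3_timestamp(&config, 1_710_000_100).is_ok());
    assert_eq!(
        validate_v3_timestamp(&config, 1_000),
        Err(PayloadValidationError::UnsupportedFork)
    );
    println!("ok");
}
```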
ElFantasma pushed a commit that referenced this pull request on Aug 6, 2024

**Motivation**
Having a way to obtain latest/earliest/pending/etc. block numbers.

**Description**
* Add get and update methods for the earliest, latest, finalized, safe & pending block numbers to `Store` & `StoreEngine`
* Resolve block numbers from tags in rpc methods

Closes None, but fixes many and enables others
github-merge-queue bot pushed a commit that referenced this pull request on Aug 7, 2024

…pts & withdrawals (#225)

**Motivation**
These roots are currently being calculated using `from_sorted_iter`, but without being sorted beforehand. This PR replaces this behavior with inserting directly into the trie to ensure that it is ordered, then computing the root (the same fix that was previously applied to the storage root).

**Description**
Fixes `compute_transactions_root`, `compute_receipts_root` & `compute_withdrawals_root`

**Notes**
After this change, the payloads created by the kurtosis local net now pass the block hash validations in `engine_newPayloadV3`

Closes None, but is needed for #51

Co-authored-by: Federica Moletta <[email protected]>
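The root cause of the bug above is worth spelling out: these tries are keyed by `rlp(index)`, and RLP encodings of consecutive indices are not in lexicographic order, so feeding them to `from_sorted_iter` in index order violates its precondition. A minimal sketch (this is an illustrative RLP encoder for small integers, not ethrex's):

```rust
// RLP of 0 is the empty byte string (0x80), while 1..=127 encode as the byte
// itself, so rlp(0) = [0x80] sorts AFTER rlp(1) = [0x01] lexicographically.
fn rlp_encode_index(i: u64) -> Vec<u8> {
    match i {
        0 => vec![0x80],           // empty byte string
        1..=0x7f => vec![i as u8], // single byte is its own encoding
        _ => {
            let bytes: Vec<u8> = i
                .to_be_bytes()
                .iter()
                .copied()
                .skip_while(|b| *b == 0)
                .collect();
            let mut out = vec![0x80 + bytes.len() as u8];
            out.extend(bytes);
            out
        }
    }
}

fn main() {
    let keys: Vec<Vec<u8>> = (0..4).map(rlp_encode_index).collect();
    // Index order is not key order:
    assert!(keys[0] > keys[1]);
    let mut sorted = keys.clone();
    sorted.sort();
    assert_ne!(keys, sorted);
    println!("ok");
}
```

Inserting into the trie one key at a time sidesteps the ordering requirement entirely, which is the fix this commit applies.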
emirongrr pushed a commit to emirongrr/ethrex that referenced this pull request on Sep 3, 2025

**Motivation**
Implement the gas cost calculation updates defined in [EIP-7883](https://eips.ethereum.org/EIPS/eip-7883).
Add the boundary checks defined in the [spec](https://github.com/ethereum/execution-specs/blob/51cabd86502df7af596f5c78a0c4e00a4f92822c/src/ethereum/osaka/vm/precompiled_contracts/modexp.py#L31).

**Description**
* Adds the gas cost calculation for the modexp precompile as of the Osaka fork
* Adds boundary checks for modexp inputs as of the Osaka fork
* Gates the base fee being charged when modulus & base were == 0 under pre-Osaka forks (not in the spec & caused tests to fail)
* `PrecompileFn` now takes a `fork: Fork` argument (there was no other way around this)

Closes lambdaclass#4155
emirongrr pushed a commit to emirongrr/ethrex that referenced this pull request on Sep 3, 2025

…ss#4029)

**Motivation**
* Start the block building process as soon as we get a payload from a FCU
* Only wait for the building process to finish when we get a `GetPayload` message
* Keep the latest 10 payloads in memory instead of filling up the DB with old payloads
* Continuously rebuild the payload until it is requested / the slot time is up
* Only remove transactions from the mempool when we execute a payload

**Description**
* Add `Blockchain` methods `initiate_payload_build` & `get_payload` to start & finish a payload build process
* Add a `Blockchain` field `payloads` containing a vector of payload ids and payload building tasks / built payloads (max 10 at a time)
* Remove any instance of payloads being stored in the DB

Closes lambdaclass#3920

Co-authored-by: Tomás Grüner <[email protected]>
Co-authored-by: Martin Paulucci <[email protected]>
github-merge-queue bot pushed a commit that referenced this pull request on Sep 4, 2025

**Motivation**
I ran into the following panic while running ethrex on the sepolia testnet:
```
thread 'tokio-runtime-worker' panicked at crates/networking/p2p/rlpx/eth/eth68/receipts.rs:30:69:
index out of bounds: the len is 0 but the index is 0
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
This PR prevents this panic from happening.

**Description**
* (eth68 message) `Receipts::new` no longer assumes we have at least one receipt for a given block
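The defensive pattern behind this fix can be sketched as follows; the `Receipt` type and function name here are illustrative stand-ins, not ethrex's actual code:

```rust
// Indexing with `receipts[0]` panics on an empty slice; `first()` returns
// an Option instead, so an empty receipt list for a block is handled
// gracefully rather than crashing the runtime worker.
#[derive(Debug, Clone, PartialEq)]
struct Receipt {
    cumulative_gas_used: u64,
}

fn first_receipt_gas(receipts: &[Receipt]) -> Option<u64> {
    receipts.first().map(|r| r.cumulative_gas_used)
}

fn main() {
    // Empty block: no panic, just None.
    assert_eq!(first_receipt_gas(&[]), None);
    assert_eq!(
        first_receipt_gas(&[Receipt { cumulative_gas_used: 21_000 }]),
        Some(21_000)
    );
    println!("ok");
}
```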
github-merge-queue bot pushed a commit that referenced this pull request on Sep 4, 2025

…es NEON..." (#4322)

This reverts commit 97d4827.

**Description**
This commit broke blockchain tests on ARM:
```bash
failures:
    blockchain_runner::istanbul/eip152_blake2/blake2/blake2b.json
    blockchain_runner::istanbul/eip152_blake2/blake2/blake2b_large_gas_limit.json
    blockchain_runner::istanbul/eip152_blake2/blake2/blake2b_gas_limit.json
```
github-merge-queue bot pushed a commit that referenced this pull request on Sep 4, 2025

**Motivation**
The server script runner doesn't stop when a run fails; it restarts, which prevents debugging.

**Description**
The script now stops if a run fails.
github-merge-queue bot pushed a commit that referenced this pull request on Sep 5, 2025

**Description**
Make the replay test not required so we can merge despite it failing; in the future, when things become more stable, we can make it required again.
github-merge-queue bot pushed a commit that referenced this pull request on Sep 5, 2025

**Motivation**
The current version just broadcasts every transaction we receive to every active peer we have. This presents some issues in performance, memory, and even in code responsibilities.

**Description**
Reorganizes the transaction broadcast logic.

When a new peer connection is initialized, we send the hashes of all the txs in the mempool. Then, every time we receive new txs, we add them to the mempool and send them to our peers: we send the full txs to a sqrt of our peers, while sending only the hashes to the rest.

The mempool itself is responsible for knowing which txs need to be broadcasted. Every time we add a new tx to the mempool we add it to the list of pending broadcasts; once we broadcast them, we erase them. If a tx is removed from the mempool before being broadcasted, it is also deleted from the broadcast pool.

There is one possible scenario where we could send a tx twice to the same peer: if, after the tx left the mempool, we receive it again and add it back, we would send it again to the peers.

Closes #4241
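The sqrt-broadcast split described above can be sketched like this. Peer selection strategy and names are illustrative assumptions, not the actual ethrex implementation:

```rust
// Send full transaction bodies to ~sqrt(n) peers and only announce hashes
// to the rest, the standard devp2p broadcast strategy this commit adopts.
fn split_broadcast_targets<T: Clone>(peers: &[T]) -> (Vec<T>, Vec<T>) {
    let full_count = ((peers.len() as f64).sqrt().ceil() as usize).min(peers.len());
    let full = peers[..full_count].to_vec();       // get full tx bodies
    let hashes_only = peers[full_count..].to_vec(); // get hash announcements
    (full, hashes_only)
}

fn main() {
    let peers: Vec<u32> = (0..16).collect();
    let (full, hashes) = split_broadcast_targets(&peers);
    assert_eq!(full.len(), 4);    // sqrt(16) = 4 peers get bodies
    assert_eq!(hashes.len(), 12); // the remaining 12 get hashes
    println!("ok");
}
```

In a real implementation the "full" subset would typically be sampled randomly per broadcast rather than taken as a prefix.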
github-merge-queue bot pushed a commit that referenced this pull request on Sep 5, 2025

**Description**
Adds an option to the makefile and server runner script to run with debug assertions enabled.
github-merge-queue bot pushed a commit that referenced this pull request on Sep 5, 2025

**Motivation**
The current `update_pivot` function in `crates/networking/p2p/sync.rs` has considerable room for improvement, primarily in its logic.

**Description**
The function now estimates the new block number based on the block timestamp.

Closes #4261

Co-authored-by: Javier Rodríguez Chatruc <[email protected]>
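The timestamp-based estimate can be sketched as follows. The 12-second slot constant and all names are assumptions for illustration, not necessarily what `update_pivot` uses:

```rust
// With a fixed slot duration, blocks elapsed since the old pivot is roughly
// (now - pivot_timestamp) / SLOT_SECS.
const SLOT_SECS: u64 = 12; // mainnet slot duration (assumed here)

fn estimate_block_number(pivot_number: u64, pivot_timestamp: u64, now: u64) -> u64 {
    // saturating_sub avoids underflow if the local clock is behind.
    pivot_number + now.saturating_sub(pivot_timestamp) / SLOT_SECS
}

fn main() {
    // 120 seconds elapsed => ~10 new blocks past the old pivot.
    assert_eq!(estimate_block_number(1_000, 10_000, 10_120), 1_010);
    // Clock skew (now < pivot_timestamp) saturates instead of panicking.
    assert_eq!(estimate_block_number(1_000, 10_000, 9_000), 1_000);
    println!("ok");
}
```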
github-merge-queue bot pushed a commit that referenced this pull request on Sep 5, 2025

**Motivation**
Add the max blobs per tx check introduced by [EIP-7594](https://eips.ethereum.org/EIPS/eip-7594).

**Description**
* Add the max blobs per tx check to the levm pre-check sequence
* Enable ef tests for EIP-7594

Closes #4152
github-merge-queue bot pushed a commit that referenced this pull request on Sep 8, 2025

**Description**
`ExecutionResult` was inaccurate with LEVM, especially the gas used, which subtracted the refunded gas even though that is already done in LEVM. We don't have the ideal endpoints to try this out, because the only ones that use this are `eth_estimateGas` and `eth_call`. The only way to see if it works fine is to try out the former RPC method with a transaction that triggers refunds... It would be good to have `eth_simulateV1`, I guess.

Here's evidence that the gas_used in the VM is the `total gas used - refunded gas`:
https://github.com/lambdaclass/ethrex/blob/1918c5cedaf3f76df1bc01aba3b20d1d95162c84/crates/vm/levm/src/hooks/default_hook.rs#L199-L201

Bear with me, `exec_gas_consumed` is the same as `actual_gas_used`:
https://github.com/lambdaclass/ethrex/blob/1918c5cedaf3f76df1bc01aba3b20d1d95162c84/crates/vm/levm/src/hooks/default_hook.rs#L164
github-merge-queue bot pushed a commit that referenced this pull request on Sep 8, 2025

**Motivation**
The ETH client doesn't log much, making it difficult to debug different problems.

**Description**
Add trace, debug, and warning logs for better debugging.
github-merge-queue bot pushed a commit that referenced this pull request on Sep 8, 2025

**Motivation**
Sometimes we want to change the log level, primarily for debugging, and restarting the node would break the case under study.

**Description**
Add a custom `admin_setLogLevel` endpoint that enables the node operator to specify a new log filter, just like with `RUST_LOG`.

How to test:
1. Run a node (e.g. `ethrex --dev`)
2. Change the log levels:
```
curl localhost:8545 -H 'content-type: application/json' -d '{"jsonrpc": "2.0", "id": "1", "method": "admin_setLogLevel", "params": ["ethrex_dev::block_producer=info"]}'
```
3. You should now only see `ethrex_dev::block_producer` logs

> [!WARNING]
> The `admin` namespace is currently unauthenticated and cannot be turned off. Be aware of this on public nodes.

Closes #4299
github-merge-queue bot pushed a commit that referenced this pull request on Sep 8, 2025

**Motivation**
Currently the L2 metrics gatherer component scrapes data every 1s, including L1 data (gas price and last committed/verified batch). This consumes a lot of unnecessary resources.

**Description**
Increase the check interval to 5 seconds.
github-merge-queue bot pushed a commit that referenced this pull request on Sep 8, 2025

**Motivation**
We lack a way to get the latest sealed batch, something similar to `eth_blockNumber`.

**Description**
Add a new RPC method `ethrex_batchNumber` that returns the last sealed batch.

How to test:
1. Start an L2 node:
```
COMPILE_CONTRACTS=true cargo run --release -- l2 --dev
```
2. Wait about 1 minute until a batch is sealed.
3. Request the last batch number:
```
curl localhost:1729 -H 'content-type: application/json' -d '{"jsonrpc": "2.0", "id": "1", "method": "ethrex_batchNumber"}'
```
It should return something like `{"id":"1","jsonrpc":"2.0","result":"0x2"}`

Co-authored-by: Copilot <[email protected]>
github-merge-queue bot pushed a commit that referenced this pull request on Sep 8, 2025

**Motivation**
We want to be able to compile ethrex without compiling revm; this PR enables it.
github-merge-queue bot pushed a commit that referenced this pull request on Sep 9, 2025

**Motivation**
Performance improvement on snap sync.

**Description**
This PR refactors the P2P networking layer by centralizing peer scoring functionality into a unified system. The changes move from multiple scattered peer scoring implementations to a single `PeerScore` module that manages peer reputation and selection consistently across all sync operations.

Key changes include:
- Creation of a centralized `PeerScores` system to replace scattered scoring logic
- Removal of local peer scoring implementations from the state and storage healing modules
- Updated peer selection to use the unified scoring system across all sync operations

Closes #2073

**Known issues**
#4352: the usage of this method is required for scoring to be useful, but it shouldn't always be used, for performance reasons. This is error prone, so eventually the scoring data should be embedded in the kademlia table's `PeerData`.

Co-authored-by: Javier Chatruc <[email protected]>
Co-authored-by: Juan Munoz <[email protected]>
Co-authored-by: Mateo Rico <[email protected]>
Co-authored-by: Esteban Dimitroff Hodi <[email protected]>
Co-authored-by: Lucas Fiegl <[email protected]>
Co-authored-by: Gianbelinche <[email protected]>
Co-authored-by: Francisco Xavier Gauna <[email protected]>
Co-authored-by: ricomateo <[email protected]>
Co-authored-by: juan518munoz <[email protected]>
Co-authored-by: Javier Rodríguez Chatruc <[email protected]>
Co-authored-by: Copilot <[email protected]>
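A centralized scoring table of the kind described can be sketched as below. The struct, method names, and scoring weights are illustrative assumptions, not ethrex's actual `PeerScores` API:

```rust
// One map from peer id to reputation score, shared by all sync operations,
// replacing per-module scoring tables.
use std::collections::HashMap;

#[derive(Default)]
struct PeerScores {
    scores: HashMap<u64, i64>, // peer id -> score
}

impl PeerScores {
    fn record_success(&mut self, peer: u64) {
        *self.scores.entry(peer).or_insert(0) += 1;
    }
    fn record_failure(&mut self, peer: u64) {
        // Penalize failures harder than rewarding successes (assumed weight).
        *self.scores.entry(peer).or_insert(0) -= 2;
    }
    fn best_peer(&self) -> Option<u64> {
        self.scores.iter().max_by_key(|(_, s)| **s).map(|(p, _)| *p)
    }
}

fn main() {
    let mut scores = PeerScores::default();
    scores.record_success(1);
    scores.record_success(2);
    scores.record_success(2);
    scores.record_failure(1);
    assert_eq!(scores.best_peer(), Some(2));
    println!("ok");
}
```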
github-merge-queue bot pushed a commit that referenced this pull request on Sep 9, 2025

**Motivation**
Currently, we have no way of resuming an archive sync if it crashes or we need to stop it. Our only option is to begin from the start, which can make us lose precious time in the case of unexpected crashes during long archive syncs.

This PR proposes adding a `--checkpoint` flag which will periodically write the current archive sync's checkpoint data (current root, last hash, current file, etc.) to a file, which can later be used to resume the archive sync from that same checkpoint.

Co-authored-by: cdiielsi <[email protected]>
github-merge-queue bot pushed a commit that referenced this pull request on Sep 9, 2025

**Motivation**
Remove cases where we use `as usize` to cast `u64` values when not needed.

Contributes to #4081
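The hazard with `as usize` is that it silently truncates on 32-bit targets. A sketch of the fallible alternative (the helper name here is illustrative):

```rust
// `value as usize` wraps silently when u64 doesn't fit in usize (32-bit
// targets); `usize::try_from` surfaces the overflow as an error instead.
use std::convert::TryFrom;

fn checked_len(len: u64) -> Option<usize> {
    usize::try_from(len).ok()
}

fn main() {
    assert_eq!(checked_len(42), Some(42));
    // On a 32-bit target, checked_len(u64::MAX) would be None, whereas
    // `u64::MAX as usize` would silently produce a wrapped value.
    println!("ok");
}
```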
github-merge-queue bot pushed a commit that referenced this pull request on Sep 9, 2025

**Motivation**
These handlers are currently sync and can block the runtime for quite a long time. This is not the ideal solution; we still need to improve things so the handler itself doesn't take a long time to fetch the requested nodes. We'll tackle this in a subsequent PR.

Co-authored-by: Javier Chatruc <[email protected]>
github-merge-queue bot pushed a commit that referenced this pull request on Sep 10, 2025

**Motivation**
Using "localhost" as the default bind address can cause some problems.

**Description**
Changed the default to `0.0.0.0` for the ETH RPC and `127.0.0.1` for the Engine API.
github-merge-queue bot pushed a commit that referenced this pull request on Sep 10, 2025

**Motivation**
There are several possible sources of panic in `handshake.rs` and `codec.rs`.

**Description**
Solves the possible panics.

Closes #4314
github-merge-queue bot pushed a commit that referenced this pull request on Sep 11, 2025

**Motivation**
When adding support for trie iteration from a given starting point, we added a `debug_assert!` that checks that the internal stack isn't empty. This is incorrect: it is entirely possible for the stack to be empty if the trie is empty, in which case we simply return an empty iterator, which is fine.
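The corrected iterator behavior can be sketched as follows; the structure is an illustrative stand-in for the real trie iterator:

```rust
// An empty internal stack means iteration is simply over (e.g. the trie is
// empty); `next` returns None rather than asserting.
struct TrieIter {
    stack: Vec<u64>, // pending node handles (placeholder type)
}

impl Iterator for TrieIter {
    type Item = u64;
    fn next(&mut self) -> Option<u64> {
        // No debug_assert!(!self.stack.is_empty()) here: an empty stack is
        // a legitimate terminal state, not a bug.
        self.stack.pop()
    }
}

fn main() {
    let empty = TrieIter { stack: vec![] };
    assert_eq!(empty.count(), 0); // empty trie -> empty iterator, no panic
    let one = TrieIter { stack: vec![42] };
    assert_eq!(one.collect::<Vec<_>>(), vec![42]);
    println!("ok");
}
```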
github-merge-queue bot pushed a commit that referenced this pull request on Sep 11, 2025

…EFTests (#4284)

**Motivation**
Implement the changes defined in [EIP-7918](https://eips.ethereum.org/EIPS/eip-7918) & [EIP-7892](https://eips.ethereum.org/EIPS/eip-7892).
Fix EFTests not using the `BlobSchedule` defined in the test files.

**Description**
* Update `calc_excess_blob_gas` according to the EIP
* Add `ForkBlobSchedule`s for the Osaka & BPO1-BPO5 forks
* Enable EIP-7918 & EIP-7892 ef tests
* Use the `BlobSchedule` defined in each test file when running EF Tests

Closes #4157 & Closes #4156
github-merge-queue bot pushed a commit that referenced this pull request on Sep 11, 2025

…ckRangeUpdate` handling (#4360)

**Motivation**
Being able to support eth/69 messages.

**Description**
* Add missing validation to `BlockRangeUpdate` (latest_hash != zero)
* Disconnect from peers if `BlockRangeUpdate` is invalid
* Enable eth/69 as a supported capability
* Use the negotiated eth capability version when decoding incoming messages
* Enable hive tests for `BlockRangeUpdate`

Closes #2785
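The zero-hash validation mentioned above can be sketched like this; the message type and function are illustrative, not ethrex's actual eth/69 code:

```rust
// A BlockRangeUpdate whose latest hash is the all-zero hash is invalid;
// per this commit, the peer sending it gets disconnected.
struct BlockRangeUpdate {
    latest_hash: [u8; 32],
}

fn validate_block_range_update(msg: &BlockRangeUpdate) -> Result<(), &'static str> {
    if msg.latest_hash == [0u8; 32] {
        return Err("invalid BlockRangeUpdate: zero latest_hash");
    }
    Ok(())
}

fn main() {
    let bad = BlockRangeUpdate { latest_hash: [0u8; 32] };
    assert!(validate_block_range_update(&bad).is_err());

    let mut hash = [0u8; 32];
    hash[0] = 1;
    let good = BlockRangeUpdate { latest_hash: hash };
    assert!(validate_block_range_update(&good).is_ok());
    println!("ok");
}
```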
github-merge-queue bot pushed a commit that referenced this pull request on Sep 11, 2025

…4363)

**Motivation**
The TxBroadcaster does not follow the spec, since it can send a tx to a peer that already knows of it.

**Description**
Adds a HashMap tracking which transactions each peer knows, and avoids sending a tx if a peer already knows it. After some time, this HashMap is pruned.

Closes #4356
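The per-peer known-transactions tracking can be sketched as below. Names and types are illustrative assumptions, not the actual TxBroadcaster code:

```rust
// Before broadcasting, filter out hashes a peer already knows, then record
// what we are about to send so we never repeat it to the same peer.
use std::collections::{HashMap, HashSet};

#[derive(Default)]
struct KnownTxs {
    by_peer: HashMap<u64, HashSet<[u8; 32]>>, // peer id -> known tx hashes
}

impl KnownTxs {
    fn filter_unknown(&mut self, peer: u64, txs: &[[u8; 32]]) -> Vec<[u8; 32]> {
        let known = self.by_peer.entry(peer).or_default();
        let fresh: Vec<_> = txs
            .iter()
            .copied()
            .filter(|h| !known.contains(h))
            .collect();
        known.extend(fresh.iter().copied());
        fresh
    }
}

fn main() {
    let mut known = KnownTxs::default();
    let (tx_a, tx_b) = ([1u8; 32], [2u8; 32]);
    assert_eq!(known.filter_unknown(7, &[tx_a, tx_b]).len(), 2); // both fresh
    assert_eq!(known.filter_unknown(7, &[tx_a]).len(), 0);       // already sent
    println!("ok");
}
```

Periodic pruning of `by_peer` (mentioned in the commit) would bound the map's memory use.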
github-merge-queue bot pushed a commit that referenced this pull request on Sep 12, 2025

**Description**
Add a `docker-compose.yaml` file that deploys a consensus client (lighthouse) and ethrex with just one command:
```sh
docker compose up
```
`ETHREX_NETWORK` can be used to set the network; the default is `mainnet`.

Co-authored-by: Copilot <[email protected]>
**Motivation**
Being able to fully validate, execute and store blocks received by `engine_newPayloadV3`.

**Description**
* Implement the `engine_newPayloadV3` endpoint

Fixes:
* `Genesis.get_block`: use `INITIAL_BASE_FEE` as `base_fee_per_gas` (with these fixes the genesis block hash now matches the `parentBlockHash` of the next block when running with kurtosis)
* `beacon_root_contract_call` now sets the block's `gas_limit` to avoid tx validation errors

Misc:
* `compute_transactions_root` is now a standalone function matching the other compute functions
* Add `ExecutionPayloadV3` & `PayloadStatus` types for the `engine_newPayloadV3` endpoint

**Other**
We can now execute payloads when running with kurtosis 🚀
Disclaimer: We are still getting some execution errors in later blocks that we need to look into (they are all currently passing the block validations).

Closes #51
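The validate / execute / store flow this PR implements can be sketched at a high level as follows. All types and the validity flags are simplified stand-ins for illustration, not ethrex's actual `engine_newPayloadV3` handler:

```rust
// engine_newPayloadV3, condensed: validate the payload, execute the block,
// store it, then report a PayloadStatus to the consensus client.
#[derive(Debug, PartialEq)]
enum PayloadStatus {
    Valid,
    Invalid,
}

struct ExecutionPayloadV3 {
    block_number: u64,
    // Stand-ins for the real checks (block hash, fork timestamp, parent):
    validations_pass: bool,
    executes_ok: bool,
}

fn new_payload_v3(payload: &ExecutionPayloadV3, stored: &mut Vec<u64>) -> PayloadStatus {
    // 1. Validate the payload.
    if !payload.validations_pass {
        return PayloadStatus::Invalid;
    }
    // 2. Execute the block's transactions.
    if !payload.executes_ok {
        return PayloadStatus::Invalid;
    }
    // 3. Store the block and report success.
    stored.push(payload.block_number);
    PayloadStatus::Valid
}

fn main() {
    let mut stored = Vec::new();
    let good = ExecutionPayloadV3 { block_number: 1, validations_pass: true, executes_ok: true };
    assert_eq!(new_payload_v3(&good, &mut stored), PayloadStatus::Valid);
    assert_eq!(stored, vec![1]);

    let bad = ExecutionPayloadV3 { block_number: 2, validations_pass: false, executes_ok: true };
    assert_eq!(new_payload_v3(&bad, &mut stored), PayloadStatus::Invalid);
    assert_eq!(stored, vec![1]); // invalid blocks are never stored
    println!("ok");
}
```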