core: implement in-block prefetcher #31557
Conversation
@MariusVanDerWijden @fjl Please take a look. This PR is ready for review.
return nil
}
// Preload the touched accounts and storage slots in advance
sender, err := types.Sender(signer, tx)
Can this realistically fail? Only if the block is at a fork boundary and the signer changes, or if the signature was invalid, right? Shouldn't we just exit here, and otherwise always warm the sender?
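For reference, the handling suggested here would look roughly like the sketch below. It only illustrates the control flow, not the PR's actual code; in particular, the `state.Reader` parameter and its `Account` method are assumptions modelled on the `reader.Code` call in the quoted diff.

```go
import (
	"github.com/ethereum/go-ethereum/core/state"
	"github.com/ethereum/go-ethereum/core/types"
)

// prefetchSender sketches the suggested flow: bail out if the sender cannot
// be recovered (invalid signature, or the signer changed at a fork boundary),
// and otherwise always warm the sender account.
func prefetchSender(reader state.Reader, signer types.Signer, tx *types.Transaction) error {
	sender, err := types.Sender(signer, tx)
	if err != nil {
		// The transaction cannot be processed anyway, so there is nothing
		// useful to prefetch for it.
		return err
	}
	// Warm the sender unconditionally; the Account method is assumed here.
	_, err = reader.Account(sender)
	return err
}
```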
statedb.IntermediateRoot(true)
// Preload the contract code if the destination has non-empty code
if account != nil && !bytes.Equal(account.CodeHash, types.EmptyCodeHash.Bytes()) {
	reader.Code(*tx.To(), common.BytesToHash(account.CodeHash))
Is this faster than blindly loading the code?
Should we also follow 7702 delegations here already?
> Is this faster than blindly loading the code?

Not sure, but it's cheap anyway?
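For context on the 7702 question above: following a delegation would mean checking whether the destination's code is an EIP-7702 delegation designation (the 3-byte prefix 0xef0100 followed by a 20-byte address) and, if so, also warming the delegated target's code. A hand-rolled sketch of that check; the helper is made up for illustration and is not the PR's code:

```go
import (
	"bytes"

	"github.com/ethereum/go-ethereum/common"
)

// delegationPrefix is the EIP-7702 delegation designation prefix.
var delegationPrefix = []byte{0xef, 0x01, 0x00}

// parseDelegation returns the delegated address if code is a 7702 delegation
// designation (0xef0100 || address, 23 bytes in total). A prefetcher could
// use this to warm the delegation target's account and code as well.
func parseDelegation(code []byte) (common.Address, bool) {
	if len(code) != 23 || !bytes.HasPrefix(code, delegationPrefix) {
		return common.Address{}, false
	}
	return common.BytesToAddress(code[3:]), true
}
```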
// This operation incurs significant memory allocations due to
// trie hashing and node decoding. TODO(rjl493456442): investigate
// ways to mitigate this overhead.
stateCpy.IntermediateRoot(true)
We're only checking the interrupt at the beginning of the call, which was fine previously when we executed the transactions linearly. But now the interrupt will most likely not stop any work from being done, since all goroutines are likely to be past the entry point. I'm wondering whether it would make sense to start a second goroutine that does something like this:
go func(evm *EVM, interrupt *atomic.Bool) {
	for {
		time.Sleep(time.Millisecond)
		if interrupt != nil && interrupt.Load() {
			evm.Cancel()
			return
		}
	}
}(evm, interrupt)

(or something similar, you get the gist)
Not really. We limit the parallelism of the workers to runtime.NumCPU() / 2. If 16 CPU cores are available, only 8 goroutines will be created, and transactions are assigned to these workers linearly.
If the prefetching is terminated, there is still a very high chance of stopping/preventing the following tx executions.
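A rough sketch of that worker-pool shape, with the interrupt re-checked before every transaction; the names and the execute callback are illustrative, not the PR's actual code:

```go
import (
	"runtime"
	"sync"
	"sync/atomic"

	"github.com/ethereum/go-ethereum/core/types"
)

// prefetchTxs fans the block's transactions out to a bounded pool of workers.
// Parallelism is capped at runtime.NumCPU()/2, and each worker checks the
// interrupt flag before picking up the next transaction, so a termination
// request stops most of the remaining work even mid-block.
func prefetchTxs(txs []*types.Transaction, interrupt *atomic.Bool, execute func(*types.Transaction)) {
	workers := runtime.NumCPU() / 2
	if workers < 1 {
		workers = 1
	}
	// Buffered channel so all transactions can be queued up front.
	jobs := make(chan *types.Transaction, len(txs))
	for _, tx := range txs {
		jobs <- tx
	}
	close(jobs)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for tx := range jobs {
				if interrupt != nil && interrupt.Load() {
					return // abort the remaining work once interrupted
				}
				execute(tx)
			}
		}()
	}
	wg.Wait()
}
```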
Ah yeah, I missed that. Makes sense
Allocations are really a bit crazy :D Going up to 500 MB/s. Just added two nitpicks, otherwise this looks good to me.
Co-authored-by: Marius van der Wijden <[email protected]>
LGTM
This pull request enhances the block prefetcher by executing transactions in parallel
to warm the cache alongside the main block processor.
Unlike the original prefetcher, which only executes the next block and is limited to chain
syncing, the new implementation can be applied to any block. This makes it useful not
only during chain sync but also for regular block insertion after the initial sync.
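For orientation, a minimal sketch of the overall pattern the description refers to: the prefetcher executes the block against a throwaway copy of the state purely to warm caches, while the main processor executes it for real, and an interrupt flag stops the prefetcher once real execution is done. Everything here (function names, callbacks) is hypothetical, not the PR's actual API:

```go
import (
	"sync/atomic"

	"github.com/ethereum/go-ethereum/core/state"
	"github.com/ethereum/go-ethereum/core/types"
)

// processWithPrefetch runs a cache-warming prefetch of the block alongside
// the canonical execution. The prefetcher gets its own state copy so it
// cannot disturb the real processing, and it is signalled to stop as soon as
// the block has been processed.
func processWithPrefetch(
	block *types.Block,
	statedb *state.StateDB,
	prefetch func(*types.Block, *state.StateDB, *atomic.Bool),
	process func(*types.Block, *state.StateDB) error,
) error {
	interrupt := new(atomic.Bool)
	go prefetch(block, statedb.Copy(), interrupt) // warm caches in the background
	err := process(block, statedb)                // the canonical execution
	interrupt.Store(true)                         // stop any remaining prefetch work
	return err
}
```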
TODO