Create a unified transaction verification cache #1757
Conversation
r := rand.Intn(numAccs)
a := rand.Intn(1000)
-f := config.Consensus[protocol.ConsensusCurrentVersion].MinTxnFee + uint64(rand.Intn(10))
+f := config.Consensus[protocol.ConsensusCurrentVersion].MinTxnFee + uint64(rand.Intn(10)) + u
This was done to ensure that we don't end up with identical transactions when generating a large number of txns.
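A minimal sketch of the idea (not the PR's exact code; the quoted diff line is truncated, and the `uint64(i)` index term and `numTxns` name below are assumptions about how the fee is made unique per transaction):

```go
// Sketch only: mirrors the quoted test code, with an assumed loop and an
// assumed index-dependent term so no two generated transactions are identical.
minFee := config.Consensus[protocol.ConsensusCurrentVersion].MinTxnFee
for i := 0; i < numTxns; i++ { // numTxns is illustrative
	f := minFee + uint64(rand.Intn(10)) + uint64(i) // index term keeps txns distinct
	_ = f // ... build the i-th transaction with fee f ...
}
```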
algorandskiy
left a comment
Some initial minor remarks. I need to take another look later.
}
groupCtxs := make([]*GroupContext, len(txnGroups))
for i, signTxnsGrp := range txnGroups {
groupCtxs[i], grpErr = TxnGroup(signTxnsGrp, blkHeader, nil)
Why is the cache param nil here? AddPayset is used only here, so... maybe let TxnGroup add the group to the cache itself?
I don't think a few additional locks would make a difference there.
Many transaction groups will be of size 1, and I think that we shouldn't take the lock if we don't have to.
After all, taking the lock takes 3000-5000 ns; multiply this by 10000 and you end up with a notable delay.
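To make the cost argument concrete, here is a rough sketch (not the PR's implementation; the struct layout and the body of `AddPayset` are assumptions) of how adding the whole payset under one lock amortizes the per-lock cost instead of paying it once per group:

```go
// Illustrative only: one lock acquisition for the whole payset instead of one
// per transaction group (~3000-5000 ns each across ~10000 groups).
type verifiedTxnCache struct {
	mu      sync.Mutex
	entries map[transactions.Txid]*GroupContext
}

func (c *verifiedTxnCache) AddPayset(txnGroups [][]transactions.SignedTxn, groupCtxs []*GroupContext) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for i, grp := range txnGroups {
		for _, txn := range grp {
			c.entries[txn.Txn.ID()] = groupCtxs[i]
		}
	}
}
```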
algorandskiy
left a comment
as gerrit says "+1, Looks good to me, but someone else must approve"
if len(v.buckets[v.base])+len(txgroup) > entriesPerBucket {
// move to the next bucket while deleting the content of the next bucket.
v.base = (v.base + 1) % len(v.buckets)
v.buckets[v.base] = make(map[transactions.Txid]*GroupContext, entriesPerBucket)
It is probably better to pre-allocate to max(entriesPerBucket, len(txgroup)).
The number of transactions in a group is at most 16, while entriesPerBucket is on the order of several thousand.
When allocating a new bucket, we want to have large buckets, and we want each bucket to contain all the transactions of a single txn group.
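For readers skimming the thread, a simplified, self-contained illustration of the bucket rotation being discussed (plain string keys instead of transactions.Txid; the entriesPerBucket value is only an assumed order of magnitude):

```go
// Simplified illustration of the ring-of-buckets rotation: when the current
// bucket cannot hold the whole group, rotate to the next bucket and reset it,
// so every group lives entirely inside a single large bucket.
const entriesPerBucket = 8000 // assumed order of magnitude

type ringCache struct {
	buckets []map[string]int
	base    int
}

func (c *ringCache) add(txIDs []string, groupCtx int) {
	if len(c.buckets[c.base])+len(txIDs) > entriesPerBucket {
		c.base = (c.base + 1) % len(c.buckets)
		c.buckets[c.base] = make(map[string]int, entriesPerBucket)
	}
	for _, id := range txIDs {
		c.buckets[c.base][id] = groupCtx
	}
}
```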
}
}
if !found {
transcationMissing = true
break? Since we're going to error out anyway.
Ahh, yes. Failing to pin a transaction within a group (or part of it) isn't a good thing, but it shouldn't prevent us from pinning the rest of the entries (i.e. in the worst case scenario, we will need to verify the signature again for that particular transaction).
The caller should log this, but there is nothing that can really be done at that point (and it's not really harmful either).
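A sketch of the resulting behavior (simplified types and a standalone function, not the PR's code): record the failure but keep pinning the remaining entries, and report the error only at the end so the caller can log it.

```go
// Illustrative pin: a missing entry is remembered rather than aborting the
// loop; the worst case for that transaction is re-verifying its signature.
func pin(buckets []map[string]int, pinned map[string]int, txIDs []string) error {
	missing := false
	for _, id := range txIDs {
		found := false
		for _, bucket := range buckets {
			if ctx, has := bucket[id]; has {
				pinned[id] = ctx // move the entry into the pinned map
				delete(bucket, id)
				found = true
				break
			}
		}
		if !found {
			missing = true // keep going; the other entries can still be pinned
		}
	}
	if missing {
		return errors.New("missing pinned entry")
	}
	return nil
}
```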
// we use the (base + W) % W trick here so we can go backward and wrap around the zero.
for offsetBucketIdx := baseBucket + len(v.buckets); offsetBucketIdx > baseBucket; offsetBucketIdx-- {
bucketIdx := offsetBucketIdx % len(v.buckets)
if ctx, has := v.buckets[bucketIdx][txID]; has {
Nit: we might stop earlier if we track how many buckets are in use. Maybe not a big deal; it would only help on a non-full cache.
I think that after the first cycle, all the buckets will be in use (although they might contain "old" entries).
My intent here was to try to avoid deleting the old map entries.
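A self-contained illustration of the wrap-around scan (simplified key type; not the PR's code): start at the newest bucket and walk backwards through all W buckets, using (base + W) % W to wrap around zero without touching or clearing the older maps.

```go
// lookup scans the buckets newest-first: base, base-1, ..., 0, W-1, ..., base+1.
func lookup(buckets []map[string]int, base int, txID string) (int, bool) {
	w := len(buckets)
	for offset := base + w; offset > base; offset-- {
		if ctx, has := buckets[offset%w][txID]; has {
			return ctx, true
		}
	}
	return 0, false
}
```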
algonautshant
left a comment
Looks great.
I have some clarification questions.
// LogicSigSanityCheck checks that the signature is valid and that the program is basically well formed.
// It does not evaluate the logic.
-func LogicSigSanityCheck(txn *transactions.SignedTxn, ctx *Context) error {
+func LogicSigSanityCheck(txn *transactions.SignedTxn, groupIndex int, groupCtx *GroupContext) error {
Do we have a test for this function?
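For reference, a hypothetical shape such a test could take (heavily hedged: `makeSignedLogicSigTxnGroup` is an invented fixture helper, and only the TxnGroup and LogicSigSanityCheck signatures are taken from this PR's diff):

```go
// Hypothetical test sketch, not an existing test in this PR.
func TestLogicSigSanityCheck(t *testing.T) {
	txgroup, blkHeader := makeSignedLogicSigTxnGroup(t) // invented fixture helper
	groupCtx, err := TxnGroup(txgroup, blkHeader, nil)  // signature as used elsewhere in this PR
	require.NoError(t, err)
	require.NoError(t, LogicSigSanityCheck(&txgroup[0], 0, groupCtx))
}
```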
// errMissingPinnedEntry is being generated when we're trying to pin a transaction that does not appear in the cache
var errMissingPinnedEntry = &VerifiedTxnCacheError{errors.New("Missing pinned entry")}

// VerifiedTransactionCache provides a cached store of recently verified transactions. The cache is desiged two have two separate "levels". On the
typo: designed two have -> designed to have
// entry isn't in pinned; maybe we have it in one of the buckets ?
found := false
// we use the (base + W) % W trick here so we can go backward and wrap around the zero.
for offsetBucketIdx := v.base + len(v.buckets); offsetBucketIdx > v.base; offsetBucketIdx-- {
Most of the buckets are expected to be non-empty most of the time, right?
It depends on the usage. Proposal validation would produce full buckets, while gossiped transactions that go into the txpool would first land in the buckets and then be moved into the pinned map.
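Putting the two levels together, a simplified sketch of the lookup order implied by this discussion (plain types, not the PR's code): the pinned map holds entries that made it into the txpool, and the round-robin buckets, typically filled by proposal validation, are only scanned when the entry is not pinned.

```go
// find checks the pinned map first, then falls back to the buckets, newest first.
func find(pinned map[string]int, buckets []map[string]int, base int, txID string) (int, bool) {
	if ctx, has := pinned[txID]; has {
		return ctx, true
	}
	w := len(buckets)
	for offset := base + w; offset > base; offset-- {
		if ctx, has := buckets[offset%w][txID]; has {
			return ctx, true
		}
	}
	return 0, false
}
```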
Create a unified transaction verification cache
Summary
The existing transaction cache was always tied to the transaction entries that we had in our transaction pool. That has been working well for scenarios where the transaction pool is not congested. However, when the transaction pool becomes congested, it creates issues in the following two scenarios:
To address both issues, I've extracted the verified transaction cache out of the transaction pool into a separate object that is held by the ledger. This object is always used when verifying a transaction, and any verified transaction is "set" in that object.
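A hedged sketch of the intended flow (the ledger accessor name and the error handling are assumptions; only TxnGroup's shape is taken from this PR's diff): the verifier receives the ledger's shared cache, and a successfully verified group is assumed to be recorded in it by TxnGroup when a non-nil cache is passed, so later verification of the same transactions can be skipped.

```go
// Sketch only: verifying a gossiped transaction group against the shared,
// ledger-held cache.
cache := ledger.VerifiedTransactionCache() // assumed accessor on the ledger
groupCtx, err := verify.TxnGroup(txgroup, blkHeader, cache)
if err != nil {
	return err
}
_ = groupCtx // the group is now cached; re-gossiped copies won't be re-verified
```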
Test Plan
Unit tests were added and updated.
Performance Testing
The changes were tested using scenario1 and scenario2 networks; no regression was noted.