feat: v3 cert type with new interaction patterns with onchain Router && CertVerifier interfaces #1532
Conversation
fix: remove operator state retriever change
fix: readd v1 test
feat: remove old files
refactor: file layout
Co-authored-by: ethenotethan <[email protected]>
… Router && CertVerifier interfaces
… Router && CertVerifier interfaces - fix compilation bugs && introduce common cert interface for caller level usage
… Router && CertVerifier interfaces - add serialization input type
…into epociask--feat-v3-cert-type-with-router-comm
… Router && CertVerifier interfaces - address more pr comments
… Router && CertVerifier interfaces - update cert verifier path link
…feat-v3-cert-type-with-router-comm
… Router && CertVerifier interfaces - fix golangci lint
… Router && CertVerifier interfaces - update CheckDACert to take in full interface
… Router && CertVerifier interfaces - remove call opts for block # in check call
… Router && CertVerifier interfaces - address PR feedback & remove legacy verifier / v2 cert type
Haven't finished reviewing everything, but still want to submit this review now to focus on the one big comment regarding the interface I left, which I think is important to resolve (one way or the other.. aka I might be wrong)
api/clients/v2/cert_builder.go
}
cb.logger.Debug("Retrieved NonSignerStakesAndSignature", "blobKey", blobKey.Hex())

eigenDACert, err := coretypes.BuildEigenDACertV3(blobStatusReply, nonSignerStakesAndSignature)
I still think the name warrants changing so that they both have the same name. Actually, maybe make the coretype use NewEigenDACertV3.
I like the separation between New and Build, and imo Build should be the external layer doing IO that can return errors, and the New constructors should be purely stateless with dependency injection and are just doing stateless data manipulation (which I think is what's happening here..)
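A minimal sketch of that New/Build split, using hypothetical stand-in types; none of the names below are the repo's actual cert_builder API:

package sketch // hypothetical package, for illustration only

import "context"

type blobStatusReply struct{}             // stand-in for the disperser reply
type nonSignerStakesAndSignature struct{} // stand-in for the onchain signature data

type EigenDACertV3 struct {
    reply      blobStatusReply
    nonSigners nonSignerStakesAndSignature
}

// NewEigenDACertV3 is the pure constructor: no IO, no failure modes, just
// stateless assembly of data that has already been fetched.
func NewEigenDACertV3(reply blobStatusReply, ns nonSignerStakesAndSignature) *EigenDACertV3 {
    return &EigenDACertV3{reply: reply, nonSigners: ns}
}

// BuildEigenDACertV3 is the outer layer that performs the IO (fetching the
// non-signer data) and can return errors, then delegates to the constructor.
func BuildEigenDACertV3(
    ctx context.Context,
    reply blobStatusReply,
    fetchNonSigners func(context.Context) (nonSignerStakesAndSignature, error),
) (*EigenDACertV3, error) {
    ns, err := fetchNonSigners(ctx)
    if err != nil {
        return nil, err
    }
    return NewEigenDACertV3(reply, ns), nil
}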
// 6. Return the valid cert
func (pd *PayloadDisperser) SendPayload(
    ctx context.Context,
    // payload is the raw data to be stored on eigenDA
    payload *coretypes.Payload,
-) (*coretypes.EigenDACert, error) {
+) (coretypes.EigenDACert, error) {
Thought about this some more, and I think this interface is not the right thing to return here. What we really want to return is just a "sum type" enum like rust has, but golang doesn't have those, and interfaces (open-ended sum types) are the recommended way to model/fake sum types.
The issue with returning this super large interface is that I don't think it'll be future compatible, and will cause breaking changes. For example, do we know for a fact that EVERY future cert will have a ReferenceBlockNumber() method? What if we change to timestamps like we've been talking about for L3s?
What this interface is trying to do is to think through all possible "things" users of certs would want to do with them, and puts all those things as methods in the interface. But I don't think that is necessary, since anyways users can type switch on the cert type, and then typecast to the actual cert version, and have access to the raw data and methods.
What I am saying in code is that most call sites would look like:
cert, _ := pd.SendPayload(ctx, payload)
switch cert.Version() {
case 3:
    certV3 := cert.(coretypes.CertV3)
    // do something with certV3
case 4:
    certV4 := cert.(coretypes.CertV4)
    // do something with certV4
}
This gives all the flexibility to callers. So my suggestion is to return the super simple interface
type EigenDACert interface {
    Version() CertificateVersion
}
and let users use it as a sum-type.
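To make this concrete, a rough sketch of what the emulated sum-type could look like; the field names (and CertV4 itself) are illustrative assumptions, not the actual coretypes definitions:

package sketch // hypothetical package, for illustration only

// CertificateVersion tags which concrete cert a value actually is.
type CertificateVersion uint8

// EigenDACert is the entire "sum type" surface: just the version tag.
type EigenDACert interface {
    Version() CertificateVersion
}

// Each versioned cert is a plain struct with its own fields and methods.
type CertV3 struct {
    ReferenceBlockNumber uint64
    // ... other V3-specific fields
}

func (CertV3) Version() CertificateVersion { return 3 }

// CertV4 is a hypothetical future version that switches to timestamps.
type CertV4 struct {
    ReferenceTimestamp uint64
    // ... other V4-specific fields
}

func (CertV4) Version() CertificateVersion { return 4 }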
hmm so thinking about this a bit - iiuc a sum-type is in essence putting a higher-order category or label on a set of data structs with different attributes. This could work, but there are considerations:
- Relay and validator retrieval would need to also express this type filter. This would likely require forking retrieval entrypoints for every new versioned cert.
- Currently retrieval logic expects common or core eigenda types (e.g., blob header, relay keys). We'd likely still need
- Dispersal changes will always be a hardfork where we literally just delete the old, but for retrieval we need to support entrypoints to securely enable migrations.
Using the existing interface approach we can still type switch on the version to understand what fields would need to be referenced for new retrieval (e.g., rbn vs reference timestamp). I'm a bit torn here tbh but feel like the existing solution would be the best fit for more durable extensibility of retrieval logic over time, but at the same time there would be a consequence of invalidated type safety.....
a sum-type is just a type-safe tagged union that says "data is one of these n types" (for example, Rust's Option is a sum-type that either holds data of type T, or None).
- yes, but that's normal and desired. When you add a new case to an enum, then all call sites need to deal with that new case. They're in the same codebase, so it's easy to change them together when introducing a new cert type. Using the interface you have here might work for a few cert versions, but there will inevitably come a time when the interface needs to change, and then what? Either we will introduce more functions and bloat the interface, or you will need to type switch anyways. So I'm saying we might as well just do it by type switching right away (I still 100% think this is the right pattern).
- unfinished sentence
- yes, see point 1
wdym by "consequence of invalidated type safety"?
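For reference, a sketch of the call-site pattern being argued for here, reusing the stand-in CertV3/CertV4 types from the earlier sketch; the default branch is how a call site surfaces a cert version it does not yet handle:

package sketch // continues the earlier illustrative sketch

import "fmt"

// handleCert type-switches over the concrete cert variants. Adding a new cert
// version means every call site like this must grow a new case (or fall into
// the default error), which is the "deal with the new enum case" behavior
// described above.
func handleCert(cert EigenDACert) error {
    switch c := cert.(type) {
    case CertV3:
        _ = c.ReferenceBlockNumber // V3-specific retrieval input (e.g. RBN)
        return nil
    case CertV4:
        _ = c.ReferenceTimestamp // V4-specific retrieval input (e.g. timestamp)
        return nil
    default:
        return fmt.Errorf("unsupported cert version %d", cert.Version())
    }
}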
// convert the higher level payload into a low level EigenDA blob using the polynomial form
// to dictate whether the representation is in coefficient (requires IFFT) or evaluation form
Suggested change:
- // convert the higher level payload into a low level EigenDA blob using the polynomial form
- // to dictate whether the representation is in coefficient (requires IFFT) or evaluation form
+ // convert the payload into an EigenDA blob by interpreting the payload in polynomial form,
+ // which means the encoded payload will need to be IFFT'd since EigenDA blobs are in coefficient form.
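As a side note, a tiny self-contained sketch of the coefficient-vs-evaluation distinction the suggested comment describes; the types and the ifft stub below are stand-ins, not the repo's actual codec:

package sketch // hypothetical package, for illustration only

// PolynomialForm says how the encoded payload's field elements are interpreted:
// as polynomial coefficients, or as evaluations of that polynomial.
type PolynomialForm int

const (
    PolynomialFormCoeff PolynomialForm = iota // already in blob (coefficient) form
    PolynomialFormEval                        // needs an IFFT before becoming a blob
)

// encodedPayloadToBlob converts an encoded payload into blob form. EigenDA
// blobs are in coefficient form, so an evaluation-form payload is IFFT'd first.
func encodedPayloadToBlob(fieldElements []uint64, form PolynomialForm) []uint64 {
    if form == PolynomialFormEval {
        return ifft(fieldElements)
    }
    return fieldElements // coefficient form: nothing to do
}

// ifft is a placeholder; the real code runs an inverse FFT over the field the
// blob's elements live in.
func ifft(evals []uint64) []uint64 { return evals }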
…feat-v3-cert-type-with-router-comm
… Router && CertVerifier interfaces - use sum type pattern for cert
… Router && CertVerifier interfaces - address pr feedback
… Router && CertVerifier interfaces - rm duplicate import
… Router && CertVerifier interfaces - pass lint
There's a lot going on here, and too many half-finished conversations for me to confidently understand everything happening in this PR. Is this PR blocking any other downstream work? If so, then I'd say let's merge it and keep going. This PR has imo reached the attention threshold for both author and reviewers.
On a more meta level though, I'd like to understand how we can prevent having these PRs with 150+ comments in the future, because it feels to me like it's showing some underlying issues: should we have broken this down the same way you asked patrick to break down his contract PR, for example? Was this too big of a task to chew in one PR? We had talked about the RII approach previously (refactor; interface; implementation; in 3 separate PRs). Perhaps this PR should have followed that approach (and any PR with && in the title is RII-smell?).
Prob worth spending a bit of time during standup to discuss this. For example there's still some @litt3 threads open/unaddressed. Might indicate a disconnect between what we consider nits (I think austin and I are similar in that some code refactors that improve readability/maintainability are more than nits, whereas it seems you often brush off our requests as "let's not waste time on these nits").
Just to be clear, I don't think there's anything clearly wrong with this PR at this point. It's most likely fine overall. But I just have a bit of discomfort approving it, because I still don't feel >75% confident approving it despite having spent a lot of time/effort reviewing it. I checked with @litt3 and I think he feels the same way. Would be good to find a way to improve this process and have reviewers fully confident when merging PRs.
Why are these changes needed?
Updates V2 clients to be compatible with the new GenericCertVerifier interface. This introduces a hardfork to the disperser client that kills the usage of the now historical EigenDACertVerifierV2, which accepted a verifyV2Cert method using structured input. Sadly the older verifier will need to be kept around to safely migrate existing customer testnets. Once done we should be able to nuke it entirely.
Key Changes
- EigenDAOperatorStateRetriever is now used directly instead. This requires adding the EigenDARegistryCoordinator contract address as a dependency, which will need to be passed from proxy as well.
REVIEWER NOTES
Currently this PR has a contingency on the cert verifier router PR. Once that's merged I will rebase to use the updated contract interfaces accordingly 🫡
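For readers of this description, a rough self-contained sketch of the caller-level flow the PR moves toward (disperse a payload, receive a versioned cert, verify it via CheckDACert); every type below is a stand-in for illustration, not the repo's actual clients/coretypes API:

package flow // hypothetical package, for illustration only

import (
    "context"
    "fmt"
)

type Payload struct{ Data []byte } // stand-in for the raw payload type

type EigenDACert interface{ Version() uint8 } // stand-in for the versioned cert sum-type

type PayloadDisperser interface {
    // SendPayload disperses the payload and returns the resulting versioned cert.
    SendPayload(ctx context.Context, payload *Payload) (EigenDACert, error)
}

type CertVerifier interface {
    // CheckDACert checks the cert against onchain state (via the router /
    // CertVerifier contracts in the real implementation).
    CheckDACert(ctx context.Context, cert EigenDACert) error
}

// DisperseAndVerify wires the two steps together and returns the cert only if
// verification succeeds.
func DisperseAndVerify(ctx context.Context, pd PayloadDisperser, cv CertVerifier, p *Payload) (EigenDACert, error) {
    cert, err := pd.SendPayload(ctx, p)
    if err != nil {
        return nil, fmt.Errorf("send payload: %w", err)
    }
    if err := cv.CheckDACert(ctx, cert); err != nil {
        return nil, fmt.Errorf("cert verification failed: %w", err)
    }
    return cert, nil
}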
Checks