[ty] Better implementation of assignability for intersections with negated gradual elements #20773
Conversation
Diagnostic diff on typing conformance tests: No changes detected when running ty on typing conformance tests ✅
```rust
/// Return true if the top materialization of `self` has no overlap with the
/// top materialization of `other`.
```
It's true that it is enough to test disjointness by checking if the intersection between the two top-materializations is empty, but I think I would prefer a definition that is a bit clearer / more intuitive:

> Two gradual types A and B are disjoint if their intersection is empty: `A & B = Never`.

From this definition, you can arrive at your property:

- `A & B = Never` holds if and only if both the top and the bottom materialization of the intersection are empty, i.e. if `Top[A & B] = Never` and `Bottom[A & B] = Never`.
- `Top[X] = Never` always implies `Bottom[X] = Never`, because `Bottom[X] <: Top[X]`.
- This means that it is enough to test `Top[A & B] = Never`.
- Since `Top[…]` distributes over intersections, we can also write this as `Top[A] & Top[B] = Never`, which is equivalent to saying that the two fully-static types `Top[A]` and `Top[B]` must be disjoint.
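As a brute-force sanity check on this reasoning (a toy model, not ty's implementation), we can represent a gradual type as a finite set of possible materializations, each materialization a set of values, and verify that `Top[…]` distributes over intersections:

```python
# Toy model of gradual types: a gradual type is a set of possible
# materializations; each materialization is a frozenset of "values".
# This is illustrative only -- it is not how ty represents types.
from itertools import product
import random

def top(t):
    """Top materialization: union of all possible materializations."""
    return frozenset().union(*t)

def intersect(a, b):
    """Intersection of gradual types: pointwise intersection of materializations."""
    return frozenset(x & y for x, y in product(a, b))

random.seed(0)
UNIVERSE = range(6)

def random_gradual_type():
    return frozenset(
        frozenset(v for v in UNIVERSE if random.random() < 0.5)
        for _ in range(random.randint(1, 4))
    )

for _ in range(1000):
    a, b = random_gradual_type(), random_gradual_type()
    # Top[A & B] = Top[A] & Top[B]: Top distributes over intersections,
    # so "A & B can only materialize to Never" is exactly
    # "Top[A] and Top[B] are disjoint".
    assert top(intersect(a, b)) == top(a) & top(b)
```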
> Two gradual types A and B are disjoint if their intersection is empty: `A & B = Never`.

Riiight, but with our current implementation, this is surely circular -- we call `is_disjoint_from` from our intersection simplification infrastructure to determine whether `A & B` should simplify to `Never`!
Ok, I thought you wanted to clarify what the semantics of this function are, not how it's implemented (it's not implemented by comparing top-materializations either, as far as I can tell).
> it's not implemented by comparing top-materializations either

Yes, fair point. I'll try to reword it 👍
I'm happy to review the rest of the PR as well, but that will have to wait until tomorrow 😄
Let me see if I understand this:

The check for redundancy implemented here is too strict. This is not too surprising, because we know that redundancy is a weaker relation than subtyping. A counterexample would be …

I don't know what this means for this PR. I think there is still the bigger question as to whether or not we can actually use the true redundancy relation for union simplification (given that it breaks transitivity), or if we actually need something that sits between redundancy and subtyping -- the sorts of ad-hoc rules that we currently implement in order to simplify things like …
That all seems correct to me. It's also unchanged from the redundancy implementation on …

I could add a comment linking to this discussion, and noting that this is probably not a complete implementation of the redundancy relation, but that we always err on the side of strictness for redundancy?
Indeed. But pragmatically, we already recognise …
I think it is a "known issue" that our implementation of redundancy is "too strict" compared to what it theoretically could/should be, but it must be too strict in order to avoid breaking transitivity (and transitivity is critical to the correct functioning of union simplification.)
I think this is basically right, but I would frame it slightly differently.

The reason we named the redundancy relation "redundancy" is in order to communicate "this is the relation we use for determining redundancy in unions." So we don't "need something that sits between redundancy and subtyping." We already have exactly that: we call it "redundancy". It sits between "S+ <: T+ && S- <: T- subtyping" (what you are, I think, here calling "true redundancy", but we could also call "true subtyping") and "S+ <: T- subtyping" (which we currently call "subtyping", but could also be called "too-strict subtyping"). Our redundancy relation is "too-strict subtyping", but with some ad-hoc elements of "true subtyping", (hopefully) carefully chosen to avoid breaking transitivity.

I don't think there is an open question about whether we can use a relation that breaks transitivity for union simplification. We definitely cannot, or else the result of union simplification depends on the order elements are added; this is not tenable.
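The order-dependence problem can be illustrated with a small sketch (invented toy relation and names, not ty's code). Take a non-transitive "redundancy" relation in which A is redundant in B and B is redundant in C, but A is *not* redundant in C, and feed it to a typical pairwise union-simplification loop:

```python
# Toy non-transitive relation: (x, y) means "x is redundant in y".
# A ~> B and B ~> C hold, but A ~> C does NOT (transitivity is broken).
REDUNDANT = {("A", "B"), ("B", "C")}

def is_redundant(x, y):
    return x == y or (x, y) in REDUNDANT

def simplify_union(elements):
    """Pairwise union simplification: drop new elements that are redundant
    in an existing one, and evict existing elements redundant in the new one."""
    kept = []
    for new in elements:
        if any(is_redundant(new, old) for old in kept):
            continue  # `new` adds nothing to the union
        kept = [old for old in kept if not is_redundant(old, new)]
        kept.append(new)
    return kept

# Same set of elements, different insertion order, different result:
print(simplify_union(["A", "B", "C"]))  # ['C']      (A absorbed by B, B by C)
print(simplify_union(["B", "C", "A"]))  # ['C', 'A'] (A survives: not redundant in C)
```

With a transitive relation, both orders would converge on the same simplified union; without transitivity, the result depends on element order, which is exactly the failure mode described above.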
Yes, I think the disconnect is that the ("pure"? but nontransitive?) version of the redundancy relation described in the doc-comment above the …
Then we should reflect some of this discussion into that doc comment, to help us avoid continuing to have this same conversation on every PR related to redundancy 😆
I was genuinely confused by this, because there was an effort to formalize the new relation called redundancy in #20602, but then we ended up with a doc comment that essentially just describes "S+ <: T+ && S- <: T-" subtyping in a different way, as pointed out here. So I thought we actually aimed to implement the "S+ <: T+ && S- <: T-" subtyping relation under the name of "redundancy".

But this discussion here clarifies the situation: the doc comment is wrong, and what we call "redundancy" is "too-strict subtyping, but with some ad-hoc elements of true subtyping". In that case, I'm just worried that we won't be able to succinctly define what that relation actually is. Basically what Doug said here (with what looks like a wrong definition of assignability 😉).
Our implementation of assignability between a type `T` and another type `~U` is currently too strict. We currently only consider `T` assignable to `~U` if `Top[T]` is disjoint from `Top[U]`. That's correct for subtyping and redundancy, but not for assignability: for assignability, we should be more permissive, and allow `T` to be considered assignable to `~U` if `Bottom[T]` is disjoint from `Bottom[U]`.

As part of this PR, I also improved the docstring of `Type::is_disjoint_from` to make it clear what it actually does (it tests for disjointness between the top materializations of the pair of types).

Fixes astral-sh/ty#767
Test plan

Ran `QUICKCHECK_TESTS=1000000 cargo test --release -p ty_python_semantic -- --ignored types::property_tests::stable` locally.