✨ Support propagate uplink status #1481
Conversation
Skipping CI for Draft Pull Request.
✅ Deploy Preview for kubernetes-sigs-cluster-api-openstack ready!
/test pull-cluster-api-provider-openstack-build
/test pull-cluster-api-provider-openstack-e2e-full-test
What does it do? The neutron API doc is here: https://docs.openstack.org/api-ref/network/v2/index.html#show-port-details. I read it and I still don't know what it does.
The best I could find about it is this release note:
I also commented on the issue; I just want to know the use case for this.
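For context, here is a minimal sketch of what the flag means at the API level. This is hedged: the PropagateUplinkStatus field on ports.CreateOpts is assumed from the gophercloud support mentioned later in this thread, and the behaviour (a port's operational status following its uplink, e.g. an SR-IOV VF mirroring the PF link state) is taken from Neutron's uplink-status-propagation extension.

```go
package example

import (
	"github.com/gophercloud/gophercloud/openstack/networking/v2/ports"
)

// newPortCreateOpts builds CreateOpts for a port whose operational status
// should follow its uplink (e.g. an SR-IOV VF mirroring the PF link state).
// PropagateUplinkStatus maps to Neutron's propagate_uplink_status attribute.
func newPortCreateOpts(networkID string) ports.CreateOpts {
	propagate := true
	return ports.CreateOpts{
		NetworkID:             networkID,
		PropagateUplinkStatus: &propagate,
	}
}
```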
Force-pushed from d270f31 to d0cefe2
/test pull-cluster-api-provider-openstack-build
@lentzi90 can you help fix the conflict? Thanks
The conflict was already fixed, or am I missing something? I'm waiting for a new gophercloud release before we can take this in, though. It requires some changes there as well. 🙂
/test pull-cluster-api-provider-openstack-e2e-test
Yes, you are right, I missed the latest update removing the 'need-rebase' flag :)
Force-pushed from d0cefe2 to 3c2bfe6
func restorev1alpha7ClusterStatus(previous *infrav1.OpenStackClusterStatus, dst *infrav1.OpenStackClusterStatus) {
I'm not too sure about this. Should we really restore the cluster status, or is it better to let it be rebuilt when reconciled?
We might need this, but please check my thinking:
Let's ignore for a moment whether this information should be in the cluster status at all (I think it should not), because that's a separate issue. Let's assume that the information is required.
We are running v0.8. OpenStackCluster is v1alpha6. Code in v0.8 expects that if it sets a value in Status, that value will still be there when it reads Status back.
- Code does the thing.
- Code sets v1alpha7.Status.Foo to record that it did the thing
- Webhook down-converts to v1alpha6
- v1alpha6 is persisted in API
- Next reconcile v1alpha6 is read from API
- Webhook up-converts to v1alpha7
- v1alpha7.Status.Foo is no longer set, so we might have to do the thing again.
Making my own counter-argument: this case might be different, though, because the value is immutable, so it can only be set in the storage version; and if the storage version is v1alpha6 it can't be set at all 🤔
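For concreteness, a minimal sketch of the restore pattern this describes, reusing the hypothetical Foo field from the steps above:

```go
// Sketch only: Foo is the hypothetical status field from the example above.
// A v1alpha7 -> v1alpha6 -> v1alpha7 round trip drops any field v1alpha6
// cannot represent, so the restore hook copies it back from the
// pre-conversion copy and the controller doesn't redo the work.
func restorev1alpha7ClusterStatus(previous *infrav1.OpenStackClusterStatus, dst *infrav1.OpenStackClusterStatus) {
	dst.Foo = previous.Foo
}
```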
If that was the case (v1alpha6 was storage version for v0.8), then I guess that would be true. But the storage version is set to v1alpha7, which makes much more sense IMO. So I'm thinking there would not be any conversions required "internally" and thus no need to restore it really (other than for happy fuzzing tests).
I was also thinking about what happens if the user asks for v1alpha6, makes changes to it and applies. Then it would be up-converted to v1alpha7, including the status. But the user should not normally make changes to the status (or even be able to) so maybe this scenario is irrelevant.
As discussed in the office hours, let's keep this just in case. It should not do any harm.
@mdbooth
/lgtm
Actually, @dulek are you able to look at this? I think it's good from an API point of view, but I don't fully understand the neutron side. I think it's good, though. Please bless with lgtm if appropriate :)
/lgtm cancel
It's just an option used for SR-IOV ports, not a big deal from the K8s point of view; I bet it's to be used on secondary ports on the workers. Looks good from my side; leaving lgtm to @seanschneeweiss once he's satisfied with the remarks he made.
/lgtm
/approve
Leaving the hold in case someone else wants to take a look; whoever does can cancel the hold to merge.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: jichenjc, lentzi90, mdbooth
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@lentzi90 Down to you to remove the hold.
/lgtm
/hold cancel
/test pull-cluster-api-provider-openstack-e2e-test
What this PR does / why we need it:
This adds support for propagate_uplink_status on ports.
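A hypothetical usage sketch follows; the Go field name and its presence on PortOpts are assumed from the PR title, so check the v1alpha7 API types for the exact spelling:

```go
package example

import infrav1 "sigs.k8s.io/cluster-api-provider-openstack/api/v1alpha7"

// examplePort returns a port spec that opts into uplink status propagation;
// the field is expected to serialize to Neutron's propagate_uplink_status.
func examplePort() infrav1.PortOpts {
	enabled := true
	return infrav1.PortOpts{
		PropagateUplinkStatus: &enabled,
	}
}
```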
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1480
Special notes for your reviewer:
TODOs:
/hold