Recently, we had to migrate one of our k8s worker nodes (a VM on OpenStack) from one availability zone (AZ) to another due to hardware issues and resource limitations in the original AZ. Afterwards, we noticed that the node label "topology.cinder.csi.openstack.org/zone" still held the old AZ value, which prevented the "csi-cinder-nodeplugin" daemonset pod from starting on that node.
We had to update the label on the node manually in order to allow the pod to run on this worker node.
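For reference, the manual workaround looks roughly like the following sketch. The node name, the target zone value `az-2`, and the pod label selector `app=csi-cinder-nodeplugin` are placeholders and may differ per deployment:

```shell
# Inspect the current (stale) value of the topology label on the affected node
kubectl get node <node-name> --show-labels | tr ',' '\n' | grep topology.cinder.csi.openstack.org/zone

# Overwrite the stale label with the node's new availability zone
# ("az-2" is a placeholder for the actual target AZ)
kubectl label node <node-name> topology.cinder.csi.openstack.org/zone=az-2 --overwrite

# Verify that the csi-cinder-nodeplugin pod now starts on the node
# (label selector may vary depending on how the driver was deployed)
kubectl -n kube-system get pods -l app=csi-cinder-nodeplugin -o wide
```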
What you expected to happen:
We expect the label to be updated automatically by detecting the node's actual AZ from the underlying infrastructure.
How to reproduce it:
1. Deploy cinder-csi on a Kubernetes cluster (hosted on OpenStack) with topology awareness enabled (the default).
2. Migrate a k8s worker VM from a compute host in the original AZ to another AZ.
3. The "csi-cinder-nodeplugin" daemonset pod will fail to start on that worker node after the node's AZ changes.
Anything else we need to know?:
Environment:
openstack-cloud-controller-manager (or other related binary) version: openstack-cloud-controller-manager v1.28.1 and cinder-csi-driver v1.27.1