🐛 Wait for ports creation in ports e2e test #938
Conversation
Hi @macaptain. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Force-pushed from 6be9120 to b0df6bb
Force-pushed from b4257e3 to 94620ca
/retest
1 similar comment
/retest
I hope this improves the flakiness, because we now wait for the port to exist before proceeding with the tests. If possible, can we run the e2e tests once or twice more to check that the problem is gone? It's looking hopeful that the flakiness is fixed, so feel free to merge if this PR is up to standard. /unhold
I want to see whether this PR really fixes it. /test pull-cluster-api-provider-openstack-e2e-test
Success twice. /test pull-cluster-api-provider-openstack-e2e-test
Actually, as stated in #927, there are two tests in a flaky state now. Anyway, it may be worth a few more tries (say 5 or 10?). With my previous unsuccessful fix I tried several times and they all passed, but it still didn't fix the problem.
/test pull-cluster-api-provider-openstack-e2e-test
one more
There are probably different flaky failures as well. I don't expect this change to fix the flakiness in a different test. However, the ports test failing seems to be the most common based on Testgrid.
It looks like this latest test run got stuck very early on: https://prow.k8s.io/log?container=test&id=1415522658715439104&job=pull-cluster-api-provider-openstack-e2e-test - is this a known issue?
Not that I am aware of. At least it doesn't report the flaky test we are targeting to solve. /test pull-cluster-api-provider-openstack-e2e-test
I've raised #940 since it seems the issue is persistent and caused by a new version of sshuttle released a few hours ago.
The ports e2e test was checking that a failure condition had not occurred. However, it's possible the failure condition hadn't occurred simply because the machine used for the test had not been created yet, and the rest of the tests raced ahead and did not find the port. This commit ensures that we wait for the port to be created before testing its properties.
Force-pushed from 94620ca to 3e9c05c
Rebased after the sshuttle fix. Let's hope the tests go green again, showing that the test is not so flaky anymore.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: jichenjc, macaptain
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
The ports e2e test was checking that a failure condition had not occurred. However, it's possible the failure condition hadn't occurred simply because the machine used for the test had not been created yet, and the rest of the tests raced ahead and did not find the port.
This commit ensures that we wait for the port to be created before testing its properties.
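As a rough illustration only (not the exact code in this PR), a wait like this could be written as a Gomega `Eventually` that polls Neutron via gophercloud until the port shows up; `waitForPort`, the port name, and the timeouts below are placeholders I've made up for the sketch:

```go
package e2e

import (
	"time"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/ports"
	. "github.com/onsi/gomega"
)

// waitForPort polls Neutron until a port with the given name exists, then
// returns it so the test can assert on its properties. The name, timeout,
// and polling interval are illustrative only.
func waitForPort(networkClient *gophercloud.ServiceClient, portName string) ports.Port {
	var port ports.Port
	Eventually(func() (int, error) {
		allPages, err := ports.List(networkClient, ports.ListOpts{Name: portName}).AllPages()
		if err != nil {
			return 0, err
		}
		allPorts, err := ports.ExtractPorts(allPages)
		if err != nil {
			return 0, err
		}
		if len(allPorts) > 0 {
			port = allPorts[0]
		}
		return len(allPorts), nil
	}, 5*time.Minute, 10*time.Second).Should(BeNumerically(">", 0), "timed out waiting for port %q to be created", portName)
	return port
}
```

With a helper like this, the existing assertions on the port's properties can run against the returned object instead of against a list that may still be empty.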
What this PR does / why we need it:
We need to wait for the machine deployment to finish before listing the ports in the deployment.
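A minimal sketch of what waiting for the machine deployment could look like, assuming a controller-runtime client and Cluster API's MachineDeployment type; the helper name, API version, and timeouts are assumptions for illustration and may differ from what the real test framework provides:

```go
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForMachineDeploymentReady blocks until the MachineDeployment reports all
// desired replicas as ready, so the ports created for its machines exist
// before the test lists them.
func waitForMachineDeploymentReady(ctx context.Context, c client.Client, namespace, name string) {
	Eventually(func() (bool, error) {
		md := &clusterv1.MachineDeployment{}
		if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, md); err != nil {
			return false, err
		}
		if md.Spec.Replicas == nil {
			return false, nil
		}
		return md.Status.ReadyReplicas == *md.Spec.Replicas, nil
	}, 10*time.Minute, 15*time.Second).Should(BeTrue())
}
```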
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Maybe fixes #927
/hold