fix: Clear related caches on clusters update #2085
Conversation
This looks good, but it's not enough, since we need to clean up the userNamespaces cache as well. So I propose that we create a hash for the list of clusters, and only reset everything if that hash changes, so basically you'd do:
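A minimal sketch of the proposed approach, with hypothetical names (`clusterListHash`, `cache`, `update` are illustrative, not the actual weave-gitops code):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
	"strings"
)

// clusterListHash returns a stable hash for a list of cluster names.
// Sorting a copy first makes the hash independent of discovery order.
func clusterListHash(names []string) string {
	sorted := append([]string(nil), names...)
	sort.Strings(sorted)
	// Note: a real implementation should use an unambiguous encoding
	// in case cluster names can contain the separator.
	sum := sha256.Sum256([]byte(strings.Join(sorted, ";")))
	return fmt.Sprintf("%x", sum)
}

// cache is a stand-in for the clustersNamespaces/userNamespaces caches.
type cache struct {
	lastHash           string
	clustersNamespaces map[string][]string
	userNamespaces     map[string][]string
}

// update clears both caches whenever the cluster list's hash changes.
func (c *cache) update(clusters []string) {
	if h := clusterListHash(clusters); h != c.lastHash {
		c.clustersNamespaces = map[string][]string{}
		c.userNamespaces = map[string][]string{}
		c.lastHash = h
	}
}

func main() {
	c := &cache{}
	c.update([]string{"cluster-a", "cluster-b"})
	c.clustersNamespaces["cluster-a"] = []string{"default"}
	c.update([]string{"cluster-a"}) // "cluster-b" was removed: caches reset
	fmt.Println(len(c.clustersNamespaces)) // prints 0
}
```

Because the hash is computed from a sorted copy of the names, re-ordering the cluster list does not trigger a spurious reset; only adding or removing a cluster does.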
With that, we should expect things to be eventually consistent, after 30 seconds or so.
I get some very strange behaviour...
I had deliberately left that to be handled separately, but you are right that they have to be considered together 👍
This is a separate issue, related to that handler specifically; looking at the code, it does indeed try to do some secret querying. I also noticed you are trying to query by …
Force-pushed from 01d7772 to 8e071b5
Updated commit and description
Force-pushed from 8e071b5 to dafb4bf
This looks great besides a couple of small nits.
@foot, would it be possible for you to test it from this branch? Or does it need to be merged?
I can test from this branch 👌, will take it for a spin tomorrow, thanks team!
Still not quite working unfortunately. /v1/flux_runtime_objects returns:
We could add some debug logging? 🤔 I can put together a cluster to play with if it helps too.
This fixes an issue where the clustersNamespaces and the userNamespaces caches were not updated when Clusters were deleted. This was most noticeable for clustersNamespaces, since the GetAll there would return everything in the cache, dated or not.

For the userNamespaces cache this was hidden, since the GetAll in this case would receive an up-to-date list of the cached clusters and then use that to look up individual keys, returning them in a collated list. This meant it _looked like_ the cache was updated, but it was only showing us what we wanted to see; the cache was still quietly building up.

The solution here is to generate a hash based on an ordered list of the names of the current clusters. When updating the clustersNamespaces cache, we compare the saved hash with the current one, and clear the clustersNamespaces cache if they differ. We also clear the userNamespaces cache at this point; since the userNamespaces cache depends on both the clusters and the clustersNamespaces cache, it is simpler to clear everything together.

The `userNsList` func has been pulled apart for testing purposes, but is otherwise the same. Testing that the userNamespaces cache has been cleared was harder since, as noted above, GetAll does not _really_ get all, but "gets all based on this cluster list". I can expose new methods and change this into a genuine List if that is preferred, but have not done that for now.
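The masking behaviour described above can be illustrated with a small sketch (`getAllForClusters` is a hypothetical helper, not the actual weave-gitops cache API): because the collated lookup only touches keys for the clusters it is handed, entries for deleted clusters never surface, even though they are still in the map.

```go
package main

import "fmt"

// getAllForClusters mimics the collated GetAll described above: it only
// looks up keys for the clusters it is handed, so stale entries for
// deleted clusters stay in the map but never appear in the result.
func getAllForClusters(cache map[string][]string, clusters []string) map[string][]string {
	out := map[string][]string{}
	for _, c := range clusters {
		if ns, ok := cache[c]; ok {
			out[c] = ns
		}
	}
	return out
}

func main() {
	cache := map[string][]string{
		"live-cluster":    {"default"},
		"deleted-cluster": {"old-ns"}, // stale entry, invisible below
	}
	visible := getAllForClusters(cache, []string{"live-cluster"})
	fmt.Println(len(visible), len(cache)) // prints: 1 2
}
```

The result looks correct (one entry for the one live cluster), while the underlying map still holds two; this is why the leak was hidden for userNamespaces.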
Force-pushed from dafb4bf to 3b47eaa
@foot, we could jump on a call together, might be faster? I only have a couple of hours today before I go on holiday, so someone may need to take this off me if it drags on.
After some testing, and being more careful about observing the 30s window etc., it seems to be working great! 💯 LGTM ⭐
Includes important MC fixes:

- weaveworks/weave-gitops#2137
- weaveworks/weave-gitops#2085

Fixes needed to address core changes:

- `wego-admin` is the new user to log in as (no longer `admin`)
- Core can now be configured with a fake-client so we can remove some hacks
- Adds a little bit of debugging around MC querying
Closes #2083