Unblock pending poll when passive cluster becomes active #8506

Conversation
```go
ctx, cancel := context.WithCancel(ctx)
key := uuid.New()
// Listening to registry changes in case the cluster becomes active while the poll is waiting
c.namespaceRegistry.RegisterStateChangeCallback(key, func(ns *namespace.Namespace, deletedFromDb bool) {
```
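The hunk cuts off before the callback body; presumably it cancels the blocked poll once the namespace becomes active in this cluster. A minimal sketch of that idea — `namespaceID`, `c.clusterMeta`, and the body itself are assumptions, not the PR's actual code:

```go
// Hypothetical callback body: unblock the pending poll when this poll's
// namespace becomes active in the current cluster.
c.namespaceRegistry.RegisterStateChangeCallback(key, func(ns *namespace.Namespace, deletedFromDb bool) {
	if ns.ID().String() != namespaceID { // assumed: only react to this poll's namespace
		return
	}
	if ns.ActiveInCluster(c.clusterMeta.GetCurrentClusterName()) {
		cancel() // unblocks the context the poll is waiting on
	}
})
defer c.namespaceRegistry.UnregisterStateChangeCallback(key)
```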
Instead of doing it here, I think we should register the namespace state change listener at the task queue level; when it sees the state change, it would unload the task queue and reload it.
This is a hot path, and the overhead here could be too heavy.
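Something like the following, very roughly — a sketch only, where `taskQueueManagerImpl`'s fields, `unloadFromEngine`, and the Start/Stop hookup are assumptions for illustration, not code that exists in this shape:

```go
// Sketch: each task queue registers for its own namespace's state changes when
// it loads, and unloads itself when the state flips; the next request reloads it.
func (tqm *taskQueueManagerImpl) Start() {
	tqm.namespaceRegistry.RegisterStateChangeCallback(tqm, func(ns *namespace.Namespace, deletedFromDb bool) {
		if ns.ID() != tqm.namespaceID {
			return
		}
		tqm.unloadFromEngine() // interrupts pending polls; a later request reloads the queue
	})
}

func (tqm *taskQueueManagerImpl) Stop() {
	tqm.namespaceRegistry.UnregisterStateChangeCallback(tqm)
}
```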
Hmm... yeah, if we normally expect polls on the passive side, what you're suggesting makes more sense. I wasn't sure whether polls on the passive cluster are typical.
We should not register/unregister callbacks here, but we shouldn't do it at the task queue level either; we should register a single callback at the engine level (rough sketch after this list):
- There's no reason to unload/reload at the task queue partition level; nothing below the partition cares about namespace state.
- The namespace registry calls the callback initially for every loaded namespace when registered, so that's a lot of unnecessary work.
- We don't need a new level of cancellation, we already keep a map of cancelfuncs at the engine level, we just need to register/unregister them in another map by namespace. See https://github.com/temporalio/temporal/blob/main/service/matching/matching_engine.go#L2570-L2574 and related.
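Roughly what that could look like — a sketch only; `pollCancels`, its fields, and the `ActiveInCluster` gating are assumptions layered on the real registry callback signature:

```go
import (
	"context"
	"sync"

	"github.com/google/uuid"
	"go.temporal.io/server/common/namespace"
)

// Sketch: one callback registered at engine start; pending-poll cancel funcs
// are tracked per namespace so a state change can unblock all of them at once.
type pollCancels struct {
	currentCluster string
	mu             sync.Mutex
	byNs           map[namespace.ID]map[uuid.UUID]context.CancelFunc
}

func (p *pollCancels) add(nsID namespace.ID, cancel context.CancelFunc) uuid.UUID {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.byNs == nil {
		p.byNs = make(map[namespace.ID]map[uuid.UUID]context.CancelFunc)
	}
	if p.byNs[nsID] == nil {
		p.byNs[nsID] = make(map[uuid.UUID]context.CancelFunc)
	}
	key := uuid.New()
	p.byNs[nsID][key] = cancel
	return key
}

func (p *pollCancels) remove(nsID namespace.ID, key uuid.UUID) {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.byNs[nsID], key) // no-op if the namespace entry is already gone
}

// Registered once via namespaceRegistry.RegisterStateChangeCallback at engine start.
func (p *pollCancels) onStateChange(ns *namespace.Namespace, deletedFromDb bool) {
	// The registry also invokes the callback once per already-loaded namespace
	// at registration time, so only act when the namespace is now active here.
	if deletedFromDb || !ns.ActiveInCluster(p.currentCluster) {
		return
	}
	p.mu.Lock()
	cancels := p.byNs[ns.ID()]
	delete(p.byNs, ns.ID())
	p.mu.Unlock()
	for _, cancel := range cancels {
		cancel() // unblock every pending poll for this namespace
	}
}
```

Each poll would then call add() before blocking and remove() in a defer, instead of registering its own registry callback.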
Correction: two more things do look at active namespace state:
- AddTask skips sync match if passive
- taskValidator skips first validation if active
I don't think there's any reason to unload/reload because of those two.
On second thought, it might simplify a few other things if we did reload the partition on namespace state change (metrics, query-only state in the new matcher). So I'm leaning towards reloading the partition. But I still think the listener itself should be at the engine level: it can just iterate over loaded partitions and unload the ones that belong to the namespace that changed (which will automatically interrupt polls, of course), or keep a map to be more efficient.
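A sketch of that shape — `matchingEngineImpl`'s partitions map, `partitionsLock`, the key layout, and `unloadTaskQueuePartition` are assumptions about the engine internals, not exact code:

```go
// Sketch: single engine-level listener; on a namespace state change, unload
// every loaded partition belonging to that namespace.
func (e *matchingEngineImpl) nsStateChanged(ns *namespace.Namespace, deletedFromDb bool) {
	e.partitionsLock.RLock()
	var toUnload []taskQueuePartitionManager
	for key, pm := range e.partitions {
		if key.namespaceID == ns.ID() { // assumed key shape
			toUnload = append(toUnload, pm)
		}
	}
	e.partitionsLock.RUnlock()
	// Unloading interrupts pending polls on these partitions; the next
	// request for the task queue reloads the partition with fresh state.
	for _, pm := range toUnload {
		e.unloadTaskQueuePartition(pm)
	}
}
```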
```go
	return t, err
} // else the cluster has become active so continue regular poll path
```
This is not necessarily true; the caller could have canceled.
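One way to disambiguate, as a sketch — `pollPassive` and `pollActive` are stand-ins for the blocking wait and the regular path, not real helpers:

```go
// Keep the caller's context around so we can tell who triggered cancellation.
parentCtx := ctx
ctx, cancel := context.WithCancel(parentCtx)
defer cancel()

t, err := pollPassive(ctx) // stand-in for the blocking passive-side wait
if err == nil || ctx.Err() == nil {
	return t, err // got a task, or a non-cancellation error
}
if parentCtx.Err() != nil {
	return nil, parentCtx.Err() // the caller canceled or timed out
}
// Only the state-change callback canceled: the cluster has become active,
// so continue on the regular active poll path.
return pollActive(parentCtx)
```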
Reloading approach makes the most sense to me too. Closing this PR. Moody or I will send another PR.
What changed?
Describe what has changed in this PR.
Why?
Tell your future self why you have made these changes.
How did you test it?
Potential risks
Any change is risky. Identify all risks you are aware of. If none, remove this section.