Emit all datasets recalc in one consumer #2425
Conversation
Signed-off-by: Andrea Lamparelli <[email protected]>
Is there anything I am missing for which the previous implementation is needed?
I think it makes sense to move emitting all events into one TX callback; I don't know how the callback mechanism scales. I am not sure it will resolve the error you are seeing, though: that appears related to unavailable credits, and the number of messages pushed through the AMQ client will be the same.
Thanks @johnaohara
I agree, but the error seems related to some wrong calculation. That's why I am thinking the wrong calculation is caused by some race condition, but I cannot confirm it.
This may change the timing and therefore not trigger the race condition, but if there is a race it looks like it is in SmallRye Reactive Messaging and would still be present. I agree that this should apply back pressure on the client to slow down emitting messages; maybe the SmallRye Reactive Messaging team should take a look?
By emitting all events in the same TX, would there be a potential issue with timeouts? +1 on looping in someone with some messaging background here :)
This change does not alter the TX semantics; it registers one callback instead of n callbacks for when the TX is committed (or rolled back). The callback occurs after the TX has succeeded/failed.
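A minimal sketch of what that single registration could look like, assuming the standard JTA TransactionSynchronizationRegistry; Horreum's actual registration helper and event types may differ, and the DatasetEventBuffer class and Object payload below are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.transaction.Status;
import jakarta.transaction.Synchronization;
import jakarta.transaction.TransactionSynchronizationRegistry;

@ApplicationScoped
public class DatasetEventBuffer {

    @Inject
    TransactionSynchronizationRegistry txRegistry;

    // Events collected for the current transaction. A real implementation would
    // need per-transaction scoping (e.g. a resource stored in the registry)
    // instead of a single shared field.
    private final List<Object> pendingEvents = new ArrayList<>();

    public void queue(Object event) {
        if (pendingEvents.isEmpty()) {
            // Register ONE callback for the whole transaction instead of one per dataset.
            txRegistry.registerInterposedSynchronization(new Synchronization() {
                @Override
                public void beforeCompletion() {
                    // nothing to do before the commit
                }

                @Override
                public void afterCompletion(int status) {
                    // runs once, after the TX has succeeded or failed
                    if (status == Status.STATUS_COMMITTED) {
                        pendingEvents.forEach(DatasetEventBuffer.this::emit);
                    }
                    pendingEvents.clear();
                }
            });
        }
        pendingEvents.add(event);
    }

    private void emit(Object event) {
        // hand the event to the messaging layer
    }
}
```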
Yeah, exactly.
Anyone you know of? Otherwise I can open an issue on Quarkus directly, wdyt?
Perhaps @gemmellr or @ozangunalp?
Meanwhile, @stalep @johnaohara, wdyt about this fix? Is it good to go as it is?
I think the change looks good, but I am not familiar with this part of Horreum, nor do I believe we have adequate testing to shore up my confidence. In which case, I say let's merge it and see what happens :)
lgtm
I am afraid that by sending dataset events in different TX callbacks we are emitting them completely asynchronously, and when there is a huge number of them we might hit a race condition that causes a wrong credit calculation on the AMQP client.
This is just a guess, as I don't know how to prove it 🫤
Changes proposed
Instead of adding one TX callback for each changed dataset, I am proposing to add a single TX callback that sends all those events synchronously.
This way the messages are sent in order and sequentially, so there should not be any race condition anymore.
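A minimal sketch of what the emission inside that single callback could look like, assuming a MicroProfile Reactive Messaging Emitter; the channel name and the DatasetEvent record are hypothetical stand-ins for Horreum's actual types. Waiting for each send to be accepted before issuing the next one keeps the messages ordered and also gives the back pressure mentioned earlier in the discussion:

```java
import java.util.List;
import java.util.concurrent.CompletionStage;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;

@ApplicationScoped
public class DatasetEventSender {

    // Hypothetical payload type standing in for Horreum's actual event class.
    public record DatasetEvent(int datasetId, int testId) {}

    @Inject
    @Channel("dataset-event") // illustrative channel name
    Emitter<DatasetEvent> emitter;

    /**
     * Called once from the single after-commit callback: sends every event of the
     * transaction one after another, waiting for each send to be acknowledged by
     * the messaging layer before issuing the next one.
     */
    public void sendAll(List<DatasetEvent> events) {
        for (DatasetEvent event : events) {
            CompletionStage<Void> ack = emitter.send(event);
            // Blocking on each ack is a simplification; chaining the stages with
            // thenCompose would give the same ordering without blocking the caller.
            ack.toCompletableFuture().join();
        }
    }
}
```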
Check List (Check all the applicable boxes)