Register response.closeHandler() in VertxBlockingOutput and test for clients closing connections #5451
I think this race is causing the worker pool threads to lock up in #5443.
This is already called under lock.
That is what I thought when I first looked, but I am assuming that the thread that calls …
Both threads will be holding the same lock, though; there is …
Actually, I think isWriteable might be able to change in the background. I probably need to re-check it after the drain handler is registered.
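The re-check described here might look like the following sketch (hypothetical interfaces, not the real Vert.x API, whose `drainHandler` takes a `Handler<Void>`; names are illustrative only):

```java
// Sketch: re-check writability AFTER registering the drain handler, because
// the queue may drain in the window between the first check and the
// registration, in which case the drain callback has already fired and no
// further notification will ever arrive.
public final class RecheckAfterRegister {
    public interface Response {
        boolean writeQueueFull();
        void drainHandler(Runnable handler);
    }

    public static void awaitWriteable(Response response, Object connectionLock)
            throws InterruptedException {
        synchronized (connectionLock) {
            if (!response.writeQueueFull()) {
                return; // already writable, nothing to wait for
            }
            response.drainHandler(() -> {
                synchronized (connectionLock) {
                    connectionLock.notifyAll(); // wake the waiting worker
                }
            });
            // Re-check under the lock after registration: the queue may have
            // drained before the handler was installed.
            while (response.writeQueueFull()) {
                connectionLock.wait();
            }
        }
    }
}
```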
The locking semantics are …
Although there is an …
Thinking about this, it isn't an atomicity issue but a visibility issue, and, as I mentioned, we are at risk of a deadlock with …
Also, synchronization makes no provision for timeouts, leaving applications deadlocked if there is an error. I think a locking implementation with timeouts and retries makes sense here: if for some reason the object monitor does not receive the notification, we could time out and re-check the write queue, since we have access to it through the request API.
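The timeout-and-recheck idea could be sketched like this (hypothetical names, not the actual Quarkus code; the real implementation would consult `response.writeQueueFull()` through the request API):

```java
import java.io.IOException;
import java.util.function.BooleanSupplier;

// Sketch: instead of an untimed wait(), wake periodically, re-check the
// write queue, and give up after a deadline, so a lost notification delays
// the worker thread rather than deadlocking it forever.
public final class TimedDrainAwait {
    private final Object lock = new Object();

    public void awaitWriteable(BooleanSupplier writeQueueFull, long timeoutMillis)
            throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (lock) {
            while (writeQueueFull.getAsBoolean()) { // re-check on every wake-up
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    throw new IOException("Timed out waiting for the write queue to drain");
                }
                // Bounded wait: returns after a notify OR after the interval elapses.
                lock.wait(Math.min(remaining, 100));
            }
        }
    }

    // The drain handler would still notify under the same monitor.
    public void drained() {
        synchronized (lock) {
            lock.notifyAll();
        }
    }
}
```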
After further investigation, it looks like what is happening is: …
I am debugging further to map out the life cycle of the connection after a VertxException is thrown in the event loop. I think that …
Force-pushed from c810ccd to 7160e24.
@stuartwdouglas we were not registering a closeHandler on the response to notify the worker thread when the connection is closed from the client side while there is still data to be written. I have updated my PR accordingly.
Force-pushed from 7160e24 to af32598.
@johnaohara I'm not seeing your deadlock scenario. You can only have a deadlock if there are two separate things being locked; there are no locks being used here, only synchronization with wait/notify. Since there is only synchronization on request.connection(), it's impossible to deadlock. What I think the possible error is: there is no synchronized block in the drain handler. Maybe this isn't true, but I thought notify had to be called within a synchronized block. Apologies for missing this.
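The "notify must be called within a synchronized block" point is a hard rule of Java's monitor API: calling `notify()` without holding the object's monitor throws `IllegalMonitorStateException`, so a drain handler that notifies outside a `synchronized` block would never wake the waiting worker. A minimal demonstration (not project code):

```java
// Demonstrates that Object.notify() requires holding the monitor.
public final class NotifyDemo {
    public static boolean notifyWithoutLock(Object monitor) {
        try {
            monitor.notify(); // caller does NOT hold the monitor
            return true;
        } catch (IllegalMonitorStateException e) {
            return false; // the JVM rejects the call
        }
    }

    public static boolean notifyWithLock(Object monitor) {
        synchronized (monitor) {
            monitor.notify(); // legal: we own the monitor here
            return true;
        }
    }
}
```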
When I wrote this I had misunderstood what was happening. What I was actually concerned about was a livelock scenario, where you have a worker thread obtaining a lock on …
What was missing was registering a …
1. Line 89 in af32598
2. Line 137 in af32598
3. Line 34 in af32598
...steasy/runtime/src/main/java/io/quarkus/resteasy/runtime/standalone/VertxBlockingOutput.java
I think what we actually need is #5491.
#5491 still finishes with the worker pool threads locked.
Force-pushed from af32598 to be218e4.
@stuartwdouglas I have rebased on top of #5491 and added a check for a closed connection; this no longer hangs when connections are closed in the middle of writing.
buffer.clear() does not reset the buffer; it is buffer.release() that does this. It looks like if you call write with the response closed, the buffer will not be freed; once it has been passed into Netty, though, it should be safe. I am adding a try/catch to the write methods to handle this.
buffer.release() decrements the reference count and deallocates the buffer when the count reaches 0 (or, in the case of a pooled buffer, returns it to the pool). buffer.clear() resets the reader and writer indices, which in effect empties the buffer, the same as if all of its data had been read. I don't think we want to release the buffer at this point in the code path. I had another branch that does not try to write the buffer to the response if the connection has been closed.
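The distinction can be illustrated with a small stand-in class (`RefCountedBuffer` is hypothetical, mimicking the relevant parts of Netty's `ByteBuf`; it is not the real API):

```java
// Hypothetical stand-in for Netty's ByteBuf, illustrating the difference
// between clear() (empties the readable region, keeps the memory alive)
// and release() (drops a reference, freeing the memory at refCnt == 0).
public final class RefCountedBuffer {
    private int refCnt = 1;
    private int readerIndex = 0;
    private int writerIndex;
    private boolean deallocated = false;

    public RefCountedBuffer(int writerIndex) { this.writerIndex = writerIndex; }

    // clear(): resets the indices so the buffer reads as empty,
    // but does not touch the reference count or free any memory.
    public void clear() { readerIndex = 0; writerIndex = 0; }

    // release(): decrements the reference count; the buffer is only
    // deallocated (or returned to its pool) when the count reaches 0.
    public boolean release() {
        if (--refCnt == 0) { deallocated = true; return true; }
        return false;
    }

    public int readableBytes() { return writerIndex - readerIndex; }
    public int refCnt() { return refCnt; }
    public boolean isDeallocated() { return deallocated; }
}
```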
I have updated quarkus-http with these changes: quarkusio/quarkus-http@4a8fde5. Do you want to update this PR, or do you want me to?
@stuartwdouglas I have updated the PR to throw an IOException if the client has closed the connection, and to release the buffer if an exception occurs. The stack trace is a lot clearer now as well.
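The shape of that fix might look like the following sketch (all names are hypothetical, not the actual `VertxBlockingOutput` code): fail fast with an IOException when the client has already closed, and release the buffer only when the write fails before ownership passes downstream.

```java
import java.io.IOException;

// Sketch: throw IOException for a client-closed connection and release
// the buffer on failure, so neither path leaks buffer memory.
public final class ClosedAwareOutput {
    public interface Buffer { void release(); }
    public interface Sink { void write(Buffer buf) throws IOException; }

    private volatile boolean connectionClosed;

    public void markClosed() { connectionClosed = true; }

    public void write(Buffer buffer, Sink sink) throws IOException {
        if (connectionClosed) {
            buffer.release(); // nothing will consume it; avoid a leak
            throw new IOException("Connection has been closed by the client");
        }
        try {
            sink.write(buffer); // on success, ownership passes downstream
        } catch (IOException | RuntimeException e) {
            buffer.release(); // write failed before the hand-off; free it
            throw e;
        }
    }
}
```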
...steasy/runtime/src/main/java/io/quarkus/resteasy/runtime/standalone/VertxBlockingOutput.java
Can you squash the commits?
I can, but …
Force-pushed from 4aa4f6f to f0c802b.
Force-pushed from f0c802b to c4e8693.
There is a race condition accessing `waitingForDrain` in `VertxBlockingOutput.awaitWriteable()`: `request.response().drainHandler()` can be called from a separate thread, but access to the private boolean `waitingForDrain` is not guarded in `request.response().drainHandler()`.

I have used the same locking pattern used for `request.response().exceptionHandler()` and `request.response().endHandler()`.

However, I have concerns that this could result in a deadlock if we are holding the intrinsic lock on `request.connection()` in `VertxBlockingOutput.awaitWriteable()` and a call to `request.response().drainHandler()` in a separate thread blocks waiting for that lock.

Would it be better for `waitingForDrain` to be defined as `volatile`?
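On the closing question: making the flag `volatile` would fix the visibility half of the problem, but `Object.wait()` and `notify()` must still be called while holding the same monitor, so the flag updates usually end up inside the `synchronized` block anyway. A minimal sketch of that pattern (hypothetical names, not the actual `VertxBlockingOutput` code):

```java
// Sketch: the worker parks on the monitor until the drain handler clears
// the flag. The volatile keyword gives visibility, but both sides still
// need the monitor for wait()/notifyAll() to be legal.
public final class VolatileFlagDemo {
    private final Object monitor = new Object();
    private volatile boolean waitingForDrain = true;

    // Worker side: blocks until the drain handler clears the flag.
    public void await() throws InterruptedException {
        synchronized (monitor) {
            while (waitingForDrain) {
                monitor.wait(); // releases the monitor while parked
            }
        }
    }

    // Drain-handler side: clears the flag and wakes the worker.
    public void drained() {
        synchronized (monitor) {
            waitingForDrain = false;
            monitor.notifyAll();
        }
    }
}
```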