[core] Improve status messages and add comments about stale seq_no handling #54470
```diff
@@ -133,12 +133,15 @@ bool ActorSchedulingQueue::CancelTaskIfFound(TaskID task_id) {
 /// Schedules as many requests as possible in sequence.
 void ActorSchedulingQueue::ScheduleRequests() {
   // Cancel any stale requests that the client doesn't need any longer.
+  // This happens when the client sends an RPC with the client_processed_up_to
+  // sequence number higher than the lowest sequence number of a pending actor task.
+  // In that case, the client no longer needs the task to execute (e.g., it has been retried).
   while (!pending_actor_tasks_.empty() &&
          pending_actor_tasks_.begin()->first < next_seq_no_) {
     auto head = pending_actor_tasks_.begin();
-    RAY_LOG(ERROR) << "Cancelling stale RPC with seqno "
-                   << pending_actor_tasks_.begin()->first << " < " << next_seq_no_;
-    head->second.Cancel(Status::Invalid("client cancelled stale rpc"));
+    head->second.Cancel(Status::Invalid(
+        "Task cancelled due to stale sequence number. The client intentionally "
+        "discarded this task."));
     {
       absl::MutexLock lock(&mu_);
       pending_task_id_to_is_canceled.erase(head->second.TaskID());
```
```diff
@@ -170,39 +173,40 @@ void ActorSchedulingQueue::ScheduleRequests() {
   if (pending_actor_tasks_.empty() ||
       !pending_actor_tasks_.begin()->second.CanExecute()) {
-    // No timeout for object dependency waits.
+    // Either there are no tasks to execute, or the head of line is blocked waiting for
+    // its dependencies. We do not set a timeout waiting for the dependencies.
```
Review comment: There is actually a potentially interesting edge case here: it could be that the head-of-line task is blocked waiting for dependencies *and* has an out-of-order sequence number. In that case, we would hang forever without the timeout applying. I'm not sure if this can practically happen, but it might be worth reordering the checks here. @dayshah, check my reasoning here. It should be relatively straightforward to fix.

Reply: Its dependencies should eventually be fetched though, right? And then we'll start the 30-second timer. It takes a little longer because the 30s timer starts after the fetch, but it's still correct. The dependency fetch shouldn't hang in any case other than the one my retry PR is handling, hopefully. If the object was totally lost, I'm hoping the dependency waiter still unblocks the task eventually; otherwise that's a problem everywhere. Or are you talking about some other case?

Reply: Yes, it should get unblocked eventually.
```diff
     wait_timer_.cancel();
   } else {
-    // Set a timeout on the queued tasks to avoid an infinite wait on failure.
+    // We are waiting for a task with an earlier seq_no from the client.
+    // The client always sends tasks in seq_no order, so in the majority of cases we
+    // should receive the expected message soon, but messages can come in out of order.
+    //
+    // We set a generous timeout in case the expected seq_no is never received to avoid
+    // hanging. This should happen only if the client crashes or misbehaves. After the
+    // timeout, all tasks will be canceled and the client (if alive) must retry.
     wait_timer_.expires_from_now(boost::posix_time::seconds(reorder_wait_seconds_));
     RAY_LOG(DEBUG) << "waiting for " << next_seq_no_ << " queue size "
                    << pending_actor_tasks_.size();
     wait_timer_.async_wait([this](const boost::system::error_code &error) {
       if (error == boost::asio::error::operation_aborted) {
-        return;  // time deadline was adjusted
+        return;  // Timer deadline was adjusted.
       }
+      RAY_LOG(ERROR) << "Timed out waiting for task with seq_no=" << next_seq_no_
+                     << ", cancelling all queued tasks.";
+      while (!pending_actor_tasks_.empty()) {
+        auto head = pending_actor_tasks_.begin();
+        head->second.Cancel(
+            Status::Invalid("Server timed out while waiting for an earlier seq_no."));
+        next_seq_no_ = std::max(next_seq_no_, head->first + 1);
+        {
+          absl::MutexLock lock(&mu_);
+          pending_task_id_to_is_canceled.erase(head->second.TaskID());
+        }
+        pending_actor_tasks_.erase(head);
+      }
-      OnSequencingWaitTimeout();
     });
   }
 }
```
```diff
-/// Called when we time out waiting for an earlier task to show up.
-void ActorSchedulingQueue::OnSequencingWaitTimeout() {
```
Review comment: I inlined this because it improved readability, but fine to keep it separate if others prefer.

Reply: I prefer the inline.
```diff
-  RAY_CHECK(std::this_thread::get_id() == main_thread_id_);
-  RAY_LOG(ERROR) << "timed out waiting for " << next_seq_no_
-                 << ", cancelling all queued tasks";
-  while (!pending_actor_tasks_.empty()) {
-    auto head = pending_actor_tasks_.begin();
-    head->second.Cancel(Status::Invalid("client cancelled stale rpc"));
-    next_seq_no_ = std::max(next_seq_no_, head->first + 1);
-    {
-      absl::MutexLock lock(&mu_);
-      pending_task_id_to_is_canceled.erase(head->second.TaskID());
-    }
-    pending_actor_tasks_.erase(head);
-  }
-}
```
```diff
 void ActorSchedulingQueue::AcceptRequestOrRejectIfCanceled(TaskID task_id,
                                                            InboundRequest &request) {
   bool is_canceled = false;
```