
Conversation

@dranikpg dranikpg (Contributor) commented Jan 9, 2024

Optimize the blocking command hot path to be ONE hop instead of three in the single-shard case.

If BLPOP runs on only a single shard and detects that the key exists, we can actually perform the action in the same hop. If the key doesn't exist, we have to tell the transaction that we changed our mind at the very last second and would like to continue 🙂

TODO: check streams? (Doesn't use container utils)
TODO: bench? (Done by Kostas)
TODO: check journal? (Done)
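
To make the fast path concrete, here is a minimal sketch of the idea. All names below are illustrative stand-ins, not Dragonfly's actual API; only the AVOID_CONCLUDING flag comes from this PR's diff.

#include <cstdint>
#include <deque>
#include <iostream>
#include <optional>
#include <string>
#include <utility>

// Toy stand-in for the result type a shard callback returns.
struct RunnableResult {
  enum Flag : uint16_t { AVOID_CONCLUDING = 1 };
  std::optional<std::string> value;  // popped element, if any
  uint16_t flags = 0;
};

// A BLPOP-style callback, run eagerly in the same hop that schedules the
// transaction on its single shard.
RunnableResult TryPop(std::deque<std::string>& list) {
  RunnableResult res;
  if (!list.empty()) {
    // Key exists: perform the action right here, in hop one of one.
    res.value = std::move(list.front());
    list.pop_front();
  } else {
    // Key missing after all: signal the transaction that we changed our
    // mind at the last second and want to continue (block and wait)
    // instead of concluding.
    res.flags |= RunnableResult::AVOID_CONCLUDING;
  }
  return res;
}

int main() {
  std::deque<std::string> list{"a"};
  RunnableResult res = TryPop(list);
  std::cout << (res.value ? *res.value : "<blocking>") << "\n";
}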

@dranikpg dranikpg (Author) left a comment

Needs some more polishing and comments

@kostasrim kostasrim (Contributor) left a comment

Good work, and the best part is that it's mostly the happy path for our use case (single hop && concluding, i.e., the list is not empty).

@dranikpg dranikpg marked this pull request as ready for review January 12, 2024 12:05
Comment on lines 477 to 478
if (result == OpStatus::OUT_OF_MEMORY) {
local_result_ = result; // TODO: What???
@dranikpg dranikpg (Author) commented:

How is this safe?... (more than 1 shard)

@dranikpg dranikpg requested a review from romange January 12, 2024 12:07
Signed-off-by: Vladislav Oleshko <[email protected]>

// Handle result flags to alter behaviour.
if (result.flags & RunnableResult::AVOID_CONCLUDING) {
CHECK_EQ(unique_shard_cnt_, 1u); // multi shard must know it ahead, so why do those tricks?
A collaborator commented:

I do not understand the comment. What tricks? What must multi-shard know?

@dranikpg dranikpg (Author) replied:

Changed the comment. A multi-shard transaction must conclude either on all shards or on none; because the callbacks don't coordinate, it must know this ahead of time -> so there's no point in using this flag there.
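
As a hedged illustration of that invariant, here is a toy sketch with made-up types; only the CHECK_EQ mirrors the diff above.

#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

struct RunnableResult {
  enum Flag : uint16_t { AVOID_CONCLUDING = 1 };
  uint16_t flags = 0;
};

// Per-shard callbacks run without coordinating with each other, so a
// transaction spanning several shards cannot let one callback decide at
// run time whether to conclude: either all shards conclude or none do,
// and that choice has to be fixed before dispatch. Hence the flag is
// only legal with exactly one shard.
void HandleResultFlags(const std::vector<RunnableResult>& per_shard,
                       size_t unique_shard_cnt) {
  for (const RunnableResult& r : per_shard) {
    if (r.flags & RunnableResult::AVOID_CONCLUDING)
      assert(unique_shard_cnt == 1);  // mirrors the CHECK_EQ in the diff
  }
}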

@romange romange (Collaborator) commented Jan 14, 2024

Looks simple but scary at the same time.

@romange romange requested a review from adiholden January 15, 2024 02:01
@romange romange (Collaborator) commented Jan 15, 2024

@adiholden please go over as well. Also, I left you a question there too.

shard->PollExecution("schedule_unique", nullptr);
}

return quick_run;
A contributor commented:

What if quick_run = true but result.flags is RunnableResult::AVOID_CONCLUDING? Should we return false in this case?

@dranikpg dranikpg (Author) replied Jan 15, 2024:

True, because the callback did run and won't run again until requested. This is why quick_run and lock_keys are now separate variables. I also updated the comment above: false means the callback will be run via the tx queue (with the flag set, it won't run again until the next Execute).
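
A hedged sketch of that return-value contract, with toy types and an assumed signature rather than the real Transaction code:

#include <cstdint>
#include <functional>

struct RunnableResult {
  enum Flag : uint16_t { AVOID_CONCLUDING = 1 };
  uint16_t flags = 0;
};
using Callback = std::function<RunnableResult()>;

// Returns true iff the callback already executed in the scheduling hop.
// True is final even under AVOID_CONCLUDING: the callback ran once and
// won't run again until the next Execute(); it merely left the
// transaction unconcluded. False means the callback never ran here and
// will execute later via the tx queue (the PollExecution path above).
bool ScheduleUniqueShard(const Callback& cb, bool can_run_immediately,
                         bool* keep_scheduled) {
  if (!can_run_immediately)
    return false;  // will run via the tx queue
  RunnableResult result = cb();
  *keep_scheduled = (result.flags & RunnableResult::AVOID_CONCLUDING) != 0;
  return true;  // quick run, regardless of the concluding decision
}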

adiholden previously approved these changes Jan 15, 2024
Signed-off-by: Vladislav <[email protected]>
Signed-off-by: Vladislav Oleshko <[email protected]>
@romange romange (Collaborator) left a comment

Do we have tests covering this code?

@dranikpg dranikpg (Author) commented Jan 15, 2024

Do we have tests covering this code?

We have lots of existing tests that cover both paths, but the coverage density is lower now.

@dranikpg dranikpg merged commit de81709 into dragonflydb:main Jan 15, 2024
@dranikpg dranikpg deleted the tx-work-2 branch January 15, 2024 19:01