Brute force knn tile size heuristic #316
Conversation
tfeher left a comment
Thanks Malte for the PR! Could you address the two smaller issues below?
benfred left a comment
thanks for fixing this @mfoerste4 !
std::vector<IdxType>* id_ranges;
if (translations == nullptr) {
  std::vector<IdxType> id_ranges;
Longer term, we can probably remove the code that handles translations entirely. It's not being used in the public API anymore and is just left over from the RAFT version. (This doesn't need to change in this PR, though.)
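For context, a minimal sketch of the kind of translations handling being discussed, assuming the leftover RAFT-style pattern where default id offsets are built when the caller passes no translations. The function name, parameters, and helper logic here are illustrative assumptions, not the actual cuVS code:

    #include <cstddef>
    #include <vector>

    // Hypothetical sketch: build per-partition id offsets when no
    // user-supplied translations are given.
    template <typename IdxType>
    std::vector<IdxType> make_id_ranges(const std::vector<IdxType>* translations,
                                        const std::vector<std::size_t>& input_sizes)
    {
      std::vector<IdxType> id_ranges;
      if (translations == nullptr) {
        // No translations supplied: each partition's ids start at the
        // running total of the preceding partition sizes.
        IdxType total = 0;
        for (std::size_t part_size : input_sizes) {
          id_ranges.push_back(total);
          total += static_cast<IdxType>(part_size);
        }
      } else {
        // Use the caller-provided offsets as-is.
        id_ranges = *translations;
      }
      return id_ranges;
    }

If the public API no longer exposes translations, only the nullptr branch is ever exercised, which is why the whole path could eventually be dropped.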
benfred left a comment
lgtm - thanks @mfoerste4 !
tfeher left a comment
Thanks Malte for the updates, LGTM!
/merge
This PR modifies the tile size heuristic for brute force knn, as mentioned in #277 (a rough sketch of the idea is included below).
It also removes some unneeded CUDA calls to save a couple of microseconds, which might be relevant when running smaller batches.
CC @tfeher
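To illustrate what a tile size heuristic for brute force knn generally does, here is a minimal sketch: cap the pairwise-distance tile by the actual problem size so small query batches do not allocate oversized temporary buffers, then shrink further if the tile would not fit in available memory. The function name, constants, and memory-budget rule below are illustrative assumptions and do not reproduce the actual cuVS implementation changed in this PR:

    #include <algorithm>
    #include <cstddef>

    // Hypothetical sketch: choose tile dimensions for the pairwise
    // distance computation in brute force knn.
    void choose_tile_size(std::size_t n_queries,
                          std::size_t n_dataset,
                          std::size_t element_size,
                          std::size_t free_mem_bytes,
                          std::size_t& tile_rows,
                          std::size_t& tile_cols)
    {
      // Never make the tile larger than the problem itself, so that
      // small batches launch proportionally small kernels.
      tile_rows = std::min<std::size_t>(n_queries, 256);
      tile_cols = std::min<std::size_t>(n_dataset, 1 << 16);

      // Halve the tile width while the temporary distance matrix would
      // exceed a fraction of the currently free device memory.
      while (tile_rows * tile_cols * element_size > free_mem_bytes / 4 &&
             tile_cols > 1024) {
        tile_cols /= 2;
      }
    }

The benefit of tightening such a heuristic is that query batches much smaller than the fixed upper bound no longer pay the cost of the worst-case tile allocation and launch configuration.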