This repository was archived by the owner on Oct 31, 2023. It is now read-only.

Conversation

@acarapetis

If a 16-bit float tensor on the CPU was passed as the input to quantize_blockwise or the output buffer for dequantize_blockwise, the code was previously passing its address to the c[de]quantize_blockwise_cpu_fp32 method, silently casting it to a 32-bit float* and resulting in segfaults.

A similar issue occurs if the absmax/code arguments to dequantize_blockwise are (somehow) 16-bit, resulting in illegal memory accesses on the GPU.

It took me a little while to track down the causes because of the cryptic errors, so I figured it was worth suggesting these changes. I've only been using the blockwise methods, so it's possible there are similar issues in other parts of the code - might be worth checking :)

This PR also includes a couple of unrelated typo fixes.

Thanks for your work on this library, it's nice to squeeze the most I can out of my paltry GPU memory :)
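
For illustration (a standalone, hypothetical snippet, not bitsandbytes code): reinterpreting a float16 buffer through a 32-bit float pointer reads twice as many bytes as the tensor actually owns, which is the out-of-bounds access behind the segfaults described above.

```python
import ctypes
import torch

a = torch.ones(4, dtype=torch.float16)  # 4 elements = 8 bytes of storage

# Reinterpret the same memory as 32-bit floats, which is effectively what
# happens when a float16 pointer is handed to the fp32-only C function.
wrong = ctypes.cast(ctypes.c_void_p(a.data_ptr()), ctypes.POINTER(ctypes.c_float))

# Reading 4 "floats" consumes 16 bytes; the last two lie outside the
# tensor's storage. Here it only prints garbage values, but on a large
# tensor the same access pattern can run off the allocation and segfault.
print([wrong[i] for i in range(4)])
```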

It looks like this was meant to be NotImplementedError, but that's not
really appropriate anyway, since this isn't an abstract method. Since
any other dtype is really just a bad input, a ValueError seems
appropriate.
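
A minimal sketch of the kind of check this refers to (the function name and message are assumptions, not the literal diff):

```python
import torch

def _assert_blockwise_dtype(A: torch.Tensor) -> None:
    # An unsupported dtype is bad input rather than a missing feature,
    # hence ValueError instead of NotImplementedError.
    if A.dtype not in (torch.float16, torch.float32):
        raise ValueError(
            f"Blockwise quantization only supports 16/32-bit floats, got {A.dtype}"
        )
```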
If a 16-bit float tensor on the CPU was passed as the input to
quantize_blockwise or the output buffer for dequantize_blockwise, the
code was previously passing its pointer to the
c[de]quantize_blockwise_cpu_fp32 method, silently casting it to a 32-bit
float* and resulting in segfaults.

A similar issue occurs if the absmax/code arguments to
dequantize_blockwise are float16s, resulting in illegal memory accesses
on the GPU.

This commit adds some simple dtype guards to ensure the tensors have the
expected type before passing them to the C extension.
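
Roughly, such guards might look like the sketch below (the helper name and exact conditions are assumptions based on the description above, not the PR's literal diff):

```python
import torch

def _check_blockwise_args(A: torch.Tensor, absmax: torch.Tensor,
                          code: torch.Tensor, out: torch.Tensor) -> None:
    """Validate dtypes before raw pointers are handed to the C extension."""
    # The quantization code/absmax buffers are indexed as 32-bit floats.
    if absmax.dtype != torch.float32:
        raise ValueError(f"absmax must be float32, got {absmax.dtype}")
    if code.dtype != torch.float32:
        raise ValueError(f"code must be float32, got {code.dtype}")
    # The CPU kernels only exist in an fp32 variant.
    if A.device.type == "cpu" and (A.dtype != torch.float32 or out.dtype != torch.float32):
        raise ValueError("CPU blockwise (de)quantization only supports float32 tensors")
```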
@facebook-github-bot

Hi @acarapetis!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@facebook-github-bot added the CLA Signed label on Jan 25, 2022
@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@TimDettmers
Contributor

Thanks for this PR! I am currently preparing a major overhaul of these algorithms and interfaces. I have to check how to best integrate this PR.
