
Conversation

@fxmarty-amd (Contributor) commented Jan 31, 2025

#10765 was merged into vLLM to support Quark-quantized models. Unfortunately, test coverage is somewhat limited, and the issue fixed in this PR was not caught by the tests. Namely, compare

```python
if current_platform.is_rocm():
    weight, weight_scale, input_scale = \
        normalize_e4m3fn_to_e4m3fnuz(
            weight=weight,
            weight_scale=weight_scale,
            input_scale=layer.input_scale)
    if input_scale is not None:
        layer.input_scale = Parameter(input_scale,
                                      requires_grad=False)

weight_scale, weight = requantize_with_max_scale(
    weight=weight,
    weight_scale=weight_scale,
    logical_widths=layer.logical_widths,
)
```

and
```python
if self.qscheme == "per_tensor":
    max_w_scale, weight = requantize_with_max_scale(
        weight=layer.weight,
        weight_scale=layer.weight_scale,
        logical_widths=layer.logical_widths,
    )
    if current_platform.is_rocm():
        weight, max_w_scale, input_scale = normalize_e4m3fn_to_e4m3fnuz(
            weight=weight,
            weight_scale=max_w_scale,
            input_scale=layer.input_scale)
        if input_scale is not None:
            layer.input_scale = Parameter(input_scale,
                                          requires_grad=False)
    layer.weight = Parameter(weight.t(), requires_grad=False)
    layer.weight_scale = Parameter(max_w_scale, requires_grad=False)
```

We see that the call order of normalize_e4m3fn_to_e4m3fnuz and requantize_with_max_scale is swapped in the Quark path. Because the ROCm normalization step adjusts the weights and scales for the e4m3fnuz format, requantizing against the max scale before rather than after it produces different weights when a checkpoint using "quant_method": "quark" is loaded in vLLM.
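
Assuming the fix simply aligns the Quark per_tensor branch with the fp8 path shown first (a sketch, not this PR's exact diff; it relies on the same imports and surrounding context as the snippets above), the corrected code would look like:

```python
# Sketch: Quark per_tensor path with the call order aligned to the fp8
# method, i.e. normalize to e4m3fnuz on ROCm first, then requantize with
# the max scale across the fused logical widths.
if self.qscheme == "per_tensor":
    weight = layer.weight
    weight_scale = layer.weight_scale
    if current_platform.is_rocm():
        weight, weight_scale, input_scale = normalize_e4m3fn_to_e4m3fnuz(
            weight=weight,
            weight_scale=weight_scale,
            input_scale=layer.input_scale)
        if input_scale is not None:
            layer.input_scale = Parameter(input_scale, requires_grad=False)
    max_w_scale, weight = requantize_with_max_scale(
        weight=weight,
        weight_scale=weight_scale,
        logical_widths=layer.logical_widths,
    )
    layer.weight = Parameter(weight.t(), requires_grad=False)
    layer.weight_scale = Parameter(max_w_scale, requires_grad=False)
```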

We might want to add more tests or extend https://github.com/vllm-project/vllm/blob/main/tests/quantization/test_quark.py in this PR or in a later PR.
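
For instance, a minimal sketch of such an extension (the checkpoint name and prompt are hypothetical placeholders, not this PR's actual test):

```python
# Hedged sketch of an additional test for tests/quantization/test_quark.py.
# The model id below is a placeholder; any FP8 checkpoint whose config sets
# "quant_method": "quark" would do. Note vLLM can also infer the
# quantization method from the checkpoint config.
import pytest

from vllm import LLM, SamplingParams


@pytest.mark.parametrize("model_id", ["amd/quark-fp8-test-model"])
def test_quark_fp8_weight_loading(model_id: str):
    llm = LLM(model=model_id, quantization="quark")
    sampling_params = SamplingParams(temperature=0.0, max_tokens=8)
    outputs = llm.generate(["Hello, my name is"], sampling_params)
    # Loose sanity check: broken weight loading typically produces empty
    # or degenerate output.
    assert len(outputs[0].outputs[0].text.strip()) > 0
```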


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run other CI tests on top of it by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@BowenBao (Contributor) commented Feb 3, 2025

@fxmarty-amd let's update this PR with the fix we discussed internally.

@fxmarty-amd force-pushed the upstream_quark-fp8-fix branch from 18b597e to 1eacf30 on February 7, 2025
@kewang-xlnx (Contributor) left a comment

ok


mergify bot commented Feb 7, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @fxmarty-amd.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Feb 7, 2025
@fxmarty-amd force-pushed the upstream_quark-fp8-fix branch from 82ed62b to 735fb92 on February 7, 2025
@mergify mergify bot removed the needs-rebase label Feb 7, 2025
@fxmarty-amd changed the title from "Fix quark fp8 format loading" to "[Bugfix] Fix quark fp8 format loading on AMD GPUs" on Feb 7, 2025
@mgoin (Member) left a comment

LGTM!

@fxmarty-amd (Contributor, Author) commented

@mgoin is there anything I should do about the red CI? Some failing tests have been due to timeouts reaching hf.co over the last few commits.

@mgoin added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Feb 10, 2025
@mgoin (Member) commented Feb 10, 2025

Not much to do there; we are landing PRs to try to reduce flakiness with HF. Let me merge with main.

@mgoin (Member) commented Feb 11, 2025

@fxmarty-amd There is an error related to your parameters in the quantization test

```
[2025-02-10T23:36:19Z] FAILED quantization/test_quark.py::test_quark_fp8_generate - RuntimeError: Creating a Parameter from an instance of type PerTensorScaleParameter requires that detach() returns an instance of the same type, but return type Tensor was found instead. To use the type as a Parameter, please correct the detach() semantics defined by its __torch_dispatch__() implementation.
[2025-02-10T23:36:19Z] FAILED quantization/test_quark.py::test_quark_fp8_parity - RuntimeError: Creating a Parameter from an instance of type PerTensorScaleParameter requires that detach() returns an instance of the same type, but return type Tensor was found instead. To use the type as a Parameter, please correct the detach() semantics defined by its __torch_dispatch__() implementation.
```
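
For reference, the usual remedy for this class of error (an assumption about the eventual fix, not this PR's actual diff) is to unwrap vLLM's parameter subclasses into plain tensors before re-wrapping them in torch.nn.Parameter during process_weights_after_loading:

```python
# Hedged sketch: PerTensorScaleParameter is a torch.nn.Parameter subclass
# whose detach() returns a plain Tensor rather than the subclass type, so
# wrapping it directly in Parameter raises the RuntimeError above.
# Accessing .data first yields a plain Tensor that Parameter accepts.
import torch
from torch.nn import Parameter


def to_plain_parameter(t: torch.Tensor) -> Parameter:
    return Parameter(t.data, requires_grad=False)


# e.g. inside process_weights_after_loading:
# layer.weight_scale = to_plain_parameter(layer.weight_scale)
```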

@fxmarty-amd force-pushed the upstream_quark-fp8-fix branch 4 times, most recently from 671ffa9 to 4c30bcd, on April 22, 2025
@russellb russellb moved this to Secondary in Structured Output Apr 22, 2025
@kewang-xlnx force-pushed the upstream_quark-fp8-fix branch 2 times, most recently from b0845f9 to 66fd8c2, on May 7, 2025
fxmarty-amd and others added 3 commits on May 7, 2025
@kewang-xlnx force-pushed the upstream_quark-fp8-fix branch from 66fd8c2 to 8825c36 on May 8, 2025
@kewang-xlnx (Contributor) commented

@mgoin @robertgshaw2-redhat Could you please take a look at this error, https://buildkite.com/vllm/ci/builds/19546#0196adcf-4658-4117-9f60-9faa0fb1fc90, and give some advice on how to fix it? Thanks.

@DarkLight1337 (Member) commented

It is also failing on main, so you can ignore it. I can force-merge once I am back at my PC.

@vllm-bot vllm-bot merged commit bb239a7 into vllm-project:main May 8, 2025
54 of 56 checks passed
princepride pushed a commit to princepride/vllm that referenced this pull request on May 10, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request on May 12, 2025
mawong-amd pushed a commit to ROCm/vllm that referenced this pull request May 14, 2025
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025
minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025