🚀 The feature, motivation and pitch
I want to run Qwen3-235B-A22B on Ampere (A100) in FP8.
I quantized it to FP8 W8A16 using llm-compressor:
https://huggingface.co/cognitivecomputations/Qwen3-235B-A22B-FP8-W8A16
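For context, a data-free weight-only recipe along these lines would look roughly like the sketch below. This is my own reconstruction, not the exact script behind that checkpoint: the `config_groups` scheme (8-bit float weights, activations left in 16-bit), the `ignore` list for the router/gate layers, and the output path are all assumptions, and the top-level `oneshot` import assumes a recent llm-compressor release.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-235B-A22B"
SAVE_DIR = "Qwen3-235B-A22B-FP8-W8A16"  # hypothetical local output path

# Loading a 235B-parameter model needs substantial host memory.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Weight-only FP8 (W8A16): quantize Linear weights to 8-bit float and
# leave activations in 16-bit. Skipping lm_head and the MoE gate/router
# layers is common practice; this ignore list is an assumption.
recipe = QuantizationModifier(
    config_groups={
        "group_0": {
            "targets": ["Linear"],
            "weights": {
                "num_bits": 8,
                "type": "float",
                "strategy": "channel",
                "symmetric": True,
                "dynamic": False,
            },
        }
    },
    ignore=["lm_head", "re:.*mlp.gate$"],
)

# Weight-only round-to-nearest quantization needs no calibration data.
oneshot(model=model, recipe=recipe)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```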
But when I run it with vLLM, I get this error:
ERROR 05-02 03:16:53 [multiproc_executor.py:435] AssertionError: float16 is required for MoE compressed models. Set dtype=torch.float16
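The assertion message itself points at the stopgap it enforces: pinning the model dtype to float16 instead of Qwen3's bf16 default. A minimal sketch via vLLM's offline API, assuming that suggestion applies to this checkpoint (the tensor-parallel size is illustrative, and I haven't verified that the float16 path then handles FP8 MoE weights, which is the gap this request asks to close):

```python
from vllm import LLM, SamplingParams

# Per the assertion, force dtype=float16 so the W8A16 Marlin path can
# be selected. tensor_parallel_size=8 is illustrative for one A100 node.
llm = LLM(
    model="cognitivecomputations/Qwen3-235B-A22B-FP8-W8A16",
    dtype="float16",
    tensor_parallel_size=8,
)

outputs = llm.generate(
    ["Give me a short introduction to large language models."],
    SamplingParams(temperature=0.7, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```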
Please add support for FP8 MoE in the Marlin kernel.
Alternatives
No response
Additional context
No response
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.