### Feature request

Is there a way to add AutoAWQ support to vLLM? I'm setting `quantization` to `'awq'`, but it's not working.

### Motivation

Faster inference.

### Your contribution

N/A
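For reference, this is roughly how I'm trying to load an AWQ-quantized checkpoint (a minimal sketch; the model name is just an example AWQ checkpoint, and running it requires a CUDA GPU):

```python
from vllm import LLM, SamplingParams

# Example AWQ-quantized checkpoint from the Hugging Face Hub;
# substitute whichever AutoAWQ-produced model you are testing.
llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",
    quantization="awq",  # the setting that does not seem to take effect
)

outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(max_tokens=16),
)
print(outputs[0].outputs[0].text)
```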