
Commit c9e864a

dsikka authored and kylesayrs committed
[Docs] Update README to list fp4 (vllm-project#1462)
Summary
- Update supported formats list in README.md
- Point to examples
- Fix AWQ link in README

Co-authored-by: Kyle Sayers <[email protected]>
1 parent 0acf6b3 commit c9e864a

File tree

1 file changed (+4, -2 lines)


README.md

Lines changed: 4 additions & 2 deletions
@@ -16,13 +16,14 @@
 
 Big updates have landed in LLM Compressor! Check out these exciting new features:
 
+* **FP4 Weight Only Quantization Support:** Quantize weights to FP4 and seamlessly run the compressed model in vLLM. Model weights are quantized following the NVFP4 [configuration](https://github.com/neuralmagic/compressed-tensors/blob/1b6287a4b21c16e0842f32fadecb20bb4c0d4862/src/compressed_tensors/quantization/quant_scheme.py#L103). See an example [here](examples/quantization_w4a16_fp4/llama3_example.py).
 * **Axolotl Sparse Finetuning Integration:** Easily finetune sparse LLMs through our seamless integration with Axolotl. [Learn more here](https://docs.axolotl.ai/docs/custom_integrations.html#llmcompressor).
 * **AutoAWQ Integration:** Perform low-bit weight-only quantization efficiently using AutoAWQ, now part of LLM Compressor. *Note: This integration should be considered experimental for now. Enhanced support, including for MoE models and improved handling of larger models via layer sequential pipelining, is planned for upcoming releases.* [See the details](https://github.com/vllm-project/llm-compressor/pull/1177).
 * **Day 0 Llama 4 Support:** Meta utilized LLM Compressor to create the [FP8-quantized Llama-4-Maverick-17B-128E](https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8), optimized for vLLM inference using [compressed-tensors](https://github.com/neuralmagic/compressed-tensors) format.
 
 ### Supported Formats
 * Activation Quantization: W8A8 (int8 and fp8)
-* Mixed Precision: W4A16, W8A16
+* Mixed Precision: W4A16, W8A16, NVFP4A16
 * 2:4 Semi-structured and Unstructured Sparsity
 
 ### Supported Algorithms
@@ -50,8 +51,9 @@ pip install llmcompressor
 Applying quantization with `llmcompressor`:
 * [Activation quantization to `int8`](examples/quantization_w8a8_int8/README.md)
 * [Activation quantization to `fp8`](examples/quantization_w8a8_fp8/README.md)
+* [Weight only quantization to `fp4`](examples/quantization_w4a16_fp4/llama3_example.py)
 * [Weight only quantization to `int4` using GPTQ](examples/quantization_w4a16/README.md)
-* [Weight only quantization to `int4` using AWQ](examples/awq/awq_one_shot.py)
+* [Weight only quantization to `int4` using AWQ](examples/awq/README.md)
 * [Quantizing MoE LLMs](examples/quantizing_moe/README.md)
 * [Quantizing Vision-Language Models](examples/multimodal_vision/README.md)
 * [Quantizing Audio-Language Models](examples/multimodal_audio/README.md)
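For context on the new FP4 entry, here is a minimal sketch of a data-free, one-shot NVFP4A16 weight-only quantization run with `llmcompressor`. It is modeled on the project's other one-shot quantization examples rather than the exact contents of `examples/quantization_w4a16_fp4/llama3_example.py`; the model ID, save directory, and the `QuantizationModifier`-based recipe are assumptions.

```python
# Hedged sketch: FP4 (NVFP4A16) weight-only quantization with llm-compressor.
# Model ID, save directory, and the data-free recipe are assumptions modeled on
# the repository's other quantization examples, not the linked script itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed example model

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# NVFP4A16 = FP4 weights with 16-bit activations, matching the new
# "Mixed Precision: W4A16, W8A16, NVFP4A16" entry in Supported Formats.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4A16", ignore=["lm_head"])

# Weight-only quantization, so oneshot runs here without calibration data.
oneshot(model=model, recipe=recipe)

# Save in compressed-tensors format so the checkpoint can be served by vLLM.
SAVE_DIR = MODEL_ID.split("/")[-1] + "-NVFP4A16"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```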

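And a short, hedged example of the "seamlessly run the compressed model in vLLM" step from the FP4 bullet above, using vLLM's offline `LLM` API; the directory name simply matches the assumed `SAVE_DIR` from the sketch.

```python
# Hedged sketch: offline inference on the compressed checkpoint with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="Meta-Llama-3-8B-Instruct-NVFP4A16")  # path saved by the sketch above
params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["What does FP4 weight-only quantization change?"], params)
print(outputs[0].outputs[0].text)
```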
0 commit comments
