QArtSR: Quantization via Reverse-Module and Timestep-Retraining in One-Step Diffusion based Image Super-Resolution

Libo Zhu, Haotong Qin, Kaicheng Yang, Wenbo Li, Yong Guo, Yulun Zhang, Susanto Rahardja, and Xiaokang Yang, "QArtSR: Quantization via Reverse-Module and Timestep-Retraining in One-Step Diffusion based Image Super-Resolution", arXiv, 2025

[arXiv] [supplementary material] [visual results]

🔥🔥🔥 News

  • 2025-3-10: This repo is released.

Abstract: One-step diffusion-based image super-resolution (OSDSR) models now deliver increasingly strong performance. However, even though their denoising steps are reduced to one and they can be quantized to 8-bit to cut costs further, there is still significant potential to quantize OSDSR to lower bits. To explore these possibilities, we propose an efficient method, Quantization viA reverse-module and timestep-retraining for OSDSR, named QArtSR. First, we investigate the influence of the timestep value on the performance of quantized models. We then propose Timestep Retraining Quantization (TRQ) and Reversed Per-module Quantization (RPQ) strategies to calibrate the quantized model, adopting module-wise and image-level losses to update all quantized modules. Only the parameters of the quantization finetuning components are updated; the original weights are kept frozen. To ensure that all modules are fully finetuned, we add an extended end-to-end training stage after the per-module stage. Our 4-bit and 2-bit quantization experiments indicate that QArtSR outperforms recent leading methods, and the performance of 4-bit QArtSR is close to that of the full-precision model.
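As a rough illustration of the per-module calibration idea described above, the sketch below shows a learnable uniform fake quantizer and a calibration loop that trains only the quantizer scales so a quantized module mimics its frozen full-precision counterpart. This is not the official implementation: the names `FakeQuant` and `calibrate_module`, the plain MSE module loss, and the assumption that `calib_batches` is a list of input tensors are all illustrative.

```python
# Minimal sketch (assumptions noted above), PyTorch-style.
import torch
import torch.nn as nn


class FakeQuant(nn.Module):
    """Uniform fake quantization with a learnable step size (scale)."""

    def __init__(self, n_bits: int = 4, init_scale: float = 1.0):
        super().__init__()
        self.qmax = 2 ** (n_bits - 1) - 1
        self.qmin = -(2 ** (n_bits - 1))
        self.scale = nn.Parameter(torch.tensor(init_scale))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = x / self.scale
        # Straight-through estimator: round in the forward pass but let
        # gradients flow as if rounding were the identity.
        q = q + (torch.round(q) - q).detach()
        q = torch.clamp(q, self.qmin, self.qmax)
        return q * self.scale


def calibrate_module(fp_module: nn.Module,
                     q_module: nn.Module,
                     calib_batches,
                     lr: float = 1e-4,
                     steps: int = 200):
    """Per-module calibration: only quantizer scales are trained so that
    the quantized module matches the frozen full-precision module."""
    # Freeze everything, then re-enable only the quantizer scales.
    for p in q_module.parameters():
        p.requires_grad_(False)
    quant_params = [p for m in q_module.modules()
                    if isinstance(m, FakeQuant) for p in m.parameters()]
    for p in quant_params:
        p.requires_grad_(True)

    opt = torch.optim.Adam(quant_params, lr=lr)
    for step in range(steps):
        x = calib_batches[step % len(calib_batches)]
        with torch.no_grad():
            target = fp_module(x)  # full-precision reference output
        loss = torch.nn.functional.mse_loss(q_module(x), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In the paper's description, this per-module stage is applied to the modules in reverse order (RPQ) with a retrained timestep (TRQ), and is followed by an extended end-to-end stage with an image-level loss; both are omitted here for brevity.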


Visual comparison: HR, LR ($\times$4), DiffBIR (32-bit), OSEDiff (32-bit), PassionSR (8-bit), QArtSR (4-bit)

βš’οΈ TODO

  • Release code and pretrained models

🔗 Contents

  1. Datasets
  2. Calibration
  3. Results
  4. Citation

🔎 Results

QArtSR significantly outperforms previous methods under the W4A4, W3A3, and W2A2 settings.

Evaluation on Synthetic Datasets

Quantitative comparisons in Table 3 of the main paper (click to expand)

Visual comparison in Figure 6 of the main paper (click to expand)

📎 Citation

If you find the code helpful in your research or work, please cite the following paper(s).

@article{zhu2025qartsrquantizationreversemoduletimestepretraining,
  title={QArtSR: Quantization via Reverse-Module and Timestep-Retraining in One-Step Diffusion based Image Super-Resolution},
  author={Zhu, Libo and Qin, Haotong and Yang, Kaicheng and Li, Wenbo and Guo, Yong and Zhang, Yulun and Rahardja, Susanto and Yang, Xiaokang},
  journal={arXiv preprint arXiv:2503.05584},
  year={2025}
}
