Donald Shenaj♦♠, Ondrej Bohdal♦, Mete Ozay♦, Pietro Zanuttigh♠, Umberto Michieli♦
♦ Samsung R&D Institute UK ♠ University of Padova
ICCV 2025
Recent advancements in image generation models have enabled personalized image creation with both user-defined subjects (content) and styles. Prior works achieved personalization by merging corresponding low-rank adapters (LoRAs) through optimization-based methods, which are computationally demanding and unsuitable for real-time use on resource-constrained devices like smartphones. To address this, we introduce LoRA.rar, a method that not only improves image quality but also achieves a remarkable speedup in the merging process: a hypernetwork is pre-trained on a diverse set of content-style LoRA pairs and learns a merging strategy that generalizes to new, unseen pairs at test time.
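For intuition only, the sketch below illustrates hypernetwork-based LoRA merging in PyTorch: a small network receives summaries of a content LoRA and a style LoRA and predicts the coefficients used to combine their weight updates. All names (MergeHypernet, merge_layer), the pooling choice, and the per-layer coefficient scheme are illustrative assumptions, not the architecture implemented in this repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MergeHypernet(nn.Module):
    # Toy hypernetwork: maps two pooled LoRA summaries to two merge coefficients.
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, content_feat: torch.Tensor, style_feat: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([content_feat, style_feat], dim=-1))

def merge_layer(w_base, delta_content, delta_style, hypernet, feat_dim=256):
    # Summarize each LoRA weight update with a fixed-size pooled feature (illustrative choice).
    feat_c = F.adaptive_avg_pool1d(delta_content.flatten()[None, None, :], feat_dim).flatten()
    feat_s = F.adaptive_avg_pool1d(delta_style.flatten()[None, None, :], feat_dim).flatten()
    coeffs = hypernet(feat_c, feat_s)
    # Merged weight = base weight + predicted weighted sum of the two LoRA deltas.
    return w_base + coeffs[0] * delta_content + coeffs[1] * delta_style

# Example with random tensors standing in for one layer's weights.
hypernet = MergeHypernet()
w_base = torch.randn(64, 64)
delta_content, delta_style = torch.randn(64, 64), torch.randn(64, 64)
merged = merge_layer(w_base, delta_content, delta_style, hypernet)
print(merged.shape)  # torch.Size([64, 64])

Because the coefficients come from a single forward pass rather than a per-pair optimization, merging a new content-style pair is fast enough for on-device use.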
To set up the environment, run:
conda env create -f lorarar.yaml
conda activate lorarar
Image attributions are provided in the supplementary material. To download the images, run:
bash scripts/download_datasets.sh
Train all subject and style LoRAs:
nohup bash scripts/sdxl/train_subject_loras.sh &
nohup bash scripts/sdxl/train_style_loras.sh &
The final checkpoint for SDXL is provided in models/hypernet.pth.
If you want to retrain the hypernetwork, run:
nohup bash scripts/sdxl/train_lorarar.sh &
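To inspect the provided checkpoint without retraining, a minimal sketch (assuming models/hypernet.pth is a standard PyTorch checkpoint; the exact key layout is not guaranteed):

import torch

# Load the pretrained hypernetwork checkpoint on CPU for inspection.
ckpt = torch.load("models/hypernet.pth", map_location="cpu")

# The file may be a raw state dict or a dict wrapping one; list the
# top-level keys (and tensor shapes, where present) to see which case applies.
if isinstance(ckpt, dict):
    for key in list(ckpt.keys())[:10]:
        print(key, getattr(ckpt[key], "shape", type(ckpt[key])))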
Run inference on all subject × style combinations in the test set:
bash scripts/sdxl/run_inference.sh
Evaluate the generated images with the MLLM-based protocol:
python mllm_eval.py --generated_imgs_dir $SAVED_IMAGES_PATH --reference_dir=datasets/test_datasets
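For example, assuming the inference script wrote its images to outputs/sdxl (a placeholder path; substitute the directory produced by run_inference.sh):

export SAVED_IMAGES_PATH=outputs/sdxl
python mllm_eval.py --generated_imgs_dir $SAVED_IMAGES_PATH --reference_dir=datasets/test_datasets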
If you find our work useful, please cite:
@InProceedings{shenaj2025lora,
author = {Shenaj, Donald and Bohdal, Ondrej and Ozay, Mete and Zanuttigh, Pietro and Michieli, Umberto},
title = {LoRA.rar: Learning to Merge LoRAs via Hypernetworks for Subject-Style Conditioned Image Generation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2025}
}
Acknowledgement: our code extends https://github.com/mkshing/ziplora-pytorch