Here is the implementation of our ACL 2025 paper "Adapt Once, Thrive with Updates: Transferable Parameter-Efficient Fine-Tuning on Evolving Base Models".

## Training Environment (trans-peft)

```bash
pip install -r requirements-train.txt
cd peft
pip install -e .
cd ../transformers
pip install -e .
```
## Inference Environment (math_infer)

```bash
pip install -r requirements-infer.txt
cd peft
pip install -e .
```

## Trans-PEFT

```bash
bash ./scripts/trans-peft-qwen.sh
```

## Direct Transfer

```bash
bash ./scripts/direct-trans-qwen.sh
```

## Fine-tune

```bash
bash ./scripts/vanilla-ft-qwen.sh
```

## Citation

```bibtex
@misc{gu2025adaptoncethriveupdates,
      title={Adapt Once, Thrive with Updates: Transferable Parameter-Efficient Fine-Tuning on Evolving Base Models},
      author={Naibin Gu and Peng Fu and Xiyu Liu and Ke Ma and Zheng Lin and Weiping Wang},
      year={2025},
      eprint={2506.06844},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.06844},
}
```
## Acknowledgements

This repo benefits from PEFT, LLM-Adapters, MoRA, ReLoRA, and PiSSA. Thanks for their great work!