Idiap Research Institute
Bayesian Parameter-Efficient Fine-Tuning for Overcoming Catastrophic Forgetting
Type of publication: Journal paper
Citation: Chen_TASLP_2024
Publication status: Accepted
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Year: 2024
DOI: 10.1109/TASLP.2024.3463395
Abstract: We are motivated primarily by the adaptation of text-to-speech synthesis models; however, we argue that more generic parameter-efficient fine-tuning (PEFT) is an appropriate framework for such adaptation. Nevertheless, catastrophic forgetting remains an issue with PEFT, damaging the pre-trained model's inherent capabilities. We demonstrate that existing Bayesian learning techniques can be applied to PEFT to prevent catastrophic forgetting, as long as the parameter shift of the fine-tuned layers can be calculated differentiably. In a principled series of experiments on language modeling and speech synthesis tasks, we utilize established Laplace approximations, including diagonal and Kronecker-factored approaches, to regularize PEFT with low-rank adaptation (LoRA), and compare their performance in preserving pre-training knowledge. Our results demonstrate that our methods can overcome catastrophic forgetting without degrading fine-tuning performance, and that the Kronecker-factored approximation preserves pre-training knowledge better than the diagonal one.
Keywords: Bayesian transfer learning, catastrophic forgetting, Laplace approximation, parameter-efficient fine-tuning
Projects: Idiap, NAST
Authors: Chen, Haolin; Garner, Philip N.
Attachments
  • Chen_TASLP_2024.pdf
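
The abstract describes regularizing the LoRA parameter shift with Laplace approximations of the pre-training posterior. As a rough illustration only (not the paper's implementation), the following PyTorch sketch shows both a diagonal and a Kronecker-factored quadratic penalty on the shift ΔW = (α/r)·BA. The class and function names are hypothetical, and the Fisher factors are assumed to have been estimated beforehand on pre-training data.

```python
# Illustrative sketch, not the authors' code: Laplace penalties on the
# differentiable LoRA parameter shift. `LoRALinear`, `diag_laplace_penalty`,
# and `kfac_laplace_penalty` are hypothetical names; the Fisher estimates
# (`fisher_diag`, `a_cov`, `g_cov`) are assumed precomputed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W + (alpha/r) * B A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.scale = alpha / r
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def delta_w(self) -> torch.Tensor:
        # The parameter shift of the fine-tuned layer, a differentiable
        # function of A and B -- the property the penalties below rely on.
        return self.scale * (self.B @ self.A)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + F.linear(x, self.delta_w())


def diag_laplace_penalty(layer: LoRALinear, fisher_diag: torch.Tensor,
                         lam: float) -> torch.Tensor:
    """Diagonal Laplace: (lam/2) * sum_i F_ii * dW_i^2.

    `fisher_diag` is a diagonal Fisher estimate shaped like the base weight.
    """
    return 0.5 * lam * (fisher_diag * layer.delta_w() ** 2).sum()


def kfac_laplace_penalty(layer: LoRALinear, a_cov: torch.Tensor,
                         g_cov: torch.Tensor, lam: float) -> torch.Tensor:
    """Kronecker-factored Laplace: with F approximated as a_cov (x) g_cov,
    the quadratic form vec(dW)^T F vec(dW) reduces to tr(dW^T G dW A),
    where A = a_cov (input covariance) and G = g_cov (output-gradient
    covariance) for this layer.
    """
    dw = layer.delta_w()
    return 0.5 * lam * torch.trace(dw.T @ g_cov @ dw @ a_cov)
```

In this sketch the penalty is simply added to the task loss during fine-tuning, e.g. `loss = task_loss + diag_laplace_penalty(layer, fisher_diag, lam)`, so that gradients flow through ΔW into the LoRA factors A and B while the pre-trained weights stay frozen.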