Idiap Research Institute
Parameter-Efficient Tuning With Adaptive Bottlenecks For Automatic Speech Recognition
Type of publication: Conference paper
Citation: Vanderreydt_ASRU2023_2023
Publication status: Accepted
Booktitle: Proc. of the IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU'23
Year: 2023
Abstract: Transfer learning from large multilingual pretrained models, like XLSR, has become the new paradigm for Automatic Speech Recognition (ASR). Considering their ever-increasing size, fine-tuning all the weights has become impractical when the computing budget is limited. Adapters are lightweight trainable modules inserted between layers while the pretrained part is kept frozen. They form a parameter-efficient fine-tuning method, but they still require a large bottleneck size to match standard fine-tuning performance. In this paper, we propose ABSADAPTER, a method to further reduce the parameter budget for equal task performance. Specifically, ABSADAPTER uses an Adaptive Bottleneck Scheduler to redistribute the adapter's weights to the layers that need adaptation the most. By training only 8% of the XLSR model, ABSADAPTER achieves close to standard fine-tuning performance on a domain-shifted Air-Traffic Communication (ATC) ASR task.
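Illustration (not part of the record): the abstract describes bottleneck adapters as small trainable residual modules inserted into a frozen pretrained encoder, with ABSADAPTER assigning different bottleneck widths to different layers. The following minimal PyTorch sketch shows a standard bottleneck adapter and a hypothetical per-layer width schedule; the model width, layer count, and schedule are illustrative assumptions, and the paper's actual Adaptive Bottleneck Scheduler is not reproduced here.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter: down-project, non-linearity, up-project."""
    def __init__(self, d_model: int, bottleneck: int):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()
        # Near-identity initialization so the frozen model's behavior
        # is unchanged before adapter training starts.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Hypothetical schedule: wider bottlenecks for layers assumed to need
# more adaptation, narrower ones elsewhere (values are illustrative,
# not the paper's learned allocation).
d_model, n_layers = 1024, 24
sizes = [8] * 12 + [64] * 12
adapters = nn.ModuleList(BottleneckAdapter(d_model, b) for b in sizes)

# The pretrained encoder stays frozen; only adapter weights are trained.
trainable = sum(p.numel() for p in adapters.parameters())
print(f"trainable adapter parameters: {trainable:,}")
```

In this sketch, varying the bottleneck width per layer is what lets a fixed parameter budget be concentrated where adaptation helps most, which is the redistribution idea the abstract attributes to the Adaptive Bottleneck Scheduler.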
Keywords:
Projects: Idiap, CRITERIA
Authors: Vanderreydt, Geoffroy; Prasad, Amrutha; Khalil, Driss; Madikeri, Srikanth; Demuynck, Kris; Motlicek, Petr
Attachments
  • Vanderreydt_ASRU2023_2023.pdf