CONF  Kumar_ICASSP2025_2025/IDIAP

Title:     XLSR-Transducer: Streaming ASR for Self-Supervised Pretrained Models
Authors:   Kumar, Shashi; Madikeri, Srikanth; Zuluaga-Gomez, Juan; Villatoro-Tello, Esaú; Thorbecke, Iuliia; Motlicek, Petr; E, Manjunath K; Ganapathiraju, Aravind
Keywords:  self-supervised learning; streaming ASR; transformer transducer; XLSR
PDF:       https://publications.idiap.ch/attachments/papers/2025/Kumar_ICASSP2025_2025.pdf (EXTERNAL, PUBLIC)
Related:   https://publications.idiap.ch/index.php/publications/showcite/Kumar_Idiap-RR-08-2024
Venue:     Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Hyderabad, India, 2025. IEEE.
ISSN:      2379-190X
ISBN:      979-8-3503-6874-1
URL:       https://ieeexplore.ieee.org/document/10888110
DOI:       https://doi.org/10.1109/ICASSP49660.2025.10888110

Abstract:  Self-supervised pretrained models exhibit competitive performance in automatic speech recognition (ASR) after fine-tuning, even with limited in-domain supervised data. However, popular pretrained models are not suitable for streaming ASR because they are trained with full attention context. In this paper, we introduce the XLSR-Transducer, which uses the XLSR-53 model as the encoder in a transducer setup. Our experiments on the AMI dataset show that the XLSR-Transducer achieves a 4% absolute WER improvement over Whisper large-v2 and an 8% improvement over a Zipformer transducer trained from scratch. To enable streaming, we investigate different attention masking patterns in the self-attention computation of the transformer layers within the XLSR-53 model. We validate the XLSR-Transducer on AMI and on five languages from CommonVoice under low-resource scenarios. Finally, by introducing attention sinks, we reduce the left context by half while achieving a 12% relative improvement in WER.

REPORT  Kumar_Idiap-RR-08-2024/IDIAP

Title:     XLSR-Transducer: Streaming ASR for Self-Supervised Pretrained Models
Authors:   Kumar, Shashi; Madikeri, Srikanth; Zuluaga-Gomez, Juan; Villatoro-Tello, Esaú; Nigmatulina, Iuliia; Motlicek, Petr; E, Manjunath K; Ganapathiraju, Aravind
PDF:       https://publications.idiap.ch/attachments/reports/2024/Kumar_Idiap-RR-08-2024.pdf (EXTERNAL, PUBLIC)
Report:    Idiap-RR-08-2024, Idiap, August 2024
URL:       https://arxiv.org/abs/2407.04439

Abstract:  Identical to the ICASSP 2025 entry above.
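The abstract describes two mechanisms for making the full-context XLSR-53 encoder streamable: attention masking patterns applied inside the transformer self-attention, and attention sinks that let the explicit left context be shortened. The sketch below is a minimal illustration of how such a chunked mask with sink frames could be built; it is not the paper's code, and all names and defaults (make_streaming_mask, chunk_size, left_chunks, num_sink_frames) are illustrative assumptions.

```python
import torch

def make_streaming_mask(num_frames: int,
                        chunk_size: int,
                        left_chunks: int,
                        num_sink_frames: int = 0) -> torch.Tensor:
    """Boolean [num_frames, num_frames] mask; True = query may attend to key."""
    chunk = torch.arange(num_frames) // chunk_size   # chunk index of each frame
    q_chunk = chunk.unsqueeze(1)                     # [num_frames, 1] query chunks
    k_chunk = chunk.unsqueeze(0)                     # [1, num_frames] key chunks
    # Chunked attention: a query sees its own chunk plus at most
    # `left_chunks` chunks of left context, and no future chunks.
    allowed = (k_chunk <= q_chunk) & (k_chunk >= q_chunk - left_chunks)
    # Attention sinks (assumed placement: the first frames of the utterance):
    # keep them visible to every query so the left context can be halved
    # without losing these anchor positions.
    if num_sink_frames > 0:
        allowed[:, :num_sink_frames] = True
    return allowed

# Example: 12 frames, chunks of 4 frames, 1 chunk of left context, 2 sink frames.
mask = make_streaming_mask(12, chunk_size=4, left_chunks=1, num_sink_frames=2)
# Inside self-attention the mask would be applied before the softmax, e.g.
# scores = scores.masked_fill(~mask, float("-inf"))
```

Because such a mask only constrains self-attention rather than altering the architecture, the same pretrained XLSR-53 weights can be fine-tuned for streaming, which is consistent with the masking-based approach the abstract outlines.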