CONF Carofilis_INTERSPEECH2025_2025/IDIAP
Title: Better Semi-supervised Learning for Multi-domain ASR Through Incremental Retraining and Data Filtering
Authors: Carofilis, Andrés; Rangappa, Pradeep; Madikeri, Srikanth; Kumar, Shashi; Burdisso, Sergio; Prakash, Jeena; Villatoro-Tello, Esaú; Motlicek, Petr; Sharma, Bidisha; Hacioğlu, Kadri; Venkatesan, Shankar; Vyas, Saurabh; Stolcke, Andreas
EXTERNAL (PUBLIC): https://publications.idiap.ch/attachments/papers/2025/Carofilis_INTERSPEECH2025_2025.pdf
Venue: Interspeech 2025, Rotterdam, The Netherlands, 2025
Pages: 3618--3622
ISSN: 2958-1796
URL: https://www.isca-archive.org/interspeech_2025/carofilis25_interspeech.pdf
DOI: 10.21437/Interspeech.2025-2601

Abstract: Fine-tuning pretrained ASR models for a specific domain is challenging when labeled data are scarce, but unlabeled audio and labeled data from related domains are often available. We propose an incremental semi-supervised learning pipeline that first integrates a small in-domain labeled set and an auxiliary dataset from a closely related domain, achieving a 4% relative improvement over using no auxiliary data. Filtering based on multi-model consensus or named entity recognition (NER) is then applied to select and iteratively refine pseudo-labels, showing slower performance saturation than random selection. Evaluated on the multi-domain Wow call-center and Fisher English corpora, the pipeline outperforms single-step fine-tuning. Consensus-based filtering performs best, yielding up to 22.3% relative improvement on Wow and 24.8% on Fisher over single-step fine-tuning with random selection. NER is the second-best filter, offering competitive performance at lower computational cost.
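
The abstract's multi-model consensus filtering can be illustrated with a minimal Python sketch. This is not the paper's implementation; it assumes one common variant of the idea, in which an utterance's pseudo-label is kept only when transcripts from several ASR models agree within a word error rate (WER) threshold. All identifiers here (consensus_filter, max_disagreement, the example utterances) are hypothetical.

    from itertools import combinations

    def wer(ref_words, hyp_words):
        # Word error rate via standard single-row Levenshtein DP.
        d = list(range(len(hyp_words) + 1))
        for i, r in enumerate(ref_words, 1):
            prev, d[0] = d[0], i
            for j, h in enumerate(hyp_words, 1):
                cur = min(d[j] + 1,          # delete reference word
                          d[j - 1] + 1,      # insert hypothesis word
                          prev + (r != h))   # substitute (or match)
                prev, d[j] = d[j], cur
        return d[-1] / max(len(ref_words), 1)

    def consensus_filter(hypotheses, max_disagreement=0.25):
        # hypotheses: dict of utterance id -> list of transcripts,
        # one per ASR model. Keeps an utterance when the maximum
        # pairwise WER across model outputs stays below the threshold;
        # the first model's transcript is used as the pseudo-label.
        # (Threshold and pseudo-label choice are assumptions, not
        # values from the paper.)
        selected = []
        for utt_id, hyps in hypotheses.items():
            words = [h.split() for h in hyps]
            disagreement = max(
                (wer(a, b) for a, b in combinations(words, 2)),
                default=0.0,
            )
            if disagreement <= max_disagreement:
                selected.append((utt_id, hyps[0]))
        return selected

    # Example: three models transcribe two unlabeled utterances.
    # utt1 nearly agrees and is kept; utt2 diverges and is dropped.
    hyps = {
        "utt1": ["call the help desk", "call the help desk",
                 "call a help desk"],
        "utt2": ["refund my order", "we found my border",
                 "refund the order"],
    }
    print(consensus_filter(hyps))  # -> [('utt1', 'call the help desk')]

In an incremental pipeline like the one the abstract describes, a filter of this kind would plausibly be re-run after each retraining round, so that the refined models produce new hypotheses and the surviving pseudo-labels shift accordingly; the exact consensus criterion and schedule used in the paper are given in the full text.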