CONF
amurthy-interspeech-17/IDIAP
Semi-supervised Learning with Semantic Knowledge Extraction for Improved Speech Recognition in Air Traffic Control
Srinivasamurthy, Ajay
Motlicek, Petr
Himawan, Ivan
Szaszak, Gyorgy
Oualil, Youssef
Helmke, Hartmut
EXTERNAL
https://publications.idiap.ch/attachments/papers/2017/amurthy-interspeech-17.pdf
PUBLIC
https://publications.idiap.ch/index.php/publications/showcite/Srinivasamurthy_Idiap-RR-21-2017
Related documents
Proceedings of Interspeech 2017
Stockholm, Sweden
2017
2406-2410
http://dx.doi.org/10.21437/Interspeech.2017-1446
doi
Automatic Speech Recognition (ASR) can introduce higher levels of automation into Air Traffic Control (ATC), where spoken language is still the predominant form of communication. While ATC uses standard phraseology and a limited vocabulary, speech recognition systems must be adapted to the local acoustic conditions and vocabulary of each airport to reach optimal performance. Because ATC systems operate continuously, a large and growing amount of untranscribed speech data is available, enabling semi-supervised learning methods to build and adapt ASR models. In this paper, we first identify the challenges in building ASR systems for specific ATC areas and propose to utilize out-of-domain data to build baseline ASR models. We then explore different methods of selecting data for adapting the baseline models by exploiting the continuously growing pool of untranscribed data, and develop a basic approach capable of exploiting semantic representations of ATC commands. When adapting ASR models to different ATC conditions in a semi-supervised manner, we achieve relative improvements in both word error rate (23.5%) and concept error rate (7%).
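The abstract's core idea, selecting automatically transcribed utterances to adapt a baseline ASR model, is commonly realized with confidence-based filtering. The sketch below is a minimal, hypothetical illustration of that idea; the function name, data layout, and the 0.9 threshold are illustrative assumptions, not the paper's actual data-selection method.

```python
# Hedged sketch: confidence-based selection of automatically transcribed
# utterances for semi-supervised ASR adaptation. All names and the threshold
# are illustrative assumptions, not the implementation from the paper.

def select_for_adaptation(hypotheses, threshold=0.9):
    """Keep utterances whose average per-word confidence exceeds threshold.

    hypotheses: list of (utt_id, [(word, confidence), ...]) pairs,
                e.g. from a first-pass decode with the baseline model
    returns:    list of (utt_id, transcript) pairs accepted for adaptation
    """
    selected = []
    for utt_id, words in hypotheses:
        if not words:
            continue  # skip empty decodes
        avg_conf = sum(conf for _, conf in words) / len(words)
        if avg_conf >= threshold:
            transcript = " ".join(word for word, _ in words)
            selected.append((utt_id, transcript))
    return selected

# Example: two decoded ATC-style utterances with per-word confidences
hyps = [
    ("utt1", [("turn", 0.97), ("left", 0.95), ("heading", 0.93), ("two", 0.96)]),
    ("utt2", [("cleared", 0.60), ("to", 0.55), ("land", 0.58)]),
]
print(select_for_adaptation(hyps))  # only utt1 clears the 0.9 threshold
```

Utterances that pass the filter are treated as if they were manually transcribed and added to the adaptation set; low-confidence decodes are discarded rather than risk reinforcing recognition errors.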
REPORT
Srinivasamurthy_Idiap-RR-21-2017/IDIAP
Semi-supervised Learning with Semantic Knowledge Extraction for Improved Speech Recognition in Air Traffic Control
Srinivasamurthy, Ajay
Motlicek, Petr
Himawan, Ivan
Szaszak, Gyorgy
Oualil, Youssef
Helmke, Hartmut
EXTERNAL
https://publications.idiap.ch/attachments/reports/2017/Srinivasamurthy_Idiap-RR-21-2017.pdf
PUBLIC
Idiap-RR-21-2017
2017
Idiap
September 2017
Automatic Speech Recognition (ASR) can introduce higher levels of automation into Air Traffic Control (ATC), where spoken language is still the predominant form of communication. While ATC uses standard phraseology and a limited vocabulary, speech recognition systems must be adapted to the local acoustic conditions and vocabulary of each airport to reach optimal performance. Because ATC systems operate continuously, a large and growing amount of untranscribed speech data is available, enabling semi-supervised learning methods to build and adapt ASR models. In this paper, we first identify the challenges in building ASR systems for specific ATC areas and propose to utilize out-of-domain data to build baseline ASR models. We then explore different methods of selecting data for adapting the baseline models by exploiting the continuously growing pool of untranscribed data, and develop a basic approach capable of exploiting semantic representations of ATC commands. When adapting ASR models to different ATC conditions in a semi-supervised manner, we achieve relative improvements in both word error rate (23.5%) and concept error rate (7%).