CONF
Lecorve_INTERSPEECH_2012/IDIAP
Supervised and unsupervised Web-based language model domain adaptation
Lecorvé, Gwénolé
Dines, John
Hain, Thomas
Motlicek, Petr
ASR
Automatic Speech Recognition
domain adaptation
Language Models
supervision
Web data
https://publications.idiap.ch/index.php/publications/showcite/Lecorve_Idiap-RR-22-2012
Related documents
Proceedings of Interspeech
Portland, Oregon, USA
2012
to appear
Domain language model adaptation consists of re-estimating the probabilities of a baseline LM so that it better matches the specifics of a given broad topic of interest. A common strategy is to retrieve adaptation texts from the Web based on a domain-representative seed text. In this paper, we study how the selection of this seed text influences the adaptation process and the performance of the resulting adapted language models in automatic speech recognition. More precisely, the goal of this original study is to analyze how our Web-based adaptation approach differs between the supervised case, in which the seed text is manually generated, and the unsupervised case, in which the seed text is an automatic transcript. Experiments were carried out on data sourced from a real-world use case, specifically videos produced for a university YouTube channel. Results show that our approach is quite robust, since unsupervised adaptation provides performance similar to the supervised case in terms of overall perplexity and word error rate.
REPORT
Lecorve_Idiap-RR-22-2012/IDIAP
Supervised and unsupervised Web-based language model domain adaptation
Lecorvé, Gwénolé
Dines, John
Hain, Thomas
Motlicek, Petr
ASR
Automatic Speech Recognition
domain adaptation
Language Models
supervision
Web data
EXTERNAL
https://publications.idiap.ch/attachments/reports/2012/Lecorve_Idiap-RR-22-2012.pdf
PUBLIC
Idiap-RR-22-2012
2012
Idiap
July 2012
Domain language model adaptation consists of re-estimating the probabilities of a baseline LM so that it better matches the specifics of a given broad topic of interest. A common strategy is to retrieve adaptation texts from the Web based on a domain-representative seed text. In this paper, we study how the selection of this seed text influences the adaptation process and the performance of the resulting adapted language models in automatic speech recognition. More precisely, the goal of this original study is to analyze how our Web-based adaptation approach differs between the supervised case, in which the seed text is manually generated, and the unsupervised case, in which the seed text is an automatic transcript. Experiments were carried out on data sourced from a real-world use case, specifically videos produced for a university YouTube channel. Results show that our approach is quite robust, since unsupervised adaptation provides performance similar to the supervised case in terms of overall perplexity and word error rate.