CONF
Lecorve_JEP_2012/IDIAP
Impact du degré de supervision sur l'adaptation à un domaine d'un modèle de langage à partir du Web
Lecorvé, Gwénolé
Dines, John
Hain, Thomas
Motlicek, Petr
ASR
Automatic Speech Recognition
domain adaptation
Language Models
supervision
Web data
EXTERNAL
https://publications.idiap.ch/attachments/papers/2012/Lecorve_JEP_2012.pdf
PUBLIC
https://publications.idiap.ch/index.php/publications/showcite/Lecorve_Idiap-RR-23-2012
Related documents
Actes de la conference conjointe JEP-TALN-RECITAL 2012
Grenoble, France
1
193-200
2012
ATALA/AFCP
in French
Domain adaptation of a language model aims at re-estimating word sequence probabilities in order to better match the peculiarities of a given broad topic of interest. A common strategy for this task consists of retrieving adaptation texts from the Internet based on a given domain-representative seed text. In this paper, we study the influence of the choice of this seed text on the adaptation process and on the performance of adapted language models in automatic speech recognition. More precisely, the goal of this original study is to analyze the differences between supervised adaptation, in which the seed text is manually generated, and unsupervised adaptation, where the seed text is an automatic transcript. Experiments carried out on videos from a real-world use case show that the differences vary across adaptation scenarios and that the unsupervised approach is globally convincing, especially given its low cost.
REPORT
Lecorve_Idiap-RR-23-2012/IDIAP
Impact du degré de supervision sur l'adaptation à un domaine d'un modèle de langage à partir du Web
Lecorvé, Gwénolé
Dines, John
Hain, Thomas
Motlicek, Petr
ASR
Automatic Speech Recognition
domain adaptation
Language Models
supervision
Web data
EXTERNAL
https://publications.idiap.ch/attachments/reports/2012/Lecorve_Idiap-RR-23-2012.pdf
PUBLIC
Idiap-RR-23-2012
2012
Idiap
July 2012
in French
Domain adaptation of a language model aims at re-estimating word sequence probabilities in order to better match the peculiarities of a given broad topic of interest. A common strategy for this task consists of retrieving adaptation texts from the Internet based on a given domain-representative seed text. In this paper, we study the influence of the choice of this seed text on the adaptation process and on the performance of adapted language models in automatic speech recognition. More precisely, the goal of this original study is to analyze the differences between supervised adaptation, in which the seed text is manually generated, and unsupervised adaptation, where the seed text is an automatic transcript. Experiments carried out on videos from a real-world use case show that the differences vary across adaptation scenarios and that the unsupervised approach is globally convincing, especially given its low cost.