Idiap Research Institute
Probabilistic Lexical Modeling and Unsupervised Training for Zero-Resourced ASR
Type of publication: Conference paper
Citation: Rasipuram_ASRU_2013
Publication status: Accepted
Booktitle: Proceedings of the IEEE workshop on Automatic Speech Recognition and Understanding
Year: 2013
Month: December
Abstract: Standard automatic speech recognition (ASR) systems rely on transcribed speech, language models, and pronunciation dictionaries to achieve state-of-the-art performance. The unavailability of these resources prevents ASR technology from reaching many languages. In this paper, we propose a novel zero-resourced ASR approach to train acoustic models that uses only a list of probable words from the language of interest. The proposed approach is based on the Kullback-Leibler divergence based hidden Markov model (KL-HMM), grapheme subword units, knowledge of grapheme-to-phoneme mapping, and graphemic constraints derived from the word list. The approach also exploits existing acoustic and lexical resources available in other resource-rich languages. Furthermore, we propose unsupervised adaptation of the KL-HMM acoustic model parameters when untranscribed speech data in the target language is available. We demonstrate the potential of the proposed approach through a simulated study on Greek.
Keywords: Graphemes, Kullback-Leibler divergence based hidden Markov model, phonemes, probabilistic lexical modeling, unsupervised adaptation, zero-resourced speech recognition
Projects: Idiap
Authors: Rasipuram, Ramya
Razavi, Marzieh
Magimai.-Doss, Mathew
Attachments
  • Rasipuram_ASRU_2013.pdf