Multilingual bottleneck features for subword modeling in zero-resource languages
Type of publication: Conference paper
Citation: Hermann_INTERSPEECH_2018
Publication status: Published
Booktitle: Proc. Interspeech
Year: 2018
Month: September
Pages: 2668-2672
DOI: 10.21437/Interspeech.2018-2334
Abstract: How can we effectively develop speech technology for languages where no transcribed data is available? Many existing approaches use no annotated resources at all, yet it makes sense to leverage information from large annotated corpora in other languages, for example in the form of multilingual bottleneck features (BNFs) obtained from a supervised speech recognition system. In this work, we evaluate the benefits of BNFs for subword modeling (feature extraction) in six unseen languages on a word discrimination task. First we establish a strong unsupervised baseline by combining two existing methods: vocal tract length normalisation (VTLN) and the correspondence autoencoder (cAE). We then show that BNFs trained on a single language already beat this baseline; including up to 10 languages results in additional improvements which cannot be matched by just adding more data from a single language. Finally, we show that the cAE can improve further on the BNFs if high-quality same-word pairs are available.
Keywords: multilingual bottleneck features, subword modeling, unsupervised feature extraction, zero-resource speech technology
Projects: Idiap
Authors: Hermann, Enno; Goldwater, Sharon
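For readers unfamiliar with the correspondence autoencoder (cAE) named in the abstract, below is a minimal sketch of the idea, assuming PyTorch. The layer sizes, the synthetic stand-in for DTW-aligned same-word frame pairs, and the training loop are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal cAE sketch (illustrative only; hyperparameters are placeholders,
# not those used in the paper).
import torch
import torch.nn as nn

class CorrespondenceAutoencoder(nn.Module):
    """Encoder-decoder trained to reconstruct one frame of an aligned
    same-word pair from the other; the encoder output is then used as
    the learned frame-level feature representation."""
    def __init__(self, input_dim=39, hidden_dim=100, feature_dim=39):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, feature_dim), nn.Tanh(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Synthetic stand-in for aligned frame pairs from same-word pairs
# (in practice these come from DTW alignments of discovered or given pairs).
x_a = torch.randn(1024, 39)              # frames from one word token
x_b = x_a + 0.1 * torch.randn_like(x_a)  # aligned frames from the other token

model = CorrespondenceAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    # Key cAE idea: input a frame from one token and reconstruct the
    # corresponding frame from the *other* token, not the input itself.
    loss = loss_fn(model(x_a), x_b)
    loss.backward()
    optimizer.step()

# After training, encoder activations serve as the extracted features.
with torch.no_grad():
    features = model.encoder(x_a)
```

In the paper's setting, such a cAE is applied on top of input features (e.g., the multilingual BNFs), so the quality of the same-word pairs determines how much it can add.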