CONF
Dines_INTERSPEECH-2_2009/IDIAP
Speech recognition with speech synthesis models by marginalising over decision tree leaves
Dines, John
Saheer, Lakshmi
Liang, Hui
decision trees
speech recognition
speech synthesis
unified models
EXTERNAL
https://publications.idiap.ch/attachments/papers/2009/Dines_INTERSPEECH-2_2009.pdf
PUBLIC
Related documents
https://publications.idiap.ch/index.php/publications/showcite/Dines_Idiap-RR-17-2009
Proceedings of Interspeech
Brighton, U.K.
2009
September 2009
There has been increasing interest in the use of unsupervised adaptation for the personalisation of text-to-speech (TTS) voices, particularly in the context of speech-to-speech translation. This requires that we are able to generate adaptation transforms from the output of an automatic speech recognition (ASR) system. An approach that utilises unified ASR and TTS models would seem to offer an ideal mechanism for the application of unsupervised adaptation to TTS since transforms could be shared between ASR and TTS. Such unified models should use a common set of parameters. A major barrier to such parameter sharing is the use of differing contexts in ASR and TTS. In this paper we propose a simple approach that generates ASR models from a trained set of TTS models by marginalising over the TTS contexts that are not used by ASR. We present preliminary results of our proposed method on a large vocabulary speech recognition task and provide insights into future directions of this work.
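As a rough illustration of the marginalisation described in the abstract, the sketch below collapses the Gaussians of the TTS decision-tree leaves that share a reduced ASR context into a single mixture, one component per leaf. It is a minimal sketch only: the Leaf structure, the occupancy-based weighting, and all names are illustrative assumptions, since the abstract does not specify how the leaf priors are estimated.

# Minimal sketch (illustrative names): build an ASR state output
# distribution by marginalising over TTS decision-tree leaves,
#   p(o | c_ASR) = sum_l P(l | c_ASR) N(o; mu_l, Sigma_l)
from dataclasses import dataclass
import numpy as np

@dataclass
class Leaf:
    mean: np.ndarray   # Gaussian mean of this TTS leaf
    var: np.ndarray    # diagonal covariance of this TTS leaf
    occupancy: float   # assumed occupancy count from TTS training

def marginalise_leaves(leaves):
    """Collapse TTS leaves sharing one ASR context into a GMM with one
    component per leaf, weighted by relative occupancy (an assumed,
    but natural, estimate of P(l | c_ASR))."""
    total = sum(l.occupancy for l in leaves)
    weights = np.array([l.occupancy / total for l in leaves])
    means = np.stack([l.mean for l in leaves])   # (L, D)
    vars_ = np.stack([l.var for l in leaves])    # (L, D)
    return weights, means, vars_

def log_likelihood(obs, weights, means, vars_):
    """log p(o | c_ASR) under the marginalised diagonal-Gaussian mixture."""
    diff = obs - means                            # broadcasts to (L, D)
    ll = -0.5 * np.sum(diff**2 / vars_ + np.log(2.0 * np.pi * vars_), axis=1)
    m = ll.max()                                  # stabilised log-sum-exp
    return float(m + np.log(np.sum(weights * np.exp(ll - m))))

In use, the TTS leaves reachable under the full contexts consistent with a given ASR context (e.g. a triphone) would be grouped first, and log_likelihood would then score observations against the resulting mixture during recognition.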
REPORT
Dines_Idiap-RR-17-2009/IDIAP
Speech recognition with speech synthesis models by marginalising over decision tree leaves
Dines, John
Saheer, Lakshmi
Liang, Hui
EXTERNAL
https://publications.idiap.ch/attachments/reports/2009/Dines_Idiap-RR-17-2009.pdf
PUBLIC
Idiap-RR-17-2009
2009
Idiap
July 2009
There has been increasing interest in the use of unsupervised adaptation for the personalisation of text-to-speech (TTS) voices, particularly in the context of speech-to-speech translation. This requires that we are able to generate adaptation transforms from the output of an automatic speech recognition (ASR) system. An approach that utilises unified ASR and TTS models would seem to offer an ideal mechanism for the application of unsupervised adaptation to TTS since transforms could be shared between ASR and TTS. Such unified models should use a common set of parameters. A major barrier to such parameter sharing is the use of differing contexts in ASR and TTS. In this paper we propose a simple approach that generates ASR models from a trained set of TTS models by marginalising over the TTS contexts that are not used by ASR. We present preliminary results of our proposed method on a large vocabulary speech recognition task and provide insights into future directions of this work.