CONF
stephenson00c/IDIAP
Automatic Speech Recognition using Dynamic Bayesian Networks with both Acoustic and Articulatory Variables
Stephenson, Todd Andrew
Bourlard, Hervé
Bengio, Samy
Morris, Andrew
EXTERNAL
https://publications.idiap.ch/attachments/papers/2000/todd-icslp2000.pdf
PUBLIC
https://publications.idiap.ch/index.php/publications/showcite/stephenson00b
Related documents
6th International Conference on Spoken Language Processing: ICSLP 2000 (Interspeech 2000)
2000
Beijing
October 2000
II:951-954
IDIAP-RR 00-19
Current technology for automatic speech recognition (ASR) uses hidden Markov models (HMMs) that recognize speech from the acoustic signal. However, no use is made of the causes of the acoustic signal: the articulators. We present here a dynamic Bayesian network (DBN) model that uses an additional variable to represent the state of the articulators. A particular strength of the system is that, while it uses measured articulatory data during training, it does not need to know these values during recognition. As Bayesian networks are not often used in the speech community, we give an introduction to them. After describing how they can be applied to ASR, we present a system for isolated word recognition that uses articulatory information. Recognition results are given, showing that a system with both acoustics and inferred articulatory positions performs better than a system with acoustics alone.
REPORT
stephenson00b/IDIAP
Automatic Speech Recognition using Dynamic Bayesian Networks with both Acoustic and Articulatory Variables
Stephenson, Todd Andrew
Bourlard, Hervé
Bengio, Samy
Morris, Andrew
EXTERNAL
https://publications.idiap.ch/attachments/reports/2000/rr00-19.pdf
PUBLIC
Idiap-RR-19-2000
2000
IDIAP
In "6th International Conference on Spoken Language Processing: ICSLP 2000 (Interspeech 2000)", 2000
Current technology for automatic speech recognition (ASR) uses hidden Markov models (HMMs) that recognize speech from the acoustic signal. However, no use is made of the causes of the acoustic signal: the articulators. We present here a dynamic Bayesian network (DBN) model that uses an additional variable to represent the state of the articulators. A particular strength of the system is that, while it uses measured articulatory data during training, it does not need to know these values during recognition. As Bayesian networks are not often used in the speech community, we give an introduction to them. After describing how they can be applied to ASR, we present a system for isolated word recognition that uses articulatory information. Recognition results are given, showing that a system with both acoustics and inferred articulatory positions performs better than a system with acoustics alone.