An Acoustic Model Based on Kullback-Leibler Divergence for Posterior Features
| Type of publication: | Idiap-RR |
| Citation: | aradilla:rr06-60 |
| Number: | Idiap-RR-60-2006 |
| Year: | 2006 |
| Institution: | IDIAP |
| Abstract: | This paper investigates the use of features based on posterior probabilities of subword units such as phonemes. These features are typically transformed when used as inputs for a hidden Markov model with a mixture of Gaussians as the emission distribution (HMM/GMM). In this work, we introduce a novel acoustic model that avoids the Gaussian assumption and directly uses posterior features without any transformation. This model is described by a finite state machine where each state is characterized by a target distribution, and the cost function associated with each state is given by the Kullback-Leibler (KL) divergence between its target distribution and the posterior features. Furthermore, the hybrid HMM/ANN system can be seen as a particular case of this KL-based model where the state target distributions are predefined. A training method is also presented that minimizes the KL divergence between the state target distributions and the posterior features. |
| Userfields: | ipdmembership={speech}, |
| Keywords: | |
| Projects: | Idiap |
| Authors: | |
| Crossref by: | aradilla:icassp:2007 |
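
The abstract describes a state-level cost given by the KL divergence between a state's target distribution and the observed posterior features. Below is a minimal, hypothetical sketch of that local score: the function names, the example values, and the chosen direction of the divergence are illustrative assumptions, not the paper's exact formulation, which also covers how the target distributions are trained.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions (e.g. phoneme posteriors).

    A small epsilon avoids log(0); both vectors are renormalized.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def state_cost(target, posterior_frame):
    """Local cost of one frame for one state: the KL divergence between
    the state's target distribution and the posterior feature vector.
    (The direction of the divergence here is an illustrative choice.)"""
    return kl_divergence(target, posterior_frame)

# Hypothetical example: 3 phoneme classes, one state whose target
# distribution concentrates on class 0, scored against two frames.
target = np.array([0.8, 0.1, 0.1])
frames = np.array([
    [0.7, 0.2, 0.1],   # close to the target -> low cost
    [0.1, 0.1, 0.8],   # far from the target -> high cost
])
for t, frame in enumerate(frames):
    print(f"frame {t}: KL cost = {state_cost(target, frame):.3f}")
```

Summing such per-frame costs along a state sequence gives a path score that can replace the Gaussian-mixture likelihood in the usual HMM decoding machinery, which is the role the abstract assigns to the KL-based model.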