CONF Palaz_INTERSPEECH_2013/IDIAP
Estimating Phoneme Class Conditional Probabilities from Raw Speech Signal using Convolutional Neural Networks
Palaz, Dimitri; Collobert, Ronan; Magimai.-Doss, Mathew
EXTERNAL: https://publications.idiap.ch/attachments/papers/2013/Palaz_INTERSPEECH_2013.pdf
PUBLIC
Related documents: https://publications.idiap.ch/index.php/publications/showcite/Palaz_Idiap-RR-13-2013
Proceedings of Interspeech 2013

In hybrid hidden Markov model/artificial neural network (HMM/ANN) automatic speech recognition (ASR) systems, phoneme class conditional probabilities are estimated by first extracting acoustic features from the speech signal based on prior knowledge, such as speech perception and/or speech production knowledge, and then modeling those acoustic features with an ANN. Recent advances in machine learning, more specifically in image processing and text processing, have shown that such a divide-and-conquer strategy (i.e., separating the feature extraction and modeling steps) may not be necessary. Motivated by these studies, this paper investigates a novel approach in the framework of convolutional neural networks (CNNs), where the input to the ANN is the raw speech signal and the output is phoneme class conditional probability estimates. On the TIMIT phoneme recognition task, we study different ANN architectures to show the benefit of CNNs and compare the proposed approach against the conventional approach, where spectral-based MFCC features are extracted and modeled by a multilayer perceptron. Our studies show that the proposed approach can yield comparable or better phoneme recognition performance than the conventional approach, indicating that CNNs can learn features relevant for phoneme classification automatically from the raw speech signal.

REPORT Palaz_Idiap-RR-13-2013/IDIAP
Estimating Phoneme Class Conditional Probabilities from Raw Speech Signal using Convolutional Neural Networks
Palaz, Dimitri; Collobert, Ronan; Magimai.-Doss, Mathew
EXTERNAL: https://publications.idiap.ch/attachments/reports/2013/Palaz_Idiap-RR-13-2013.pdf
PUBLIC
Idiap-RR-13-2013, Idiap, April 2013
(Abstract identical to the Interspeech 2013 entry above.)
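Since the abstract describes the pipeline only at a high level, the following is a minimal sketch of the idea, assuming a PyTorch implementation: a 1D CNN consumes a window of raw waveform samples, and an MLP on top of the learned features emits phoneme class conditional probability estimates. All layer counts, kernel widths, channel sizes, window lengths, and the 39-class output (a common folded TIMIT phoneme set) are illustrative assumptions, not the configuration reported in the paper.

```python
# Illustrative sketch only: a 1D CNN mapping a raw speech window to
# phoneme class conditional probabilities, in the spirit of the paper.
# All hyperparameters below are assumptions, not the authors' settings.
import torch
import torch.nn as nn

class RawSpeechCNN(nn.Module):
    def __init__(self, n_classes: int = 39):
        super().__init__()
        # Convolutional stage: learns a filterbank-like representation
        # directly from the waveform instead of hand-crafted MFCCs.
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=129, stride=10),
            nn.MaxPool1d(3),
            nn.Tanh(),
            nn.Conv1d(32, 64, kernel_size=5),
            nn.MaxPool1d(3),
            nn.Tanh(),
        )
        # Classification stage: an MLP over the learned features,
        # ending in (log) class conditional probability estimates.
        self.classifier = nn.Sequential(
            nn.LazyLinear(500),   # input size depends on the window length
            nn.Tanh(),
            nn.Linear(500, n_classes),
            nn.LogSoftmax(dim=-1),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) of raw speech, e.g. a 250 ms window at 16 kHz
        x = self.features(waveform.unsqueeze(1))  # add a channel dimension
        x = x.flatten(start_dim=1)
        return self.classifier(x)                 # log P(phoneme class | window)

# Example: posteriors for a batch of 4 windows of 4000 samples (250 ms at 16 kHz)
model = RawSpeechCNN()
log_probs = model(torch.randn(4, 4000))
print(log_probs.shape)  # torch.Size([4, 39])
```

In a hybrid HMM/ANN system, such per-window posteriors would typically be converted to scaled likelihoods (divided by class priors) and decoded with an HMM; that stage is not shown here.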