CONF Ramet_SLT_2018/IDIAP
CONTEXT-AWARE ATTENTION MECHANISM FOR SPEECH EMOTION RECOGNITION
Ramet, Gaetan; Garner, Philip N.; Baeriswyl, Michael; Lazaridis, Alexandros
EXTERNAL https://publications.idiap.ch/attachments/papers/2018/Ramet_SLT_2018.pdf PUBLIC
IEEE Workshop on Spoken Language Technology (SLT), Athens, Greece, 2018, pp. 126-131
ISBN 978-1-5386-4333-4
URL http://www.slt2018.org/

Abstract: In this work, we study the use of attention mechanisms to enhance the performance of state-of-the-art deep learning models in Speech Emotion Recognition (SER). We introduce a new Long Short-Term Memory (LSTM)-based neural network attention model which is able to take into account the temporal information in speech during the computation of the attention vector. The proposed LSTM-based model is evaluated on the IEMOCAP dataset using a 5-fold cross-validation scheme and achieves 68.8% weighted accuracy on 4 classes, outperforming the state-of-the-art models.
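
The abstract describes an LSTM-based attention model that weights speech frames while taking temporal context into account. The following is a minimal sketch of how such context-aware attention pooling over frame-level features could be implemented in PyTorch; the layer sizes, feature dimension, and module names are illustrative assumptions and do not reproduce the exact architecture of the paper.

```python
# Hypothetical sketch: context-aware attention pooling over LSTM-encoded
# speech frames for 4-class emotion classification. Dimensions are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LSTMAttentionSER(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=128, num_classes=4):
        super().__init__()
        # Bidirectional LSTM encodes the frame sequence.
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # A second LSTM produces the attention scores, so the weight given to
        # each frame depends on temporal context, not only on the frame itself.
        self.attn_lstm = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)
        self.attn_proj = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, feat_dim) frame-level acoustic features
        h, _ = self.encoder(x)                    # (batch, time, 2*hidden)
        a, _ = self.attn_lstm(h)                  # (batch, time, hidden)
        scores = self.attn_proj(a).squeeze(-1)    # (batch, time)
        alpha = F.softmax(scores, dim=1)          # attention weights over time
        context = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)  # weighted sum
        return self.classifier(context)           # (batch, num_classes)


if __name__ == "__main__":
    model = LSTMAttentionSER()
    dummy = torch.randn(8, 300, 40)               # 8 utterances, 300 frames
    print(model(dummy).shape)                     # torch.Size([8, 4])
```

The design point illustrated here is that the attention scores are computed by a recurrent layer rather than a frame-wise feed-forward projection, so the pooling weights can reflect the evolution of the utterance over time.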