CONF Garau_INTERSPEECH_2010/IDIAP
Title: Audio–Visual Synchronisation for Speaker Diarisation
Authors: Garau, Giulia; Dielmann, Alfred; Bourlard, Hervé
Keywords: audio–visual speech synchrony; canonical correlation analysis; multimodal speaker diarisation; multiparty meetings; mutual information
Venue: International Conference on Speech and Language Processing (Interspeech), Makuhari, Japan, September 2010

Abstract: The role of audio–visual speech synchrony for speaker diarisation is investigated in the multiparty meeting domain. We measured both mutual information and canonical correlation on different sets of audio and video features. As acoustic features we considered energy and MFCCs. As visual features we experimented with both motion intensity features, computed on the whole image, and Kanade–Lucas–Tomasi (KLT) motion estimation. KLT allowed us to decompose the motion into its horizontal and vertical components; the vertical component was found to be more reliable for speech synchrony estimation. The mutual information between acoustic energy and the KLT vertical motion of skin pixels not only yielded a 20% relative improvement over an MFCC-only diarisation system, but also outperformed visual features such as motion intensities and head poses.
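The two synchrony measures named in the abstract, mutual information and canonical correlation between audio and motion features, can be sketched as follows. This is a minimal illustration assuming synchronised per-frame signals; the joint-Gaussian MI estimator, the SVD-based CCA computation, and all variable names (energy, vertical_motion, mfcc, motion) are assumptions for the example, not the paper's actual implementation.

```python
import numpy as np

def gaussian_mi(x, y):
    """Mutual information between two 1-D signals under a joint-Gaussian
    assumption: I(X;Y) = -0.5 * log(1 - rho^2), where rho is the Pearson
    correlation. A simple stand-in for the paper's MI estimator."""
    rho = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - rho ** 2)

def canonical_correlations(X, Y, eps=1e-8):
    """Canonical correlations between feature matrices X (T x dx) and
    Y (T x dy), computed as the singular values of the whitened
    cross-covariance matrix."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    T = X.shape[0]
    Cxx = X.T @ X / T + eps * np.eye(X.shape[1])  # regularised auto-covariances
    Cyy = Y.T @ Y / T + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / T                              # cross-covariance

    def inv_sqrt(C):
        # Inverse matrix square root via the eigendecomposition of a
        # symmetric positive-definite matrix.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(K, compute_uv=False)

# Toy usage with synthetic data: score a motion track against audio energy.
rng = np.random.default_rng(0)
T = 1000
energy = rng.standard_normal(T)                                # per-frame acoustic energy
vertical_motion = 0.6 * energy + 0.8 * rng.standard_normal(T)  # correlated motion track
print("MI(energy, vertical motion):", gaussian_mi(energy, vertical_motion))

mfcc = rng.standard_normal((T, 13))   # stand-in for 13-dim MFCC features
motion = rng.standard_normal((T, 2))  # stand-in for horizontal + vertical KLT motion
print("canonical correlations:", canonical_correlations(mfcc, motion))
```

In a diarisation setting, such scores would typically be computed per speaker-associated video region over short windows, with the highest-scoring region attributed to the active speaker; the windowing scheme here is left out for brevity.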