Crossmodal Matching of Speakers using Lip and Voice Features in Temporally Non-overlapping Audio and Video Streams
Type of publication: | Conference paper |
Citation: | Roy_ICPR2010_2010 |
Booktitle: | 20th International Conference on Pattern Recognition, Istanbul, Turkey |
Year: | 2010 |
Month: | August |
Location: | Istanbul, Turkey |
Organization: | International Association for Pattern Recognition (IAPR) |
Crossref: | Roy_Idiap-RR-13-2010 |
Abstract: | Person identification using audio (speech) and visual (facial appearance, static or dynamic) modalities, either independently or jointly, is a thoroughly investigated problem in pattern recognition. In this work, we explore a novel task: person identification in a cross-modal scenario, i.e., matching the speaker in an audio recording to the same speaker in a video recording, where the two recordings were made during different sessions, using speaker-specific information that is common to the audio and video modalities. Several recent psychological studies have shown that humans can indeed perform this task with accuracy significantly above chance. Here we propose two systems that solve this task comparably well using purely pattern recognition techniques. We hypothesize that such systems could be put to practical use in multimodal biometric and surveillance systems. |
Keywords: | |
Projects: | Idiap, MOBIO, SNSF-MULTI |
Authors: | |
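
The abstract above describes the matching task only at a high level, and the two systems proposed in the paper are not detailed in this record. As a minimal illustrative sketch, assuming per-recording voice features (e.g. averaged MFCCs) and lip features (e.g. averaged lip-shape descriptors) are already extracted, the Python code below matches a probe audio recording to a gallery of candidate video recordings by learning a shared subspace with regularized canonical correlation analysis (CCA) and ranking candidates by cosine similarity. The function names, feature dimensions, and random placeholder data are assumptions made purely for illustration; this is not the paper's actual method.

import numpy as np

def cca_fit(audio, video, dim=5, reg=1e-3):
    """Regularized linear CCA on paired per-recording feature vectors.
    Returns the modality means and projection matrices mapping voice and
    lip features into a shared dim-dimensional space."""
    am, vm = audio.mean(axis=0), video.mean(axis=0)
    A, V = audio - am, video - vm
    n = len(A)
    Caa = A.T @ A / n + reg * np.eye(A.shape[1])
    Cvv = V.T @ V / n + reg * np.eye(V.shape[1])
    Cav = A.T @ V / n
    La, Lv = np.linalg.cholesky(Caa), np.linalg.cholesky(Cvv)
    # Cross-covariance of the whitened variables, then SVD for the canonical directions.
    M = np.linalg.solve(La, Cav) @ np.linalg.inv(Lv).T
    U, _, Wt = np.linalg.svd(M)
    Wa = np.linalg.solve(La.T, U[:, :dim])
    Wv = np.linalg.solve(Lv.T, Wt.T[:, :dim])
    return am, vm, Wa, Wv

def match_audio_to_videos(audio_probe, video_gallery, am, vm, Wa, Wv):
    """Rank candidate video recordings by cosine similarity to the audio
    probe in the shared space; return the index of the best match."""
    a = (audio_probe - am) @ Wa
    G = (video_gallery - vm) @ Wv
    a /= np.linalg.norm(a) + 1e-12
    G /= np.linalg.norm(G, axis=1, keepdims=True) + 1e-12
    return int(np.argmax(G @ a))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder training data: paired voice/lip summaries from the same recordings.
    audio_train = rng.normal(size=(200, 20))
    video_train = rng.normal(size=(200, 12))
    am, vm, Wa, Wv = cca_fit(audio_train, video_train)
    # One probe audio recording against a gallery of 10 candidate videos.
    probe = rng.normal(size=20)
    gallery = rng.normal(size=(10, 12))
    print("best matching video index:", match_audio_to_videos(probe, gallery, am, vm, Wa, Wv))

CCA is used here only because it is a common baseline for aligning two feature spaces; any projection learned from paired audio-video data could stand in for it in this sketch.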