Idiap Research Institute
Improving speech embedding using crossmodal transfer learning with audio-visual data
Type of publication: Journal paper
Citation: Le_MTAP_2018
Publication status: Published
Journal: Multimedia Tools and Applications
Volume: 78
Number: 11
Year: 2019
Month: January
Pages: 15681-15704
ISSN: 1380-7501
DOI: 10.1007/s11042-018-6992-3
Abstract: Learning a discriminative voice embedding allows speaker turns to be compared directly and efficiently, which is crucial for tasks such as diarization and verification. This paper investigates several transfer learning approaches to improve a voice embedding using knowledge transferred from a face representation. The main idea of our crossmodal approaches is to constrain the target voice embedding space to share latent attributes with the source face embedding space. The shared latent attributes can be formalized as geometric properties or distribution characteristics between these embedding spaces. We propose four transfer learning approaches belonging to two categories: the first category relies on the structure of the source face embedding space to regularize the speaker turn embedding space at different granularities. The second category, a domain adaptation approach, improves the embedding space of speaker turns by applying a maximum mean discrepancy loss to minimize the disparity between the distributions of the embedded features. Experiments are conducted on the TV news datasets REPERE and ETAPE to demonstrate our methods. Quantitative results on verification and clustering tasks show promising improvement, especially in cases where speaker turns are short or the training data size is limited. The analysis also gives insights into the embedding spaces and shows their potential applications.
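The domain adaptation category in the abstract minimizes a maximum mean discrepancy (MMD) loss between the distributions of the two embedding spaces. As a minimal illustration of that idea (not the paper's implementation), the sketch below computes the standard biased RBF-kernel MMD estimate between two sets of embeddings; the bandwidth `sigma` and the random stand-in embeddings are hypothetical choices for the example.

```python
import numpy as np

def mmd_rbf(x, y, sigma=4.0):
    """Biased squared-MMD estimate between sample sets x and y
    using an RBF kernel. sigma is a hypothetical bandwidth."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances, then RBF kernel values.
        d = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-d / (2 * sigma**2))
    m, n = len(x), len(y)
    return (kernel(x, x).sum() / m**2
            + kernel(y, y).sum() / n**2
            - 2 * kernel(x, y).sum() / (m * n))

# Stand-ins for face and voice embeddings: matched distributions give a
# small MMD; a shifted distribution gives a clearly larger one.
rng = np.random.default_rng(0)
face = rng.normal(0.0, 1.0, (200, 16))
voice = rng.normal(0.0, 1.0, (200, 16))
shifted = voice + 3.0
print(mmd_rbf(face, voice))    # small: distributions match
print(mmd_rbf(face, shifted))  # larger: distributions differ
```

In a transfer setting, a term like `mmd_rbf(face_batch, voice_batch)` would be added to the embedding network's training loss so that gradient descent pulls the voice embedding distribution toward the face embedding distribution.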
Keywords: deep learning, face, metric learning, multimodal identification, speaker, speaker diarization, transfer learning
Projects: Idiap
Authors: Le, Nam
Odobez, Jean-Marc