Idiap Research Institute
Custom attribution loss for improving generalization and interpretability of deepfake detection
Type of publication: Conference paper
Citation: Korshunov_ICASSP_2022
Publication status: Accepted
Booktitle: International Conference on Acoustics, Speech, and Signal Processing
Year: 2022
Month: May
Abstract: The simplicity and accessibility of tools for generating deepfakes pose a significant technical challenge for their detection and filtering. Many recently proposed methods for deepfake detection follow a 'black box' approach and therefore lack any additional information about the nature of fake videos beyond the fake or not fake labels. In this paper, we approach deepfake detection by solving the related problem of attribution, where the goal is to distinguish each type of deepfake attack. We design a training approach with customized Triplet and ArcFace losses that allows us to improve the accuracy of deepfake detection on several publicly available datasets, including Google and Jigsaw, FaceForensics++, HifiFace, DeeperForensics, Celeb-DF, DeepfakeTIMIT, and DF-Mobio. Using the Xception net as an example underlying architecture, we also demonstrate that a model trained for attribution can be used as a tool to analyze the deepfake space and to compare it with the space of original videos.
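This record gives no implementation details beyond the names of the losses. The snippet below is a minimal PyTorch sketch of how an ArcFace (additive angular margin) head can be combined with a standard triplet loss for attack attribution on top of backbone embeddings (the paper uses an Xception net as the backbone). The class name ArcFaceHead, the scale/margin values, the loss weight, and the dummy tensors are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin (ArcFace) classification head over attack classes."""
    def __init__(self, embed_dim, num_classes, scale=30.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embed_dim))
        nn.init.xavier_uniform_(self.weight)
        self.scale = scale
        self.margin = margin

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class centres.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Add the angular margin only to the ground-truth class logits.
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).bool()
        logits = torch.where(one_hot, torch.cos(theta + self.margin), cosine)
        return F.cross_entropy(self.scale * logits, labels)

# Hypothetical training step: in practice the embeddings would come from the backbone.
embed_dim, num_classes, batch = 512, 6, 8
arcface = ArcFaceHead(embed_dim, num_classes)
triplet = nn.TripletMarginLoss(margin=0.2)

anchor   = torch.randn(batch, embed_dim)   # e.g. embeddings of one attack type
positive = torch.randn(batch, embed_dim)   # embeddings of the same attack type
negative = torch.randn(batch, embed_dim)   # embeddings of a different attack type
labels   = torch.randint(0, num_classes, (batch,))

# Combined objective: classification with angular margin plus metric-learning term.
loss = arcface(anchor, labels) + 1.0 * triplet(anchor, positive, negative)
loss.backward()  # in a real setup, gradients also flow into the backbone
```

The 1.0 weighting between the two terms is an arbitrary placeholder; the paper's actual balancing of the losses and margin settings are not reported on this page.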
Keywords:
Projects: Idiap, Biometrics Center
Authors: Korshunov, Pavel; Jain, Anubhav; Marcel, Sébastien
Attachments
  • Korshunov_ICASSP_2022.pdf