Improving Generalization of Deepfake Detection by Training for Attribution
Type of publication: | Conference paper |
Citation: | Jain_MMSP_2021 |
Publication status: | Published |
Booktitle: | International Workshop on Multimedia Signal Processing |
Year: | 2021 |
Month: | October |
Abstract: | Recent advances in automated video and audio editing tools, generative adversarial networks (GANs), and social media allow the creation and fast dissemination of high-quality tampered videos, which are commonly called deepfakes. Typically, in these videos, a face is automatically swapped with the face of another person. The simplicity and accessibility of tools for generating deepfakes pose a significant technical challenge for their detection and filtering. In response to the threat, several large datasets of deepfake videos and various methods to detect them have been proposed recently. However, the proposed methods suffer from over-fitting on the training data and a lack of generalization across different databases and generative approaches. In this paper, we approach deepfake detection by solving the related problem of attribution, where the goal is to distinguish each separate type of deepfake attack. Using publicly available datasets from Google and Jigsaw, FaceForensics++, Celeb-DF, DeepfakeTIMIT, and our own large database DF-Mobio, we demonstrate that models based on XceptionNet and EfficientNet trained for an attribution task generalize better to unseen deepfakes and different datasets, compared to the same models trained for a typical binary classification task. We also demonstrate that by training for attribution with a triplet loss, generalization in the cross-database scenario improves even further compared to the binary system, while performance on the same database degrades only marginally. |
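The triplet loss mentioned in the abstract can be illustrated with a minimal sketch: an anchor embedding is pulled toward a positive sample (same attack type) and pushed away from a negative sample (different attack type) by at least a margin. The function name, Euclidean distance, and margin value below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on embedding vectors:
    max(0, d(anchor, positive) - d(anchor, negative) + margin)."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to same-attack sample
    d_neg = np.linalg.norm(anchor - negative)  # distance to different-attack sample
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: positive lies near the anchor, negative far away,
# so the margin constraint is already satisfied and the loss is zero.
a = np.array([1.0, 0.0])
p = np.array([1.1, 0.0])
n = np.array([-1.0, 0.0])
print(triplet_loss(a, p, n))  # 0.0 (0.1 - 2.0 + 0.2 < 0)
```

Training embeddings this way groups each attack type into its own cluster, which is the property the abstract credits for the improved cross-database generalization.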
Projects: | Idiap Biometrics Center |