ProActive DeepFake Detection using GAN-based Visible Watermarking

Authors
Nadimpalli, Aakash Varma
Rattani, Ajita
Issue Date
2024-09-12
Type
Article
Keywords
DeepFakes, Facial manipulations, Proactive deepfake detection
Citation
Aakash Varma Nadimpalli and Ajita Rattani. 2024. ProActive DeepFake Detection using GAN-based Visible Watermarking. ACM Trans. Multimedia Comput. Commun. Appl. 20, 11, Article 344 (September 2024), 27 pages. https://doi.org/10.1145/3625547
Abstract

With the advances in generative adversarial networks (GANs), facial manipulations known as DeepFakes have caused major security risks and raised severe societal concerns. However, popular passive DeepFake detection is an ex-post forensic countermeasure and cannot block the spread of disinformation in advance. Alternatively, precautions such as adding perturbations to the real data, so that the resulting DeepFake output is unnaturally distorted and easily spotted by the human eye, have been introduced as proactive defenses. Recent studies suggest that these existing proactive defenses can be easily bypassed by applying simple image transformation techniques to the perturbed real data and reconstruction techniques to the distorted output. The aim of this article is to propose a novel proactive DeepFake detection technique using GAN-based visible watermarking. To this end, we propose a reconstructive regularization added to the GAN's loss function that embeds a unique watermark at an assigned location of the generated fake image. Thorough experiments on multiple datasets confirm the viability of the proposed approach as a proactive defense mechanism against DeepFakes from the perspective of detection by the human eye. Thus, our proposed watermark-based GANs prevent the abuse of pretrained GANs and smartphone apps, available via online repositories, for creating DeepFakes for malicious purposes. Further, the watermarked DeepFakes can also be detected by state-of-the-art (SOTA) DeepFake detectors. This is critical for applications where automatic DeepFake detectors are used for mass audits, given the huge cost of having human observers examine large amounts of data manually. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
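
A minimal sketch of how such a reconstructive watermark regularization might be wired into a GAN generator objective (PyTorch). The function names, the alpha blending factor, the lambda weight, and the fixed watermark location are illustrative assumptions, not the paper's exact formulation:

import torch
import torch.nn.functional as F

def reconstructive_regularization(fake, watermark, top, left, alpha=0.6):
    # fake: (B, C, H, W) generator output; watermark: (C, wh, ww) visible mark.
    # L1 penalty pushing the assigned region of each generated image toward
    # an alpha-blended copy of the watermark, i.e. a visible mark at (top, left).
    _, wh, ww = watermark.shape
    region = fake[:, :, top:top + wh, left:left + ww]
    target = alpha * watermark.unsqueeze(0) + (1.0 - alpha) * region.detach()
    return F.l1_loss(region, target)

def generator_loss(d_fake_logits, fake, watermark, top, left, lam=10.0):
    # Standard non-saturating adversarial loss plus the watermark regularizer;
    # lam trades off image realism against watermark fidelity (assumed value).
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    rec = reconstructive_regularization(fake, watermark, top, left)
    return adv + lam * rec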

Description
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Publisher
Association for Computing Machinery
Journal
ACM Transactions on Multimedia Computing, Communications and Applications
ISSN
1551-6857