GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection
Nadimpalli, Aakash Varma ; Rattani, Ajita
Issue Date
2023
Type
Conference paper
Keywords
DeepFakes, Fairness and Bias in AI, Facial Analysis
Citation
Nadimpalli, A.V., Rattani, A. (2023). GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection. In: Rousseau, JJ., Kapralos, B. (eds) Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges. ICPR 2022. Lecture Notes in Computer Science, vol 13644. Springer, Cham. https://doi.org/10.1007/978-3-031-37742-6_25
Abstract
Facial forgery by deepfakes has raised severe societal concerns. Several solutions have been proposed by the vision community to combat misinformation on the internet via automated deepfake detection systems. Recent studies have demonstrated that facial-analysis-based deep learning models can discriminate based on protected attributes. For the commercial adoption and massive roll-out of deepfake detection technology, it is vital to evaluate and understand the fairness (the absence of any prejudice or favoritism) of deepfake detectors across demographic variations such as gender and race, as a performance differential between demographic sub-groups would impact millions of people in the disadvantaged sub-group. This paper aims to evaluate the fairness of deepfake detectors across males and females. However, existing deepfake datasets are not annotated with demographic labels to facilitate fairness analysis. To this aim, we manually annotated existing popular deepfake datasets with gender labels and evaluated the performance differential of current deepfake detectors across gender. Our analysis of the gender-labeled versions of the datasets suggests that (a) current deepfake datasets have a skewed distribution across gender, and (b) commonly adopted deepfake detectors obtain unequal performance across gender, with males mostly outperforming females. Finally, we contribute a gender-balanced and annotated deepfake dataset, GBDF, to mitigate the performance differential and to promote research and development towards fairness-aware deepfake detectors. The GBDF dataset is publicly available at: https://github.com/aakash4305/GBDF
Description
Click on the DOI link to access this conference paper (may not be free).
Presented at the 26th International Conference on Pattern Recognition, ICPR 2022, August 21-25, 2022
Publisher
Springer Science and Business Media Deutschland GmbH
Book Title
Pattern Recognition, Computer Vision, and Image Processing
DOI
10.1007/978-3-031-37742-6_25
ISSN
0302-9743
