Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups
Citation
Ramachandran, S., Rattani, A. (2023). Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups. In: Rousseau, JJ., Kapralos, B. (eds) Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges. ICPR 2022. Lecture Notes in Computer Science, vol 13645. Springer, Cham. https://doi.org/10.1007/978-3-031-37731-0_40
Abstract
Published studies have suggested that automated face-based gender classification algorithms are biased across gender-race groups. Specifically, unequal accuracy rates have been obtained for women and dark-skinned people. The vision community has developed several strategies to mitigate the bias of gender classifiers. However, the efficacy of these mitigation strategies has been demonstrated for only a limited number of races, mostly Caucasian and African-American. Further, these strategies often entail a trade-off between bias and classification accuracy. To further advance the state of the art, we leverage the power of generative views, structured learning, and evidential learning to mitigate gender classification bias. Through extensive experimental validation, we demonstrate the superiority of our bias mitigation strategy in improving classification accuracy and reducing bias across gender-race groups, resulting in state-of-the-art performance in intra- and cross-dataset evaluations.
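To illustrate the general recipe the abstract names, the following is a minimal, hypothetical sketch (not the authors' code, which is only available via the DOI link): a gender classifier is trained on a face image together with "generative views" of it, using an evidential (Dirichlet-based) loss rather than plain softmax cross-entropy. The helper names (`GenderClassifier`, `make_generative_views`) and the toy CNN are illustrative assumptions; in practice the views would come from a generative model such as a GAN's latent-space perturbations.

```python
# Hedged sketch only: combines generative-view augmentation with a standard
# evidential-learning loss for a 2-class (gender) classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GenderClassifier(nn.Module):
    """Toy CNN backbone with an evidence head producing non-negative class evidence."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        # Softplus keeps the output non-negative, as required for Dirichlet evidence.
        return F.softplus(self.head(self.backbone(x)))


def evidential_loss(evidence, targets, num_classes: int = 2):
    """Expected mean-squared-error loss under a Dirichlet(alpha) over class
    probabilities (the standard evidential-learning formulation; a simplification
    of whatever loss the paper actually uses)."""
    alpha = evidence + 1.0                      # Dirichlet parameters
    strength = alpha.sum(dim=1, keepdim=True)   # total evidence per sample
    p_hat = alpha / strength                    # expected class probabilities
    y = F.one_hot(targets, num_classes).float()
    err = ((y - p_hat) ** 2).sum(dim=1)
    var = (p_hat * (1.0 - p_hat) / (strength + 1.0)).sum(dim=1)
    return (err + var).mean()


def make_generative_views(images, num_views: int = 2):
    """Placeholder for generative-view synthesis (e.g., GAN latent perturbations).
    Here it returns jittered copies so the sketch runs end to end."""
    return [images + 0.01 * torch.randn_like(images) for _ in range(num_views)]


if __name__ == "__main__":
    model = GenderClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    images = torch.randn(8, 3, 64, 64)   # stand-in face crops
    labels = torch.randint(0, 2, (8,))   # stand-in gender labels

    # One training step: average the evidential loss over the original image
    # and its generative views.
    views = [images] + make_generative_views(images)
    loss = sum(evidential_loss(model(v), labels) for v in views) / len(views)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"training-step loss: {loss.item():.4f}")
```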
Description
Click on the DOI link to access this conference paper (may not be free).
Presented at the 26th International Conference on Pattern Recognition (ICPR 2022), August 21-25, 2022.