Show simple item record

dc.contributor.author    Krishnan, Anoop
dc.contributor.author    Almadan, Ali
dc.contributor.author    Rattani, Ajita
dc.date.accessioned    2021-06-01T03:35:36Z
dc.date.available    2021-06-01T03:35:36Z
dc.date.issued    2021-02-21
dc.identifier.citation    Krishnan, A., Almadan, A., & Rattani, A. (2021). Probing fairness of mobile ocular biometrics methods across gender on VISOB 2.0 dataset. doi:10.1007/978-3-030-68793-9_16    en_US
dc.identifier.isbn    978-3-030-68792-2
dc.identifier.isbn    978-3-030-68793-9
dc.identifier.uri    https://doi.org/10.1007/978-3-030-68793-9_16
dc.identifier.uri    https://soar.wichita.edu/handle/10057/20071
dc.description    Click on the URL link to access this conference paper on the publisher's website (may not be free).    en_US
dc.description.abstract    Recent research has questioned the fairness of face-based recognition and attribute classification methods (such as gender and race) for dark-skinned people and women. Ocular biometrics in the visible spectrum is an alternative to face biometrics, thanks to its accuracy, security, robustness to facial expression, and ease of use on mobile devices. Amid the recent COVID-19 crisis, ocular biometrics has a further advantage over face biometrics in the presence of a mask. However, the fairness of ocular biometrics has not been studied to date. This first study aims to explore the fairness of ocular-based authentication and gender classification methods across males and females. To this aim, the VISOB 2.0 dataset, along with its gender annotations, is used for the fairness analysis of ocular biometrics methods based on ResNet-50, MobileNet-V2, and lightCNN-29 models. Experimental results suggest equivalent performance for males and females in ocular-based mobile user authentication in terms of genuine match rate (GMR) at lower false match rates (FMRs) and overall area under the curve (AUC). For instance, lightCNN-29 obtained an average AUC of 0.96 for females and 0.95 for males. However, males significantly outperformed females in deep learning based gender classification models using the ocular region.    en_US
dc.description.sponsorship    Rattani is the co-organizer of the IEEE ICIP 2016 VISOB 1.0 and IEEE WCCI 2020 VISOB 2.0 mobile ocular biometric competitions. Authors would like to thank Narsi Reddy and Mark Nguyen for their assistance in dataset processing.    en_US
dc.language.iso    en_US    en_US
dc.publisher    Springer, Cham    en_US
dc.relation.ispartofseries    Lecture Notes in Computer Science;Vol. 12668
dc.subject    Fairness and bias in AI    en_US
dc.subject    Mobile ocular biometrics    en_US
dc.subject    Deep learning    en_US
dc.title    Probing fairness of mobile ocular biometrics methods across gender on VISOB 2.0 dataset    en_US
dc.type    Conference paper    en_US
dc.rights.holder    Copyright © 2021, Springer Nature Switzerland AG    en_US


Files in this item

Files    Size    Format    View

There are no files associated with this item.

This item appears in the following Collection(s)
