Benchmarking neural network compression techniques for ocular-based user authentication on smartphones


A. Almadan and A. Rattani, "Benchmarking Neural Network Compression Techniques for Ocular-Based User Authentication on Smartphones," in IEEE Access, vol. 11, pp. 36550-36565, 2023, doi: 10.1109/ACCESS.2023.3265357.


With the unprecedented mobile technology revolution, mobile devices have evolved from being primarily a means of communication into an all-in-one platform. Consequently, an increasing number of individuals access online services for e-commerce and banking via smartphones instead of traditional desktop computers. However, smartphones are misplaced, lost, or stolen more often than other computing devices, demanding effective user authentication mechanisms for device unlocking and secure transactions. Ocular biometrics has garnered significant attention from academia and industry because of its accuracy, security, and ease of use on mobile devices. Several studies have demonstrated the efficacy of deep learning models for ocular-based user authentication on smartphones. However, these high-performing models require enormous storage and computation due to the millions of parameters and operations involved, making their deployment on resource-constrained smartphones challenging. To this end, a handful of studies have proposed compact ocular-based deep-learning models to facilitate on-device deployment. In this paper, we conduct a thorough analysis of existing neural network compression techniques applied both standalone and in combination for ocular-based user authentication. Extensive experimental validation is performed on the two latest large-scale ocular biometric datasets collected using smartphones, namely, the UFPR and VISOB 2.0 datasets. This study benchmarks the results of advanced compression techniques for further research and development of lightweight models for ocular-based user authentication on smartphones.
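The abstract does not enumerate the specific compression techniques evaluated, but two techniques commonly benchmarked in this setting, and often combined, are unstructured magnitude pruning and post-training integer quantization. The following is a minimal NumPy sketch of the two applied in sequence to a single weight matrix; it is purely illustrative and does not reproduce the paper's actual pipeline or frameworks.

```python
import numpy as np

def l1_prune(weights, sparsity):
    """Unstructured L1 pruning: zero out the smallest-magnitude weights."""
    w = weights.copy()
    k = int(sparsity * w.size)
    if k > 0:
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

def quantize_int8(weights):
    """Symmetric post-training quantization of float weights to int8."""
    max_abs = np.abs(weights).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and a scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

w_pruned = l1_prune(w, sparsity=0.5)   # first prune half the weights
q, scale = quantize_int8(w_pruned)     # then quantize the survivors to 8 bits
w_restored = dequantize(q, scale)

print("sparsity:", float(np.mean(w_pruned == 0)))
print("max abs quantization error:", float(np.max(np.abs(w_pruned - w_restored))))
```

In practice, pruning is applied before quantization, as here, so that the quantizer's scale is fit to the surviving weights; the paper's full text describes which techniques and orderings were actually benchmarked.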

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.