Compact CNN models for on-device ocular-based user recognition in mobile devices
Abstract
A number of studies have demonstrated the efficacy of deep convolutional neural network (CNN) models for ocular-based user recognition on mobile devices. However, these high-performing networks have enormous space and computational complexity due to the millions of parameters and computations involved, which makes deploying deep learning models to resource-constrained mobile devices challenging. To date, only a handful of studies, based on knowledge distillation and patch-based models, have proposed compact CNN models for ocular recognition in the mobile environment. To further advance the state of the art, this study evaluates, for the first time, five neural network pruning methods and compares them with knowledge distillation for on-device CNN inference and mobile user verification using ocular images. Subject-independent analysis on the VISOB and UPFR-Periocular datasets suggests the efficacy of layerwise magnitude-based pruning at a compression rate of 8 for mobile ocular-based authentication using ResNet-50 as the base model. Further, comparison with knowledge distillation suggests the efficacy of knowledge distillation over the pruning methods in terms of verification accuracy and real-time inference, measured as deep feature extraction time on five mobile devices, namely, iPhone 6, iPhone X, iPhone XR, iPad Air 2, and iPad (7th generation).
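As an illustration of the pruning criterion named above, the sketch below shows layerwise magnitude-based pruning at a compression rate of 8: each layer independently keeps only its largest-magnitude weights, roughly 1/8 of them, and zeroes the rest. This is a minimal NumPy sketch under stated assumptions; the layer names and shapes are hypothetical, and the study's exact pruning schedule and fine-tuning steps may differ.

```python
import numpy as np

def layerwise_magnitude_prune(weights, compression_rate=8):
    """Zero the smallest-magnitude weights in each layer independently,
    keeping a 1/compression_rate fraction of weights per layer.
    (Sketch only; a real pipeline would fine-tune after pruning.)"""
    pruned = {}
    for name, w in weights.items():
        keep = max(1, int(w.size / compression_rate))
        # Threshold = magnitude of the keep-th largest weight in this layer.
        thresh = np.sort(np.abs(w), axis=None)[-keep]
        mask = np.abs(w) >= thresh
        pruned[name] = w * mask
    return pruned

# Toy example with two hypothetical layers (not real ResNet-50 shapes).
rng = np.random.default_rng(0)
layers = {"conv1": rng.normal(size=(16, 8)), "fc": rng.normal(size=(32, 10))}
sparse = layerwise_magnitude_prune(layers, compression_rate=8)
for name, w in sparse.items():
    print(name, "nonzero fraction:", (w != 0).mean())
```

Because the threshold is computed per layer rather than globally, every layer retains the same fraction of its weights, which is what distinguishes layerwise from global magnitude pruning.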