With the unprecedented mobile technology revolution, mobile devices have evolved from being primarily a means of communication into all-in-one platforms. Consequently, an increasing number of individuals access online services such as e-commerce and banking via smartphones instead of traditional desktop computers. However, smartphones are misplaced, lost, or stolen more often than other computing devices, which demands effective user authentication mechanisms for device unlocking and secure transactions. Ocular biometrics has attracted significant attention from academia and industry because of its accuracy, security, and ease of use on mobile devices. Several studies have demonstrated the efficacy of deep learning models for ocular-based user authentication on smartphones. However, these high-performing models incur substantial storage and computational costs owing to the millions of parameters and operations involved, which makes their deployment on resource-constrained smartphones challenging. To this end, a handful of studies have proposed compact ocular-based deep learning models to facilitate on-device deployment. In this paper, we conduct a thorough analysis of existing neural network compression techniques applied both standalone and in combination for ocular-based user authentication. Extensive experimental validation is performed on two recent large-scale ocular biometric datasets collected using smartphones, namely, the UFPR and VISOB 2.0 datasets. This study benchmarks the results of advanced compression techniques to support further research and development of lightweight models for ocular-based user authentication on smartphones.
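The abstract does not specify which compression techniques are combined or how; as an illustration only, the sketch below shows two common techniques of the kind such studies benchmark (unstructured magnitude pruning followed by post-training dynamic quantization) applied to a hypothetical stand-in for an ocular-authentication backbone, using standard PyTorch utilities. The model architecture, sparsity level, and input size are assumptions for the example, not the authors' configuration.

```python
# Illustrative sketch (not the paper's code): pruning + quantization
# applied in combination to a small placeholder CNN.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical compact CNN standing in for an ocular-authentication backbone.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 128), nn.ReLU(),
    nn.Linear(128, 2),  # genuine vs. impostor logits
)

# 1) Standalone technique: unstructured L1 magnitude pruning, zeroing
#    50% of the weights in every conv/linear layer (amount is arbitrary here).
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# 2) Combined with a second technique: post-training dynamic quantization
#    of the linear layers to 8-bit integers.
compressed = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The compressed model runs the same forward pass with a smaller footprint.
dummy_ocular_crop = torch.randn(1, 3, 64, 64)
print(compressed(dummy_ocular_crop).shape)  # torch.Size([1, 2])
```

In practice, a benchmark of this kind would compare each technique in isolation and in combination on the deployed backbone, measuring the resulting model size, latency, and authentication accuracy on-device.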
CITATION STYLE
Almadan, A., & Rattani, A. (2023). Benchmarking Neural Network Compression Techniques for Ocular-Based User Authentication on Smartphones. IEEE Access, 11, 36550–36565. https://doi.org/10.1109/ACCESS.2023.3265357