Protection of gait data set for preserving its privacy in deep learning pipeline


Abstract

Human gait is a biometric used in security systems because it is unique to each individual and enables recognition from a distance without any intervention. Developing such a system requires a comprehensive data set specific to the application. If this data set falls into the hands of rogue elements, they can easily access the secured system built on it; thus, protecting the gait data set becomes essential. Systems using deep learning are known to be prone to hacking, so maintaining the privacy of gait data sets in the deep learning pipeline is made more difficult by adversarial attacks or unauthorised access to the data set. One popular technique for preventing access to the data set is anonymisation. A reversible gait anonymisation pipeline is proposed that modifies gait geometry and morphs the images, that is, applies texture modifications. Such modified data prevent hackers from using the data set for adversarial attacks. Nine layers are proposed to effect geometrical modifications, and a fixed gait texture template is used for morphing. Together these modify the gait data set so that a genuine person cannot be identified while the naturalness of the gait is maintained. The proposed method is evaluated using the similarity index as well as the recognition rate. The impact of various geometrical and texture modifications on silhouettes has been investigated to identify suitable modifications; crowdsourcing and machine learning experiments were performed on the silhouettes for this purpose. The results of both types of experiments show that texture modification has a stronger impact on the level of privacy protection than geometric shape modification. In these experiments, the similarity index achieved is above 99%. These findings open new research directions regarding adversarial attacks and privacy protection for gait recognition data sets.
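The abstract describes morphing silhouettes with a fixed texture template and scoring the result with a similarity index. The paper's exact nine-layer pipeline and metric are not given here, so the following is only a minimal sketch under stated assumptions: texture modification is approximated as alpha blending of a silhouette with a template, and the similarity index as a simple cosine similarity between images (the paper may use a different measure, such as SSIM).

```python
import numpy as np

def morph_silhouette(silhouette: np.ndarray, template: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """Texture modification sketch: alpha-blend a gait silhouette with a
    fixed texture template of the same shape (hypothetical simplification)."""
    return alpha * silhouette.astype(float) + (1.0 - alpha) * template.astype(float)

def similarity_index(a: np.ndarray, b: np.ndarray) -> float:
    """Stand-in similarity metric: cosine similarity between flattened
    images, in [0, 1] for non-negative pixel values."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 1.0

# Toy example: a small binary silhouette blended with a grey template.
sil = np.zeros((4, 4))
sil[1:3, 1:3] = 255.0
template = np.full((4, 4), 128.0)
anonymised = morph_silhouette(sil, template, alpha=0.6)
```

A higher blending weight on the template pushes the anonymised image further from the original silhouette, which is the trade-off the paper evaluates between privacy protection and naturalness.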

Citation (APA)

Parashar, A., & Shekhawat, R. S. (2022). Protection of gait data set for preserving its privacy in deep learning pipeline. IET Biometrics, 11(6), 557–569. https://doi.org/10.1049/bme2.12093
