A Genetic Attack Against Machine Learning Classifiers to Steal Biometric Actigraphy Profiles from Health Related Sensor Data


Abstract

In this work, we propose a genetic-algorithm-based attack against machine learning classifiers with the aim of ‘stealing’ users’ biometric actigraphy profiles from health-related sensor data. The target classification model uses daily actigraphy patterns for user identification. The biometric profiles are modeled as what we call impersonator examples, which are generated solely from the prediction confidence scores obtained by repeatedly querying the target classifier. We conducted experiments in a black-box setting on a public dataset containing actigraphy profiles from 55 individuals. The data consist of daily motion patterns recorded with an actigraphy device; these patterns can be used as biometric profiles to identify each individual. Our attack generated examples capable of impersonating a target user with a success rate of 94.5%. Furthermore, we found that the impersonator examples transfer well to other classifiers trained on the same training set. We also show that the generated biometric profiles closely resemble the ground-truth profiles, which can lead to sensitive data exposure, such as revealing the times of day an individual wakes up and goes to bed.
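The attack the abstract describes is, at its core, a genetic loop driven only by the classifier's confidence output. Below is a minimal illustrative sketch in Python, not the authors' implementation: it assumes a hypothetical black-box function query_confidence that returns the target classifier's confidence that a candidate profile belongs to the target user, and it represents a daily actigraphy profile as a 24-value activity vector (one value per hour). Population size, mutation rate, and the other hyperparameters are likewise arbitrary choices for illustration.

```python
# Minimal sketch of a confidence-guided genetic attack (illustrative only).
# Assumptions: profiles are 24-dim vectors in [0, 1]; `query_confidence`
# is a hypothetical black-box returning the target classifier's confidence
# that a given profile belongs to the target user.
import numpy as np

N_GENES = 24        # assumed: one activity level per hour of the day
POP_SIZE = 50
GENERATIONS = 200
MUT_RATE = 0.1      # per-gene mutation probability
N_ELITE = 5         # elites copied unchanged to the next generation


def evolve_impersonator(query_confidence, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((POP_SIZE, N_GENES))              # random initial profiles
    for _ in range(GENERATIONS):
        # Fitness = black-box confidence score; one query per candidate.
        fitness = np.array([query_confidence(p) for p in pop])
        order = np.argsort(fitness)[::-1]
        pop, fitness = pop[order], fitness[order]      # sort best-first
        elites = pop[:N_ELITE]
        # Fitness-proportional selection of parents.
        probs = (fitness + 1e-9) / (fitness + 1e-9).sum()
        children = []
        while len(children) < POP_SIZE - N_ELITE:
            i, j = rng.choice(POP_SIZE, size=2, p=probs)
            cut = int(rng.integers(1, N_GENES))        # one-point crossover
            child = np.concatenate([pop[i][:cut], pop[j][cut:]])
            mask = rng.random(N_GENES) < MUT_RATE      # random gene resets
            child[mask] = rng.random(int(mask.sum()))
            children.append(child)
        pop = np.vstack([elites, np.array(children)])
    return pop[0]                                      # best profile found


# Toy usage: a stand-in "classifier" whose confidence decays with distance
# to a hidden true profile (purely for demonstration).
true_profile = np.linspace(0.0, 1.0, N_GENES)
toy_confidence = lambda p: float(np.exp(-np.linalg.norm(p - true_profile)))
impersonator = evolve_impersonator(toy_confidence)
```

Because every fitness evaluation costs one query to the target model, elitism and a modest population size keep the query budget bounded; the paper's actual operators and query budget may differ from this sketch.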

Citation (APA)

Garcia-Ceja, E., Morin, B., Aguilar-Rivera, A., & Riegler, M. A. (2020). A Genetic Attack Against Machine Learning Classifiers to Steal Biometric Actigraphy Profiles from Health Related Sensor Data. Journal of Medical Systems, 44(10). https://doi.org/10.1007/s10916-020-01646-y
