MMGatorAuth: A Novel Multimodal Dataset for Authentication Interactions in Gesture and Voice

Abstract

The future of smart environments is likely to involve both passive and active interactions on the part of users. Depending on what sensors are available in the space, users may interact through multiple modalities such as hand gestures or voice commands. However, there is a shortage of robust yet controlled multimodal interaction datasets for smart environment applications. One application domain of interest, given the current state of the art, is authentication for sensitive or private tasks such as banking and email. We present a novel, large multimodal dataset of authentication interactions in both gesture and voice, collected from 106 volunteers who each performed 10 repetitions of every hand gesture and spoken voice command in a set chosen from prior literature, yielding 10,600 gesture samples and 13,780 voice samples. We describe the data collection method, the raw data and commonly extracted features, and a case study illustrating how this dataset can be useful to researchers. Our goal is to provide a benchmark dataset for testing future multimodal authentication solutions, enabling comparison across approaches.
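
As a quick sanity check on the reported totals, the per-modality command counts can be inferred by simple arithmetic: 106 participants times 10 repetitions gives 1,060 samples per command, which implies 10 gesture commands and 13 voice commands. The short Python sketch below works this out; the variable names are illustrative, and the inferred command counts are assumptions derived from the totals rather than figures stated in this abstract.

    # Sanity check on the dataset totals reported in the abstract.
    # The command counts derived here are inferred, not stated in the abstract.
    participants = 106
    repetitions_per_command = 10
    samples_per_command = participants * repetitions_per_command  # 1,060

    gesture_samples = 10_600  # reported total
    voice_samples = 13_780    # reported total

    gesture_commands = gesture_samples // samples_per_command  # -> 10
    voice_commands = voice_samples // samples_per_command      # -> 13

    print(gesture_commands, voice_commands)  # 10 13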

Citation (APA)

Morrison-Smith, S., Aloba, A., Lu, H., Benda, B., Esmaeili, S., Flores, G., … Anthony, L. (2020). MMGatorAuth: A Novel Multimodal Dataset for Authentication Interactions in Gesture and Voice. In ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 370–377). Association for Computing Machinery, Inc. https://doi.org/10.1145/3382507.3418881
