Abstract
This paper introduces an adaptable human-computer interaction method for controlling multi-rotor aerial vehicles in unsupervised, multi-subject environments. A region-based convolutional neural network (R-CNN) first detects the subjects in a frame and their faces' regions of interest (RoIs), which are then fed to a facial recognition module to identify the main subject within the frame. The R-CNN model supplies the right-hand RoI of the main subject to a convolutional neural network (CNN) that classifies the right-hand gesture. A motion processing unit (MPU) and four flex sensors are embedded in a smart glove worn on the left hand, producing discrete and continuous signals. These signals, generated from the bending of the left-hand fingers and the left hand's roll angle, are fed to a support vector machine (SVM) to classify the left-hand gesture. Three validation layers are implemented: human-based validation, classification validation, and system validation. Comprehensive experimental results validate the proposed method.
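The left-hand pipeline described above maps four flex-sensor readings plus the hand's roll angle to a gesture class via an SVM. The minimal sketch below illustrates that feature-vector-to-gesture mapping; for a dependency-free example, a nearest-centroid classifier stands in for the paper's SVM, and the gesture names, sensor ranges, and centroid values are hypothetical, not taken from the paper.

```python
# Sketch of the left-hand gesture classification stage.
# ASSUMPTIONS: gesture labels, sensor scaling (flex in [0, 1], roll in
# degrees), and centroid values are illustrative only; the paper uses an
# SVM rather than the nearest-centroid rule shown here.
import math

# Hypothetical gesture centroids: four flex readings (0 = finger
# straight, 1 = fully bent) followed by the roll angle in degrees.
CENTROIDS = {
    "hover":  [0.0, 0.0, 0.0, 0.0,  0.0],
    "ascend": [1.0, 1.0, 1.0, 1.0,  0.0],
    "bank":   [0.0, 0.0, 0.0, 0.0, 45.0],
}

def classify_left_hand(flex, roll):
    """Return the gesture whose centroid is nearest to [flex..., roll]."""
    x = list(flex) + [roll]
    return min(CENTROIDS, key=lambda g: math.dist(x, CENTROIDS[g]))

# A fist-like pose (all fingers bent, hand nearly level):
print(classify_left_hand([0.9, 1.0, 0.8, 0.95], 2.0))  # -> ascend
```

In the actual system, the discrete (flex) and continuous (roll) signals would first be calibrated per user; a trained SVM with a nonlinear kernel replaces the distance rule shown here.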
Citation
Haratiannejadi, K., & Selmic, R. R. (2020). Smart Glove and Hand Gesture-Based Control Interface for Multi-Rotor Aerial Vehicles in a Multi-Subject Environment. IEEE Access, 8, 227667–227677. https://doi.org/10.1109/ACCESS.2020.3045858