Gesture-based human-robot interaction has been an active area of research in recent years. A central goal for researchers has been to build a gesture recognition system that is insensitive to lighting conditions and background clutter. This research proposes a Kinect-based 3D gesture recognition and adaptation framework for human-robot interaction. The framework comprises four modules: pointing gesture recognition, 3D dynamic gesture recognition, gesture adaptation, and robot navigation. The dynamic gesture recognition module employs three distinct classifiers: a hidden Markov model (HMM), a multiclass support vector machine (SVM), and a convolutional neural network (CNN). The adaptation module can adapt to new, unrecognized gestures using either semi-supervised self-adaptation or user-consent-based adaptation. A graphical user interface (GUI) is built for training and testing the proposed system on the fly, and a simple simulator together with two robot-navigation algorithms is developed to test robot navigation based on the recognized gestures. The framework is trained and tested through five-fold cross-validation on 3,600 gesture instances of ten predefined gestures performed by 24 persons across three age categories (Young, Middle-aged, and Adult, with 1,200 gestures each). In dynamic gesture recognition, the proposed system achieves maximum accuracies of 95.67% with the HMM and 92.59% with the SVM for the Middle-aged category, and 89.58% with the CNN for the Young category. Across all three age categories, the system achieves average accuracies of 94.61%, 91.95%, and 88.97% in recognizing dynamic gestures with the HMM, SVM, and CNN, respectively. Moreover, the system recognizes pointing gestures in real time.
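The abstract describes a five-fold cross-validation protocol over 3,600 gesture instances with HMM, multiclass SVM, and CNN classifiers, but gives no implementation details. The sketch below shows how such an evaluation could look for the multiclass SVM baseline alone, assuming each gesture instance is a fixed-length sequence of 3D Kinect skeleton joints flattened into a feature vector; the loader, shapes, and kernel settings (load_gesture_dataset, N_FRAMES, N_JOINTS, the RBF kernel) are hypothetical placeholders, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical dataset shapes matching the abstract's numbers:
# 3,600 instances of 10 predefined gestures. We assume each instance
# is a fixed-length sequence of 3D skeleton joints, flattened.
N_SAMPLES, N_FRAMES, N_JOINTS = 3600, 30, 20

def load_gesture_dataset():
    """Placeholder loader: returns (X, y) with X of shape
    (N_SAMPLES, N_FRAMES * N_JOINTS * 3) and y holding labels 0-9.
    A real loader would read recorded Kinect skeleton streams."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(N_SAMPLES, N_FRAMES * N_JOINTS * 3))
    y = rng.integers(0, 10, size=N_SAMPLES)
    return X, y

X, y = load_gesture_dataset()

# Five-fold cross-validation, as described in the abstract.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_accuracies = []
for train_idx, test_idx in skf.split(X, y):
    # An RBF-kernel SVC is one common choice for a multiclass SVM
    # (scikit-learn handles the one-vs-one decomposition internally);
    # the paper's exact kernel and features are not specified here.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X[train_idx], y[train_idx])
    fold_accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean accuracy over 5 folds: {np.mean(fold_accuracies):.4f}")
```

The same loop structure would apply to the HMM and CNN classifiers, swapping in per-class HMM likelihood scoring or a sequence-input network in place of the SVC pipeline.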
Citation:
Mahmud, J. A., Das, B. C., Shin, J., Hasib, K. M., Sadik, R., & Mridha, M. F. (2022). 3D Gesture Recognition and Adaptation for Human-Robot Interaction. IEEE Access, 10, 116485–116513. https://doi.org/10.1109/ACCESS.2022.3218679