Most existing methods focus on extracting shape-based, rotation-based, and motion-based features, usually neglecting the relationship between the hands and body parts, which provides significant information for distinguishing similar sign words under the backhand approach. This paper therefore proposes four feature-based models. The first, and main, model consists of the spatial-temporal body-parts-and-hand relationship patterns; the second consists of the spatial-temporal finger joint angle patterns; the third consists of the spatial-temporal 3D hand motion trajectory patterns; and the fourth consists of the spatial-temporal double-hand relationship patterns. A two-layer bidirectional long short-term memory (BiLSTM) network is then used as the classifier for the time-series feature data. The performance of the method was evaluated and compared with existing works using 26 ASL letters, achieving an accuracy of 97.34% and an F1-score of 97.36%. The method was further evaluated using 40 double-hand ASL words, achieving an accuracy of 98.52% and an F1-score of 98.54%. These results demonstrate that the proposed method outperforms the existing works under consideration. However, in an analysis of 72 new ASL words, including single- and double-hand words from 10 participants, the accuracy and F1-score were slightly lower, at approximately 96.99% and 97.00%, respectively.
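To make the second model's feature concrete, the sketch below shows one common way to compute a finger joint angle from 3D hand keypoints, as the angle between the two bone vectors meeting at a joint. The function name, keypoint layout, and example coordinates are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def joint_angle(p_prev: np.ndarray, p_joint: np.ndarray, p_next: np.ndarray) -> float:
    """Angle (radians) at p_joint between the bones p_joint->p_prev and p_joint->p_next.

    Inputs are 3D keypoint coordinates, e.g. from a hand-tracking sensor.
    """
    u = p_prev - p_joint
    v = p_next - p_joint
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical example: a nearly straight index finger at the PIP joint.
mcp = np.array([0.0, 0.0, 0.0])   # metacarpophalangeal joint
pip = np.array([0.0, 3.0, 0.0])   # proximal interphalangeal joint
dip = np.array([0.5, 5.8, 0.0])   # distal interphalangeal joint
print(np.degrees(joint_angle(mcp, pip, dip)))  # ~170 degrees
```

Collecting such angles for every joint over all frames of a sign yields the kind of spatial-temporal angle pattern the abstract describes.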
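A two-layer BiLSTM classifier of the kind the abstract describes can be sketched as follows in PyTorch. The feature dimension, hidden size, and use of the final time step for classification are placeholder assumptions; the abstract does not specify the exact architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class TwoLayerBiLSTMClassifier(nn.Module):
    """Minimal sketch of a two-layer bidirectional LSTM sign classifier."""

    def __init__(self, feature_dim: int = 64, hidden_dim: int = 128, num_classes: int = 26):
        super().__init__()
        self.bilstm = nn.LSTM(
            input_size=feature_dim,
            hidden_size=hidden_dim,
            num_layers=2,          # two stacked LSTM layers
            bidirectional=True,    # forward and backward passes over the sequence
            batch_first=True,
        )
        # 2 * hidden_dim because forward and backward states are concatenated.
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feature_dim) sequence of per-frame feature vectors.
        out, _ = self.bilstm(x)
        # Classify from the representation at the final time step.
        return self.fc(out[:, -1, :])

# Example: a batch of 8 sequences, 30 frames each, 64 features per frame,
# classified into the 26 ASL letters mentioned in the abstract.
model = TwoLayerBiLSTMClassifier()
logits = model(torch.randn(8, 30, 64))
print(logits.shape)  # torch.Size([8, 26])
```

The bidirectional pass lets each frame's representation draw on both earlier and later frames, which suits gestures whose identity depends on the whole motion rather than its prefix.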
CITATION STYLE
Chophuk, P., Chamnongthai, K., & Chinnasarn, K. (2022). Backhand-Approach-Based American Sign Language Words Recognition using Spatial-Temporal Body Parts and Hand Relationship Pattern. Sensors, 22(12), 4554. https://doi.org/10.3390/s22124554