AI technologies have the potential to help deaf individuals communicate. To address the complexity of sign segmentation and the difficulty of capturing fine-grained hand gestures, the authors present DeepSLR, a real-time end-to-end sign language recognition (SLR) system built on a wearable surface electromyography (sEMG) biosensing device that translates sign language into text or speech, allowing hearing people to better understand sign language and hand motions. Two armbands, each containing a biosensor with multi-channel sEMG sensors, are mounted on the forearms to accurately capture arm and finger movements. This design also addresses a significant limitation of the earlier SignSpeaker system, which could not recognise two-handed signs using a smartphone and smartwatch. DeepSLR was implemented on Android and iOS smartphones and its effectiveness was evaluated through comprehensive testing: the average word error rate for continuous sentence recognition is 9.6%, and detecting and recognising a sentence of six sign words takes less than 0.9 s, demonstrating DeepSLR's recognition performance.
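The 9.6% figure refers to word error rate (WER), the standard metric for continuous recognition: the word-level edit distance (substitutions, insertions, deletions) between the recognised sentence and the reference, divided by the reference length. As a minimal illustrative sketch (not code from the paper), WER can be computed with a dynamic-programming edit distance:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, one deleted word in a six-word reference sentence yields a WER of 1/6, roughly 16.7%; the example sentences here are hypothetical, not drawn from the study's dataset.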
CITATION STYLE
Padmanandam, K., Rajesh, M. V., Upadhyaya, A. N., Ramesh Chandra, K., Chandrashekar, B., & Sah, S. (2022). Artificial Intelligence Biosensing System on Hand Gesture Recognition for the Hearing Impaired. International Journal of Operations Research and Information Systems, 13(2). https://doi.org/10.4018/IJORIS.306194