Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors

Abstract

Digitizing handwriting is mostly performed using either image-based methods, such as optical character recognition, or multi-device setups, such as a special stylus paired with a smart pad. The high cost of these approaches motivates a cheaper, standalone smart pen. Therefore, in this paper, a compact, deep-learning-based smart digital pen that recognizes 36 alphanumeric characters was developed. Unlike common methods, which employ only inertial data, handwriting recognition is achieved from hand motion data captured using both inertial and force sensors. The developed prototype smart pen comprises an ordinary ballpoint ink chamber, three force sensors, a six-channel inertial sensor, a microcomputer, and a plastic barrel structure. Handwriting data for the characters were recorded from six volunteers. After the data were trimmed and restructured, they were used to train four neural networks: a vision transformer (ViT), a deep neural network (DNN), a convolutional neural network (CNN), and a long short-term memory (LSTM) network. The ViT network outperformed the others, achieving a validation accuracy of 99.05%. The trained model was further validated in real time, where it showed promising performance. These results will serve as a foundation for extending this investigation to more characters and subjects.
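To make the data layout implied by the abstract concrete, the minimal sketch below shows how the pen's nine sensor channels (three force channels plus a six-axis IMU) might be windowed per character and classified into the 36 alphanumeric classes. This is not the authors' implementation: the window length, the architecture (a small 1D CNN standing in for the paper's CNN branch), and all layer sizes are illustrative assumptions, since the abstract does not specify the preprocessing or the ViT/DNN/CNN/LSTM configurations.

```python
# Minimal sketch (assumptions, not the paper's code) of classifying a
# fixed-length 9-channel sensor window into 36 character classes.
import torch
import torch.nn as nn

NUM_CHANNELS = 9    # 3 force sensors + 6-channel IMU (accel x/y/z, gyro x/y/z)
NUM_CLASSES = 36    # alphanumeric characters: 0-9 and A-Z
WINDOW_LEN = 200    # assumed number of samples per trimmed character stroke

class StrokeCNN(nn.Module):
    """1D CNN over a fixed-length multichannel handwriting window."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> (batch, 64, 1)
        )
        self.classifier = nn.Linear(64, NUM_CLASSES)

    def forward(self, x):              # x: (batch, 9, WINDOW_LEN)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)      # unnormalized scores per class

# Smoke test with random data standing in for trimmed handwriting windows.
model = StrokeCNN()
dummy = torch.randn(4, NUM_CHANNELS, WINDOW_LEN)
print(model(dummy).shape)  # torch.Size([4, 36]): one score per class
```

The same (batch, channels, time) windows could equally be fed to an LSTM over the time axis, or reshaped into patch sequences for a ViT, which is presumably how the four architectures in the paper consume the identical restructured data.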

Citation (APA)

Alemayoh, T. T., Shintani, M., Lee, J. H., & Okamoto, S. (2022). Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors. Sensors, 22(20). https://doi.org/10.3390/s22207840
