This paper considers an input method for the wearable computing environment that uses a wearable video camera. An input system is proposed in which the user writes an alphanumeric character in the air as input to the computer. In the proposed method, the user's hand motion is captured by a wearable video camera, and letter input is performed by analyzing the monochrome gray-level images on the computer. When a letter is written in the air, it is difficult to identify the start and end of the letter input by the user, or the start and end points of the segments composing the letter. Furthermore, in a wearable computing environment it is desirable that the hand motion be detected against a background that changes continually, both in daytime and at night. Consequently, this paper proposes the following three procedures: (1) extraction of the user's hand region in the picture frame using visible or infrared illumination; (2) determination of the center of gravity of the user's hand motion in the air, using the intensity difference between picture frames as a clue; (3) continuous DP matching to identify the letter. To address the essential problem of letter recognition in the air, a writing format for alphanumeric characters is also proposed. The proposed system is implemented on a video camera and a laptop PC, and an experiment on writing letters in the air is performed with five subjects over 360 alphanumeric characters. A recognition rate of approximately 75% is obtained. © 2006 Wiley Periodicals, Inc.
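The abstract outlines two steps that lend themselves to a brief illustration: estimating the moving hand's center of gravity from the intensity difference between successive frames, and matching the resulting trajectory against letter templates with continuous DP. The sketch below is a minimal illustration of both ideas, not the authors' implementation; the difference threshold, feature format, and local distance function are assumptions introduced here for clarity.

```python
import numpy as np

def hand_centroid(prev_frame, frame, threshold=25):
    """Estimate the center of gravity of the moving hand from the absolute
    intensity difference between two gray-level frames.
    (The threshold value is illustrative, not taken from the paper.)"""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)   # pixels that changed between frames
    if xs.size == 0:
        return None                         # no motion detected in this frame pair
    return xs.mean(), ys.mean()             # center of gravity (x, y)

def continuous_dp(sequence, template):
    """Continuous DP matching of an observed feature sequence against one
    letter template: the match may start at any input frame, and the best
    accumulated distance ending anywhere is returned (simplified recurrence)."""
    n, m = len(sequence), len(template)
    D = np.full((n + 1, m + 1), np.inf)
    D[:, 0] = 0.0                           # allow the match to begin at any input frame
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(sequence[i - 1]) -
                               np.asarray(template[j - 1]))
            D[i, j] = d + min(D[i - 1, j], D[i - 1, j - 1], D[i, j - 1])
    return D[:, m].min()                    # best score over all possible end frames
```

In this reading, a letter would be reported as the template with the smallest continuous-DP score over the stream of centroid positions; the actual system additionally relies on the illumination-based hand extraction and the proposed writing format described in the paper.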
CITATION STYLE
Sonoda, T., & Muraoka, Y. (2006). A letter input system based on handwriting gestures. Electronics and Communications in Japan, Part III: Fundamental Electronic Science (English Translation of Denshi Tsushin Gakkai Ronbunshi), 89(5), 53–64. https://doi.org/10.1002/ecjc.20239