This paper describes the development of a natural interface to a virtual environment. The interface is based on a natural pointing gesture and replaces the pointing devices normally used to interact with virtual environments. The pointing gesture is estimated in 3D using monocular computer vision together with kinematic knowledge of the arm during pointing: the vision component extracts the 2D position of the user's hand, which is then mapped into 3D. Off-line tests show promising results, with an average error of 8 cm when pointing at a screen 2 m away.
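The abstract's pipeline (detect the 2D hand position, lift it to 3D via an arm-kinematics constraint, extend the pointing line to the screen) can be sketched as follows. This is a hedged illustration, not the authors' exact method: the camera intrinsics, shoulder position, arm length, and the sphere-intersection constraint are all assumptions chosen to make the geometry concrete.

```python
import numpy as np

# Assumed pinhole camera intrinsics (focal length and principal point
# are illustrative values, not from the paper).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject_ray(u, v, K):
    """Unit direction of the ray through pixel (u, v), camera coordinates."""
    d = np.linalg.solve(K, np.array([u, v, 1.0]))
    return d / np.linalg.norm(d)

def hand_3d(u, v, shoulder, arm_len, K):
    """Map the 2D hand detection into 3D by intersecting the pixel ray
    with a sphere of radius arm_len centred on the shoulder -- a simple
    stand-in for the paper's kinematic arm constraint."""
    d = backproject_ray(u, v, K)
    # Ray p = t*d; solve |t*d - shoulder|^2 = arm_len^2 for t.
    b = -2.0 * d @ shoulder
    c = shoulder @ shoulder - arm_len ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # pixel ray misses the reachable sphere
    t = (-b - np.sqrt(disc)) / 2.0  # nearer intersection
    return t * d

def point_on_screen(shoulder, hand, screen_z):
    """Extend the shoulder->hand pointing line to the plane z = screen_z."""
    direction = hand - shoulder
    s = (screen_z - shoulder[2]) / direction[2]
    return shoulder + s * direction
```

With an assumed shoulder position and arm length, `hand_3d` recovers the hand's depth from a single view, and `point_on_screen` yields the indicated screen location; the 8 cm average error reported above would correspond to the distance between this estimate and the true target.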
CITATION
Moeslund, T. B., Störring, M., & Granum, E. (2002). A natural interface to a virtual environment through computer vision-estimated pointing gestures. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2298, pp. 59–63). Springer-Verlag. https://doi.org/10.1007/3-540-47873-6_6