In this paper, our purpose is to extend 2D multi-touch interaction into 3D space and to present a universal set of multi-touch gestures for 3D space. We describe a system that allows people to use their familiar multi-touch gestures in 3D space without touching a surface. We call these mid-air gestures "3D multi-touch-like gestures". Because there is no object or surface for the user to touch in 3D space, we use a depth camera to detect the fingers' state and estimate whether a finger is in the "click down" or "click up" state, which indicates the user's intention to interact with the system. We use machine learning to recognize hand shapes. Since we do not need to run recognition continuously, we only recognize hand shapes between the "click down" and "click up" events.
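The event-gated recognition described above can be illustrated with a minimal sketch. This is not the paper's implementation; the event names and the `recognize_hand_shape` stand-in for the learned classifier are assumptions introduced for illustration:

```python
from enum import Enum

class FingerEvent(Enum):
    # Finger states the abstract says are estimated from the depth camera.
    CLICK_DOWN = "click_down"
    CLICK_UP = "click_up"
    MOVE = "move"

def recognize_hand_shape(frame):
    # Hypothetical stand-in for the paper's machine-learned hand-shape classifier.
    return f"shape({frame})"

def process_stream(events):
    """Run hand-shape recognition only between "click down" and "click up".

    `events` is a sequence of (FingerEvent, frame) pairs; frames outside a
    click-down/click-up interval are skipped, so the (expensive) recognizer
    is not invoked continuously.
    """
    active = False
    results = []
    for event, frame in events:
        if event is FingerEvent.CLICK_DOWN:
            active = True
        if active:
            results.append(recognize_hand_shape(frame))
        if event is FingerEvent.CLICK_UP:
            active = False
    return results
```

For example, a stream of five frames in which only frames 1 through 3 fall between "click down" and "click up" triggers recognition on just those three frames.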
Lu, C., Zhou, L., & Tanaka, J. (2018). Realizing multi-touch-like gestures in 3D space. In Lecture Notes in Computer Science (Vol. 10904 LNCS, pp. 227–239). Springer. https://doi.org/10.1007/978-3-319-92043-6_20