LUI: A multimodal, intelligent interface for large displays


Abstract

On large screen displays, conventional keyboard and mouse input is difficult to use because small mouse movements do not scale well to the size of the display and of individual on-screen elements. We propose LUI, or Large User Interface, which expands the dynamic range of interactions possible across the surface of such a display. Our model leverages real-time continuous feedback from freehand gestures and voice to control extensible applications such as photos, videos, and 3D models. Because it uses a single stereo camera and a voice assistant, LUI requires neither exhaustive calibration nor a multitude of sensors, and it can be easily installed and deployed on any large screen surface. In a user study, participants found LUI efficient and learnable with minimal instruction, and preferred it to more conventional interfaces. This multimodal interface can also be deployed in augmented or virtual reality spaces and autonomous vehicle displays.
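The abstract's core observation is that pointer input must be rescaled for very large displays, and that freehand tracking needs smoothing to give stable continuous feedback. The paper does not publish its mapping, so the sketch below is purely illustrative: it assumes a hand tracker that emits normalized (0..1) coordinates, an arbitrary display resolution, and a simple exponential moving average for jitter suppression — none of these names or parameters come from LUI itself.

```python
# Illustrative sketch (not the LUI implementation): map normalized hand-tracker
# output to pixel coordinates on a large display, with exponential smoothing
# to stabilize jittery freehand input. Resolution and alpha are assumptions.

DISPLAY_W, DISPLAY_H = 7680, 2160  # hypothetical large-display resolution


def to_display(nx: float, ny: float) -> tuple[int, int]:
    """Map a normalized (0..1) hand position to display pixel coordinates."""
    nx = min(max(nx, 0.0), 1.0)  # clamp out-of-frame readings
    ny = min(max(ny, 0.0), 1.0)
    return round(nx * (DISPLAY_W - 1)), round(ny * (DISPLAY_H - 1))


class Smoother:
    """Exponential moving average over 2D positions for continuous feedback."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha      # higher alpha = more responsive, more jitter
        self.state = None       # no estimate until the first sample arrives

    def update(self, x: float, y: float) -> tuple[float, float]:
        if self.state is None:
            self.state = (x, y)
        else:
            sx, sy = self.state
            self.state = (sx + self.alpha * (x - sx),
                          sy + self.alpha * (y - sy))
        return self.state
```

A per-hand `Smoother` instance would be fed raw tracker samples each frame, and its output passed through `to_display` to position a cursor or manipulate on-screen content.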

Citation (APA)

Parthiban, V., & Lee, A. J. (2019). LUI: A multimodal, intelligent interface for large displays. In Proceedings - VRCAI 2019: 17th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry. Association for Computing Machinery, Inc. https://doi.org/10.1145/3359997.3365743
