Dynamically reconfigurable vision-based user interfaces


Abstract

A significant problem with vision-based user interfaces is that they are typically developed and tuned for one specific configuration: one set of interactions at one location in the world and in image space. This paper describes the methods and architecture for a vision system that supports dynamic reconfiguration of interfaces, changing the form and location of the interaction on the fly. We accomplish this by decoupling the functional definition of the interface from the specification of its location in the physical environment and in the camera image. Applications create a user interface by requesting a configuration of predefined widgets. The vision system assembles a tree of image processing components to fulfill the request, reusing shared computational resources when necessary. The interface can be moved to any planar surface in the camera's field of view. We illustrate the power of such a reconfigurable vision-based interaction system in the context of a prototype application involving projected interactive displays.
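
The decoupling the abstract describes lends itself to a brief illustration. The sketch below is not the paper's implementation; it is a minimal Python example, assuming hypothetical Widget and Interface classes, in which the "location on a planar surface" is reduced to a single 3x3 homography so the same functional interface definition can be retargeted on the fly by swapping that matrix.

```python
# Hypothetical sketch (not the authors' code): widgets are defined in a
# normalized "interface plane"; a 3x3 planar homography maps them into image
# coordinates, so the whole interface can be retargeted to a new surface at
# run time without touching the widget definitions.
from dataclasses import dataclass, field
from typing import Callable, List
import numpy as np


@dataclass
class Widget:
    """A touch-button widget defined in interface-plane coordinates (0..1)."""
    name: str
    x: float
    y: float
    w: float
    h: float
    on_trigger: Callable[[], None] = lambda: None


@dataclass
class Interface:
    widgets: List[Widget] = field(default_factory=list)
    # Homography from the interface plane to the image plane; identity by default.
    H: np.ndarray = field(default_factory=lambda: np.eye(3))

    def move_to(self, H_new: np.ndarray) -> None:
        """Reconfigure on the fly: retarget the interface to a new planar surface."""
        self.H = H_new

    def image_region(self, wdg: Widget) -> np.ndarray:
        """Project a widget's corners into image coordinates."""
        corners = np.array([
            [wdg.x,         wdg.y,         1.0],
            [wdg.x + wdg.w, wdg.y,         1.0],
            [wdg.x + wdg.w, wdg.y + wdg.h, 1.0],
            [wdg.x,         wdg.y + wdg.h, 1.0],
        ])
        projected = (self.H @ corners.T).T
        return projected[:, :2] / projected[:, 2:3]

    def dispatch(self, u: float, v: float) -> None:
        """Fire any widget whose projected bounding box contains the touch point (u, v)."""
        for wdg in self.widgets:
            region = self.image_region(wdg)
            xs, ys = region[:, 0], region[:, 1]
            if xs.min() <= u <= xs.max() and ys.min() <= v <= ys.max():
                wdg.on_trigger()


if __name__ == "__main__":
    ui = Interface(widgets=[Widget("next", 0.1, 0.1, 0.2, 0.1,
                                   on_trigger=lambda: print("next slide"))])
    # Retarget the same interface to a 320x240 region of the camera image.
    ui.move_to(np.diag([320.0, 240.0, 1.0]))
    ui.dispatch(50.0, 30.0)   # falls inside the projected "next" button
```

In the paper's terms, the Interface corresponds to the application's functional request and move_to stands in for the separate specification of where the interaction lives in the camera image; the tree of shared image-processing components that actually detects the touch events is omitted here.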

Citation (APA)

Kjeldsen, R., Levas, A., & Pinhanez, C. (2003). Dynamically reconfigurable vision-based user interfaces. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2626, pp. 323–332). Springer Verlag. https://doi.org/10.1007/3-540-36592-3_31
