A significant problem with vision-based user interfaces is that they are typically developed and tuned for one specific configuration: one set of interactions at one fixed location in the world and in image space. This paper describes the methods and architecture of a vision system that supports dynamic reconfiguration of interfaces, changing the form and location of the interaction on the fly. We accomplish this by decoupling the functional definition of the interface from the specification of its location in the physical environment and in the camera image. Applications create a user interface by requesting a configuration of predefined widgets. The vision system assembles a tree of image-processing components to fulfill the request, sharing computational resources where necessary. This interface can then be moved to any planar surface in the camera's field of view. We illustrate the power of such a reconfigurable vision-based interaction system in the context of a prototype application involving projected interactive displays. © Springer-Verlag Berlin Heidelberg 2003.
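The core architectural idea in the abstract — separating the functional definition of an interface (which widgets, what they do) from its placement (where it sits on a surface and in the image), with the vision system assembling a shared tree of processing components — can be sketched in a few lines. The class and component names below are hypothetical illustrations, not the paper's actual API; this is a minimal sketch assuming one shared processing component per widget kind.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Widget:
    kind: str   # e.g. "button", "slider" (illustrative widget kinds)
    name: str

@dataclass(frozen=True)
class InterfaceDefinition:
    # Functional definition only: no physical or image coordinates here.
    widgets: tuple

@dataclass
class Placement:
    surface: str    # e.g. "tabletop", "wall" -- a planar surface in view
    origin: tuple   # position of the interface on that surface

class VisionSystem:
    """Assembles a tree of image-processing components for a requested
    widget configuration, reusing (sharing) one component per widget
    kind, in the spirit of the shared resources the abstract mentions."""
    def __init__(self):
        self._shared = {}  # widget kind -> shared processing component

    def _component(self, kind):
        # Create the component for this kind once, then reuse it.
        if kind not in self._shared:
            self._shared[kind] = f"{kind}-tracker"   # placeholder component
        return self._shared[kind]

    def realize(self, definition, placement):
        # Bind the abstract interface definition to a concrete location.
        # Rebinding with a new Placement "moves" the same interface.
        return {
            "surface": placement.surface,
            "origin": placement.origin,
            "pipeline": [(w.name, self._component(w.kind))
                         for w in definition.widgets],
        }

# Same functional interface, realized at two different locations:
ui = InterfaceDefinition(widgets=(Widget("button", "play"),
                                  Widget("button", "stop"),
                                  Widget("slider", "volume")))
vs = VisionSystem()
on_table = vs.realize(ui, Placement("tabletop", (0.2, 0.5)))
on_wall = vs.realize(ui, Placement("wall", (1.0, 1.2)))
```

Note how `on_table` and `on_wall` share the same processing pipeline — only the placement changes, which is exactly the decoupling the paper argues for.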
CITATION STYLE
Kjeldsen, R., Levas, A., & Pinhanez, C. (2003). Dynamically reconfigurable vision-based user interfaces. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2626, pp. 323–332). Springer Verlag. https://doi.org/10.1007/3-540-36592-3_31