An adaptive vision system toward implicit human computer interaction


Abstract

In implicit human computer interaction, computers are required to understand users' actions and intentions in order to provide proactive services. Visual processing must detect and understand human actions and then transform them into implicit input. In this paper, an adaptive vision system is presented to solve visual processing tasks in a dynamic meeting context. Visual modules and dynamic context analysis tasks are organized in a bidirectional scheme. First, human objects are detected and tracked to generate global features. Second, the current meeting scenario is inferred from these global features; in certain scenarios, face- and hand-blob-level visual processing tasks are then performed to extract visual information for the analysis of individual and interactive events, which can in turn serve as implicit input to the computer system. Experiments in our smart meeting room demonstrate the effectiveness of the proposed framework. © Springer-Verlag Berlin Heidelberg 2007.
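The bidirectional scheme in the abstract — coarse global tracking always running, with finer face/hand blob modules enabled only when the inferred scenario warrants them — can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation; all function names, feature keys, and thresholds are hypothetical.

```python
def track_humans(frame):
    """Stage 1 (always on): detect/track human objects and produce
    global features. Here the 'frame' is a dict standing in for real
    detector output; keys are hypothetical."""
    return {"num_people": frame.get("people", 0),
            "motion": frame.get("motion", 0.0)}

def infer_scenario(global_features):
    """Stage 2: infer the current meeting scenario from global
    features. The rules and labels are illustrative only."""
    if global_features["num_people"] == 0:
        return "empty"
    if global_features["motion"] > 0.5:
        return "discussion"
    return "presentation"

def analyze_blobs(frame, scenario):
    """Placeholder for face/hand blob-level analysis of individual
    and interactive events."""
    return [f"{scenario}-event"] if frame.get("hands_visible") else []

def process_frame(frame):
    """Bidirectional control loop: scenario inference (context
    analysis) gates which fine-grained visual modules run."""
    feats = track_humans(frame)
    scenario = infer_scenario(feats)
    events = []
    if scenario in ("discussion", "presentation"):
        # Blob-level processing only runs in scenarios that need it.
        events = analyze_blobs(frame, scenario)
    return scenario, events
```

The key design point mirrored here is that context analysis both consumes the vision modules' output (global features) and controls which modules execute next, rather than all modules running unconditionally on every frame.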

Citation (APA)

Dai, P., Tao, L., Zhang, X., Dong, L., & Xu, G. (2007). An adaptive vision system toward implicit human computer interaction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4555 LNCS, pp. 792–801). Springer Verlag. https://doi.org/10.1007/978-3-540-73281-5_87
