Posing to the camera: Automatic viewpoint selection for human actions


Abstract

In many scenarios a scene is filmed by multiple video cameras located at different viewing positions. The difficulty of watching multiple views simultaneously raises an immediate question: which cameras capture better views of the dynamic scene? When only a single view can be displayed (e.g., in TV broadcasts), a human producer manually selects the best view. In this paper we propose a method for evaluating the quality of a view captured by a single camera, which can be used to automate viewpoint selection. We regard human actions as three-dimensional shapes induced by their silhouettes in the space-time volume. The quality of a view is evaluated by combining three measures that capture the visibility of the action provided by these space-time shapes. We evaluate the proposed approach both qualitatively and quantitatively.
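The core representation described above, stacking per-frame binary silhouettes into a space-time volume and scoring each camera's volume, can be sketched as follows. This is a minimal illustration with a hypothetical quality proxy (mean silhouette area), not the paper's three actual visibility measures; all function names and the toy data are assumptions for illustration.

```python
import numpy as np

def space_time_volume(silhouettes):
    """Stack per-frame binary silhouettes (H x W) into an H x W x T space-time volume."""
    return np.stack(silhouettes, axis=-1)

def mean_silhouette_area(volume):
    """Hypothetical quality proxy: mean fraction of foreground pixels across the volume.
    The paper instead combines three visibility measures on the space-time shape."""
    return float(volume.mean())

# Toy example: silhouettes of the same action seen from two cameras,
# where view A exposes more of the actor's silhouette than view B.
rng = np.random.default_rng(0)
frames_a = [(rng.random((32, 32)) < 0.3).astype(np.uint8) for _ in range(10)]
frames_b = [(rng.random((32, 32)) < 0.1).astype(np.uint8) for _ in range(10)]

vol_a = space_time_volume(frames_a)  # shape (32, 32, 10)
vol_b = space_time_volume(frames_b)

# Select the camera whose view scores higher.
best = "A" if mean_silhouette_area(vol_a) > mean_silhouette_area(vol_b) else "B"
```

In this sketch, viewpoint selection reduces to ranking cameras by a scalar score of their space-time volumes; the paper's contribution lies in designing measures that actually reflect action visibility rather than raw silhouette size.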

Citation (APA)

Rudoy, D., & Zelnik-Manor, L. (2011). Posing to the camera: Automatic viewpoint selection for human actions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6495 LNCS, pp. 307–320). https://doi.org/10.1007/978-3-642-19282-1_25
