In this paper, we propose a novel method, within the field of automatic social context analysis, for identifying the mutual position between two persons in images. Based on the idea that combining information about head position, body visibility, and body contour shape can yield a good estimate of the mutual position of people, we construct a predictor that classifies the relative position of the two subjects. We advocate the use of superpixels as the basic unit of the human analysis framework, and we train a Support Vector Machine classifier on the feature vector extracted from each image. The results show that this combination of features yields a significantly low error rate with low variance on our database of 366 images. © 2012 Springer-Verlag.
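The classification stage described above can be sketched as follows. This is not the authors' implementation: the feature names (head offset, per-person visibility, contour overlap) are hypothetical placeholders for the head-position, body-visibility, and contour-shape descriptors mentioned in the abstract, and the data here is synthetic. It only illustrates training and cross-validating an SVM on per-image feature vectors, as the paper describes.

```python
# Minimal sketch of the SVM classification stage, assuming hypothetical
# per-image features; the real features in the paper are computed from
# superpixel-based human analysis.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in features, one row per image:
# [head_dx, head_dy, visibility_A, visibility_B, contour_overlap]
n = 366  # matches the size of the paper's image database
X = rng.normal(size=(n, 5))

# Synthetic binary labels: 0 = person A in front, 1 = person B in front.
# (A simple deterministic rule so the toy problem is learnable.)
y = (X[:, 0] + 0.5 * X[:, 4] > 0).astype(int)

# RBF-kernel SVM with 5-fold cross-validation over the image set.
clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

On real data one would replace `X` and `y` with the extracted feature vectors and ground-truth occlusion labels; the cross-validation step is how a low error rate with low variance across folds would be verified.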
CITATION STYLE
Borjas, V., Drozdzal, M., Radeva, P., & Vitrià, J. (2012). Human relative position detection based on mutual occlusion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7441 LNCS, pp. 332–339). https://doi.org/10.1007/978-3-642-33275-3_41