Learning spatiotemporal T-junctions for occlusion detection


Abstract

The goal of motion segmentation and layer extraction can be viewed as the detection and localization of occluding surfaces. A feature that has been shown to be a particularly strong indicator of occlusion, in both computer vision and neuroscience, is the T-junction; however, little progress has been made in T-junction detection. One reason for this is the difficulty of distinguishing false T-junctions (i.e., those not on an occluding edge) from real T-junctions in cluttered images. In addition, the photometric profile of a T-junction alone is not enough for reliable detection. This paper overcomes the first problem by searching for T-junctions not in space, but in space-time. This removes many false T-junctions and creates a simpler image structure to explore. The second problem is mitigated by learning the appearance of T-junctions in these spatiotemporal images. An RVM T-junction classifier is learnt from hand-labelled data, using SIFT to capture its redundancy. This detector is then demonstrated in a novel occlusion detector that fuses Canny edges and T-junctions in the spatiotemporal domain to detect occluding edges in the spatial domain. © 2005 IEEE.
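The core idea of searching in space-time rather than space can be illustrated with a toy example: stacking video frames into a volume and cutting a fixed-row x–t slice. In such a slice a static occluding edge traces a vertical line, a moving surface traces a diagonal trail, and the point where the trail terminates against the line is a spatiotemporal T-junction. The sketch below is purely illustrative (a synthetic scene, not the authors' code or data):

```python
import numpy as np

def make_video(T=32, H=24, W=64):
    """Synthetic scene: a static vertical edge plus a brighter bar
    that moves right and passes in front of it (occluding it)."""
    video = np.zeros((T, H, W), dtype=np.float32)
    video[:, :, 30:] = 0.5          # static intensity edge at x = 30
    for t in range(T):
        x = 5 + t * 2               # bar moves 2 px per frame
        video[t, :, x:x + 6] = 1.0  # occluder drawn on top
    return video

video = make_video()

# A spatiotemporal (x-t) slice at one image row: the static edge appears
# as a vertical line, the moving occluder as a diagonal trail.  Where the
# trail interrupts the line, the slice contains a T-junction -- the cue
# the learnt classifier responds to.
xt_slice = video[:, 12, :]           # shape (T, W)

# Frames in which the occluder covers the static edge at x = 30:
occluded = np.isclose(xt_slice[:, 30], 1.0)
print(xt_slice.shape, int(occluded.sum()))
```

In the real system these slices are scanned with an RVM classifier over SIFT descriptors rather than inspected directly, but the geometry is the same: occlusion events become junctions between an edge trail and the trail it terminates against.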

APA

Apostoloff, N., & Fitzgibbon, A. (2005). Learning spatiotemporal T-junctions for occlusion detection. In Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005 (Vol. II, pp. 553–559). IEEE Computer Society. https://doi.org/10.1109/CVPR.2005.206
