Learning spatiotemporal T-junctions for occlusion detection

  • Nicholas Apostoloff
  • Andrew Fitzgibbon

Abstract

The goal of motion segmentation and layer extraction can be viewed as the detection and localization of occluding surfaces. A feature that has been shown to be a particularly strong indicator of occlusion, in both computer vision and neuroscience, is the T-junction; however, little progress has been made in T-junction detection. One reason for this is the difficulty of distinguishing between false T-junctions (i.e. those not on an occluding edge) and real T-junctions in cluttered images. In addition, their photometric profile alone is not enough for reliable detection. This paper overcomes the first problem by searching for T-junctions not in space, but in space-time. This removes many false T-junctions and creates a simpler image structure to explore. The second problem is mitigated by learning the appearance of T-junctions in these spatiotemporal images. An RVM T-junction classifier is learnt from hand-labelled data using SIFT to capture its redundancy. This detector is then demonstrated in a novel occlusion detector that fuses Canny edges and T-junctions in the spatiotemporal domain to detect occluding edges in the spatial domain.
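The core idea of searching in space-time rather than space can be illustrated with a spatiotemporal (x-t) slice: fixing one image row and stacking it over time turns a moving occluding edge into a diagonal line that terminates static background structure, producing the T-junction pattern the paper detects. The following is a minimal NumPy sketch with synthetic frames; the scene, row choice, and all values are illustrative assumptions, not the paper's data or detector.

```python
import numpy as np

def xt_slice(frames, y):
    """Stack row y of every frame into an x-t (space-time) image:
    rows index time, columns index the spatial x coordinate."""
    return np.stack([f[y] for f in frames], axis=0)

# Synthetic illustration (not the paper's data): a bright foreground square
# moves right at 1 px/frame in front of a static background stripe. In the
# x-t slice, the static stripe is a vertical band and the moving occluding
# edge is a diagonal line that terminates it -- a spatiotemporal T-junction.
T, H, W = 20, 32, 64
frames = []
for t in range(T):
    img = np.zeros((H, W), dtype=np.float32)
    img[:, 30:34] = 0.5          # static background stripe (vertical in x-t)
    x0 = 5 + t                   # foreground position at frame t
    img[10:22, x0:x0 + 8] = 1.0  # occluding foreground square
    frames.append(img)

st = xt_slice(frames, y=16)      # shape (T, W): one x-t slice of the video
```

In later frames the diagonal foreground track overwrites the stripe's columns, so the stripe's vertical line ends where the diagonal crosses it; an edge detector such as Canny run on `st` would see the two edges meeting in the T-configuration described above.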
