Video Google: Efficient Visual Search of Videos

  • Josef Sivic
  • Andrew Zisserman

Abstract

We describe an approach to object retrieval which searches for and localizes all the occurrences of an object in a video, given a query image of the object. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject those that are unstable. Efficient retrieval is achieved by employing methods from statistical text retrieval, including inverted file systems, and text and document frequency weightings. This requires a visual analogy of a word which is provided here by vector quantizing the region descriptors. The final ranking also depends on the spatial layout of the regions. The result is that retrieval is immediate, returning a ranked list of shots in the manner of Google. We report results for object retrieval on the full length feature films ‘Groundhog Day’ and ‘Casablanca’.
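The abstract outlines a bag-of-visual-words pipeline: region descriptors are vector-quantized into "visual words", shots are indexed with tf-idf weights in an inverted file, and a query is answered by ranking shots on weighted word overlap. The following Python sketch illustrates that retrieval core under simplifying assumptions: the plain k-means vocabulary builder, the function names (build_vocabulary, quantize, index_shots, query), and all parameter values are illustrative rather than taken from the paper, and the paper's region tracking and spatial-layout re-ranking are omitted.

    import numpy as np
    from collections import defaultdict

    def build_vocabulary(descriptors, k=1000, iters=10, seed=0):
        # Vector-quantize region descriptors (rows of a float array) into
        # k "visual words" via plain k-means; a stand-in for the paper's
        # clustering step, not its exact procedure.
        rng = np.random.default_rng(seed)
        centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
        for _ in range(iters):
            dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = dists.argmin(1)
            for j in range(k):
                members = descriptors[labels == j]
                if len(members):
                    centers[j] = members.mean(0)
        return centers

    def quantize(descriptors, centers):
        # Map each descriptor to the index of its nearest cluster center.
        dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return dists.argmin(1)

    def index_shots(shot_words, k):
        # Build tf-idf weighted shot vectors plus an inverted file
        # (visual word -> shots containing it) for fast candidate lookup.
        n = len(shot_words)
        df = np.zeros(k)
        inverted = defaultdict(list)
        rows = []
        for i, words in enumerate(shot_words):
            counts = np.bincount(words, minlength=k).astype(float)
            rows.append(counts / max(counts.sum(), 1.0))   # term frequency
            for w in set(words.tolist()):
                df[w] += 1
                inverted[w].append(i)
        idf = np.log(n / np.maximum(df, 1.0))              # inverse document frequency
        vecs = np.stack(rows) * idf
        vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12
        return vecs, idf, inverted

    def query(query_words, vecs, idf, inverted, k):
        # Score only shots sharing at least one visual word with the query,
        # then rank them by cosine similarity of tf-idf vectors.
        counts = np.bincount(query_words, minlength=k).astype(float)
        q = (counts / max(counts.sum(), 1.0)) * idf
        q /= np.linalg.norm(q) + 1e-12
        candidates = {i for w in set(query_words.tolist()) for i in inverted.get(w, [])}
        return sorted(candidates, key=lambda i: float(vecs[i] @ q), reverse=True)

Because the inverted file restricts scoring to shots that share at least one visual word with the query, ranking stays fast even over a feature-length film; the paper additionally re-ranks top results by checking the spatial consistency of the matched regions, which this sketch leaves out.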

Citation (APA)

Sivic, J., & Zisserman, A. (2006). Video Google: Efficient Visual Search of Videos (pp. 127–144). https://doi.org/10.1007/11957959_7
