Translating Images to Words for Recognizing Objects in Large Image and Video Collections

  • Duygulu P
  • Baştan M
  • Forsyth D

Abstract

We present a new approach to the object recognition problem, motivated by the recent availability of large annotated image and video collections. This approach treats object recognition as the translation of visual elements to words, analogous to translating text from one language to another. The visual elements, represented in feature space, are categorized into a finite set of blobs. The correspondences between blobs and words are learned using a method adapted from Statistical Machine Translation. Once learned, these correspondences can be used to predict words for particular image regions (region naming), to predict words for entire images (auto-annotation), or to associate speech transcript text with the correct video frames (video alignment). We present results on the Corel data set, which consists of annotated images, and on the TRECVID 2004 data set, which consists of video frames associated with speech transcript text and manual annotations.
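The correspondence learning sketched in the abstract follows the style of IBM translation models from Statistical Machine Translation, where each annotated image plays the role of an aligned sentence pair (blobs on one side, words on the other). A minimal illustration of Model-1-style EM training under that view — the data format and function names below are illustrative assumptions, not the authors' actual code:

```python
from collections import defaultdict

def train_translation_probs(pairs, n_iter=10):
    """Learn P(word | blob) with IBM-Model-1-style EM.

    pairs: list of (blobs, words) tuples, one per annotated image —
    an illustrative stand-in for the blob/keyword pairs in the paper.
    """
    vocab = {w for _, ws in pairs for w in ws}
    # Uniform initialization: t[w][b] = P(word w | blob b)
    t = defaultdict(lambda: defaultdict(lambda: 1.0 / len(vocab)))
    for _ in range(n_iter):
        count = defaultdict(lambda: defaultdict(float))
        total = defaultdict(float)
        # E-step: fractionally assign each word to the blobs in its image
        for blobs, ws in pairs:
            for w in ws:
                z = sum(t[w][b] for b in blobs)  # normalize over this image's blobs
                for b in blobs:
                    c = t[w][b] / z
                    count[w][b] += c
                    total[b] += c
        # M-step: re-estimate the translation table
        for w in count:
            for b in count[w]:
                t[w][b] = count[w][b] / total[b]
    return t

def name_region(t, blob, vocab):
    """Region naming: predict the most probable word for one blob."""
    return max(vocab, key=lambda w: t[w][blob])
```

Blobs co-occurring consistently with a word (e.g. a "sky" region appearing in many images annotated with "sky") accumulate probability mass for that word across EM iterations, which is what makes region naming and auto-annotation possible once the table is trained.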

Citation (APA)

Duygulu, P., Baştan, M., & Forsyth, D. (2006). Translating Images to Words for Recognizing Objects in Large Image and Video Collections (pp. 258–276). https://doi.org/10.1007/11957959_14
