Generalized relevance models for automatic image annotation

Abstract

This paper presents a generalized relevance model for automatic image annotation that learns the correlations between images and annotation keywords. Unlike previous relevance models, the proposed model can propagate keywords not only from the training images to the test images but also among the test images themselves. We further give a convergence analysis of the iterative algorithm derived from the proposed model. Moreover, our spatial Markov kernel is used to define the inter-image relations for estimating the joint probability of observing an image with possible annotation keywords. This kernel was originally designed for image classification, and here we apply it to image annotation. The main advantage of the spatial Markov kernel is that it captures the intra-image context through 2D Markov models, unlike traditional bag-of-words methods. Experiments on two standard image databases demonstrate that the proposed model outperforms state-of-the-art annotation models. © 2009 Springer-Verlag Berlin Heidelberg.
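To make the propagation idea concrete, the following is a minimal sketch (not the paper's exact formulation) of keyword propagation that mixes two sources of evidence: scores transferred from training images via a test–train similarity kernel, and scores iteratively diffused among the test images via a test–test kernel. All function and parameter names here (`propagate_keywords`, `alpha`, `n_iters`) are hypothetical; the kernels would in practice come from something like the paper's spatial Markov kernel.

```python
import numpy as np

def propagate_keywords(K_test_train, K_test_test, Y_train,
                       alpha=0.5, n_iters=100):
    """Illustrative keyword propagation (a simplification, not the paper's model).

    K_test_train : (m, n) similarities between m test and n training images
    K_test_test  : (m, m) similarities among the test images
    Y_train      : (n, k) binary keyword indicators for the training images
    Returns an (m, k) keyword score matrix for the test images.
    """
    def row_norm(K):
        # Row-normalize so each row is a probability distribution.
        s = K.sum(axis=1, keepdims=True)
        return K / np.where(s == 0, 1, s)

    P_ts = row_norm(K_test_train)   # training -> test transfer
    P_ss = row_norm(K_test_test)    # test <-> test diffusion

    base = P_ts @ Y_train           # initial scores from the training set
    F = base.copy()
    for _ in range(n_iters):
        # Mix diffusion among test images with the training-set evidence.
        # Converges because P_ss is row-stochastic and 0 <= alpha < 1.
        F = alpha * (P_ss @ F) + (1 - alpha) * base
    return F
```

The fixed point satisfies F = alpha * P_ss @ F + (1 - alpha) * base, i.e. a contraction for alpha < 1, which mirrors the kind of convergence guarantee the paper analyzes for its iterative algorithm.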

Citation
APA

Lu, Z., & Ip, H. H. S. (2009). Generalized relevance models for automatic image annotation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5879 LNCS, pp. 245–255). https://doi.org/10.1007/978-3-642-10467-1_21
