A flexible framework for the evaluation of unsupervised image annotation


Abstract

Automatic Image Annotation (AIA) consists of assigning keywords to images that describe their visual content. The prevalent way to address the AIA task is supervised learning. However, the unsupervised approach is a new alternative that makes sense when there are no manually labeled images available to train supervised techniques. AIA methods are typically evaluated with supervised learning performance measures; applying this kind of measure to unsupervised methods, however, is difficult and unfair. The main restriction is that unsupervised methods use an unrestricted annotation vocabulary, whereas supervised methods use a restricted one. To alleviate this unfair evaluation, in this paper we propose a flexible evaluation framework that allows us to compare the coverage and relevance of the words assigned by unsupervised automatic image annotation (UAIA) methods. We show the robustness of our framework through a set of experiments in which we evaluated the output of both unsupervised and supervised methods.
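The abstract contrasts two quantities: coverage (how much of an image's reference vocabulary an annotator hits) and relevance (how much of what it assigns is on target). The sketch below is a hypothetical, exact-match stand-in for these two quantities, not the framework proposed in the paper; the function names and the plain set-overlap definitions are assumptions made only to illustrate the kind of comparison described.

# Hypothetical sketch of coverage- and relevance-style scores for image
# annotations. This is NOT the paper's framework; it only illustrates the
# comparison of predicted keywords against reference keywords using plain
# set overlap as a stand-in measure.

def coverage(predicted: set, reference: set) -> float:
    """Fraction of reference keywords that the method managed to cover."""
    if not reference:
        return 0.0
    return len(predicted & reference) / len(reference)

def relevance(predicted: set, reference: set) -> float:
    """Fraction of predicted keywords that appear in the reference set."""
    if not predicted:
        return 0.0
    return len(predicted & reference) / len(predicted)

if __name__ == "__main__":
    # Unsupervised methods may draw from an unrestricted vocabulary, so
    # exact matching penalizes near-synonyms ("sea" vs. "ocean") -- the kind
    # of mismatch a more flexible evaluation aims to handle.
    reference = {"beach", "sea", "sand", "people"}
    predicted = {"ocean", "sand", "sky", "people"}
    print("coverage:  %.2f" % coverage(predicted, reference))   # 0.50
    print("relevance: %.2f" % relevance(predicted, reference))  # 0.50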

Citation (APA)

Pellegrin, L., Escalante, H. J., Montes-y-Gómez, M., Villegas, M., & González, F. A. (2018). A flexible framework for the evaluation of unsupervised image annotation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10657 LNCS, pp. 508–516). Springer Verlag. https://doi.org/10.1007/978-3-319-75193-1_61
