Automatic image annotation using a visual dictionary based on reliable image segmentation

Abstract

Recent approaches in Automatic Image Annotation (AIA) try to combine the expressiveness of natural language queries with approaches that minimize the manual effort of image annotation. The main idea is to infer the annotations of unseen images from a small set of manually annotated training examples. However, these approaches typically suffer from low correlation between the globally assigned annotations and the local features used to obtain annotations automatically. In this paper we propose a framework that supports image annotation based on a visual dictionary created automatically from a set of locally annotated training images. We designed a segmentation and annotation interface to allow for easy annotation of the training data. To provide a framework that is easily extensible and reusable, we make broad use of the MPEG-7 standard. © 2008 Springer-Verlag Berlin Heidelberg.
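
The core idea summarized above, annotating segments of unseen images by looking them up in a dictionary built from locally annotated training segments, can be illustrated with a small sketch. The following Python example is not the authors' implementation: the feature vectors, the nearest-neighbour lookup, and the example labels are simplified assumptions standing in for the MPEG-7 descriptors and the segmentation step described in the paper.

```python
import numpy as np

# Hypothetical visual dictionary: each entry pairs a segment-level feature
# vector (here a toy 3-bin colour histogram) with the label assigned during
# local annotation of the training images.
dictionary_features = np.array([
    [0.8, 0.1, 0.1],   # segment annotated as "sky"
    [0.1, 0.7, 0.2],   # segment annotated as "grass"
    [0.2, 0.2, 0.6],   # segment annotated as "water"
])
dictionary_labels = ["sky", "grass", "water"]

def annotate_segment(segment_feature, features, labels):
    """Return the label of the closest dictionary entry (1-NN lookup)."""
    distances = np.linalg.norm(features - segment_feature, axis=1)
    return labels[int(np.argmin(distances))]

# Annotate one segment of an unseen image from its (assumed) feature vector.
unseen_segment = np.array([0.15, 0.65, 0.20])
print(annotate_segment(unseen_segment, dictionary_features, dictionary_labels))
# -> "grass"
```

In this sketch the dictionary is just an array of labeled feature vectors and annotation is a single nearest-neighbour query; the paper's framework additionally relies on reliable image segmentation to produce the segments and on MPEG-7 to represent their features and annotations.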

Citation (APA)

Hentschel, C., Stober, S., Nürnberger, A., & Detyniecki, M. (2008). Automatic image annotation using a visual dictionary based on reliable image segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4918 LNCS, pp. 45–56). https://doi.org/10.1007/978-3-540-79860-6_4
