Incorporating prior knowledge into multi-label boosting for cross-modal image annotation and retrieval

5 citations · 3 readers (Mendeley)

Abstract

Automatic image annotation (AIA) has proven to be an effective and promising way to deduce high-level semantics from low-level visual features. In this paper, we formulate image annotation as a multi-label, multi-class semantic image classification problem and propose a simple yet effective joint classification framework in which probabilistic multi-label boosting and contextual semantic constraints are integrated seamlessly. We conducted experiments on a medium-sized collection of about 5,000 images from the Corel Stock Photo CDs. The results show that the annotation performance of the proposed method is comparable to that of state-of-the-art approaches, demonstrating the effectiveness and feasibility of the unified framework. © Springer-Verlag Berlin Heidelberg 2006.
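The abstract describes a two-stage idea: per-keyword boosted classifiers produce label probabilities from visual features, and contextual prior knowledge (relations among keywords) constrains or re-scores those probabilities. The paper itself does not publish code, so the sketch below is only a hypothetical illustration of that pipeline; the choice of AdaBoost, the co-occurrence prior, the blending weight alpha, and the simple re-scoring rule are all assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch: (1) multi-label boosting gives independent per-keyword
# probabilities, (2) a keyword co-occurrence prior estimated from the training
# annotations re-scores them before the top-k keywords are selected.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)

# Toy data: 200 "images" with 32-dim visual features and 6 candidate keywords.
n_images, n_features, n_labels = 200, 32, 6
X = rng.normal(size=(n_images, n_features))
Y = (rng.random((n_images, n_labels)) < 0.3).astype(int)  # multi-label ground truth

# Stage 1: multi-label boosting, one boosted classifier per keyword.
boosted = OneVsRestClassifier(AdaBoostClassifier(n_estimators=50, random_state=0))
boosted.fit(X, Y)
P = boosted.predict_proba(X)  # (n_images, n_labels) independent label scores

# Stage 2: prior knowledge as a row-normalised keyword co-occurrence matrix.
cooc = (Y.T @ Y).astype(float)
cooc /= cooc.sum(axis=1, keepdims=True) + 1e-9

# Re-score: blend each keyword's score with support from co-occurring keywords.
alpha = 0.7
P_context = alpha * P + (1 - alpha) * (P @ cooc)

# Annotate each image with its top-3 keywords after contextual re-scoring.
top_k = np.argsort(-P_context, axis=1)[:, :3]
print("keyword indices for image 0:", top_k[0])
```

In this reading, the co-occurrence matrix plays the role of the "contextual semantic constraints" from the abstract, boosting keywords that tend to appear together in the training annotations and suppressing isolated, low-support predictions.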

Cite (APA)

Li, W., & Sun, M. (2006). Incorporating prior knowledge into multi-label boosting for cross-modal image annotation and retrieval. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4182 LNCS, pp. 404–415). Springer Verlag. https://doi.org/10.1007/11880592_31
