Learning and integrating semantics for image indexing

Abstract

In this paper, we propose learning and integration frameworks that extract and combine local and global semantics for image indexing and retrieval. In the supervised learning version, support vector detectors are trained on semantic support regions without image segmentation. The reconciled and aggregated detection-based indexes then serve as input for support vector learning of image classifiers to generate class-relative image indexes. In the unsupervised learning approach, image classifiers are first trained on local image blocks from a small number of labeled images. Local semantic patterns are then discovered by clustering the image blocks with high classification outputs. Training samples are induced from cluster memberships for support vector learning to form local semantic pattern detectors. During retrieval, similarities based on both local and global indexes are combined to rank images. Query-by-example experiments on 2400 unconstrained consumer photos with 16 semantic queries show that the proposed approaches significantly outperformed the fusion of color and texture features, improving average precision by 55% and 37% respectively. © Springer-Verlag Berlin Heidelberg 2004.
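To make the described pipeline concrete, the following is a minimal, hypothetical sketch of the supervised variant's flow: per-concept SVM detectors scored on local image blocks, their outputs aggregated into a detection-based index, and retrieval that mixes local and global similarities. The synthetic features, scikit-learn SVC detectors, mean-score aggregation, cosine similarity, and the mixing weight alpha are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of detection-based indexing and local/global retrieval, using
# synthetic data. Shapes, tiling, and aggregation are illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

N_IMAGES, BLOCKS_PER_IMAGE, FEAT_DIM = 40, 16, 32  # assumed sizes
N_DETECTORS = 5                                    # local semantic concepts

# 1) Train one SVM detector per local semantic concept on labeled blocks
#    (synthetic block features and labels stand in for real training data).
detectors = []
for _ in range(N_DETECTORS):
    X = rng.normal(size=(200, FEAT_DIM))
    y = rng.integers(0, 2, size=200)
    detectors.append(SVC(probability=True).fit(X, y))

# 2) Index each image: score its blocks with every detector and aggregate
#    the detection probabilities (mean per detector) into an index vector.
blocks = rng.normal(size=(N_IMAGES, BLOCKS_PER_IMAGE, FEAT_DIM))
local_index = np.stack([
    np.stack([det.predict_proba(img_blocks)[:, 1].mean() for det in detectors])
    for img_blocks in blocks
])                                                 # (N_IMAGES, N_DETECTORS)

# 3) A global index would come from class-relative image classifiers trained
#    on the local indexes; a coarse pooled feature stands in for it here.
global_index = blocks.mean(axis=1)                 # (N_IMAGES, FEAT_DIM)

# 4) Retrieval: combine local and global similarities to rank images
#    against a query image (index 0), with an assumed weight alpha.
alpha, query = 0.5, 0
sim = (alpha * cosine_similarity(local_index[query:query + 1], local_index)
       + (1 - alpha) * cosine_similarity(global_index[query:query + 1],
                                         global_index))[0]
ranking = np.argsort(-sim)
print("Top-5 retrieved image ids:", ranking[:5])
```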

Citation (APA)

Lim, J. H., & Jin, J. S. (2004). Learning and integrating semantics for image indexing. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3157, pp. 823–832). Springer Verlag. https://doi.org/10.1007/978-3-540-28633-2_87
