Generating visual concept network from large-scale weakly-tagged images

Abstract

As large-scale collections of online images become available, a visual concept network is an attractive tool for image summarization, organization, and exploration. In this paper, we develop an automatic algorithm for visual concept network generation by determining the diverse visual similarity contexts between image concepts. To learn more reliable inter-concept visual similarity contexts, images with diverse visual properties are crawled from multiple sources, and multiple kernels are combined to characterize the diverse visual similarity contexts between the images and to handle the sparse image distribution in the high-dimensional multi-modal feature space more effectively. Kernel canonical correlation analysis (KCCA) is used to characterize the diverse inter-concept visual similarity contexts more accurately, so that our visual concept network coheres better with human perception. A similarity-preserving visual concept network visualization technique is developed to help users assess the coherence between their perceptions and the inter-concept visual similarity contexts determined by our algorithm. Experiments on large-scale image collections demonstrate the effectiveness of our algorithm. © 2010 Springer-Verlag Berlin Heidelberg.
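
The abstract names two computational ingredients: a multiple-kernel combination over multi-modal image features and KCCA to score inter-concept visual similarity. The paper's exact formulation is not reproduced on this page, so the following is a minimal Python/NumPy sketch of standard regularized KCCA (in the style of Hardoon et al.) applied to two combined-kernel views. All function names, the weighted-sum kernel combination, the toy features, and the regularization constant are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def center_kernel(K):
    """Center a kernel matrix in feature space: H @ K @ H with H = I - 11'/n."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def combine_kernels(kernels, weights=None):
    """Weighted sum of base kernel matrices (a simple multiple-kernel combination)."""
    if weights is None:
        weights = np.ones(len(kernels)) / len(kernels)  # uniform weights by default
    return sum(w * K for w, K in zip(weights, kernels))

def kcca_similarity(K1, K2, reg=1e-3):
    """Leading canonical correlation between two kernel views (regularized KCCA).

    Solves the simplified eigenproblem
        (K1 + reg*n*I)^(-1) K2 (K2 + reg*n*I)^(-1) K1 a = rho^2 a
    and returns rho, the top canonical correlation, as a similarity score in [0, 1].
    """
    n = K1.shape[0]
    K1c, K2c = center_kernel(K1), center_kernel(K2)
    R1 = np.linalg.inv(K1c + reg * n * np.eye(n))
    R2 = np.linalg.inv(K2c + reg * n * np.eye(n))
    M = R1 @ K2c @ R2 @ K1c
    rho2 = np.max(np.linalg.eigvals(M).real)
    return float(np.sqrt(np.clip(rho2, 0.0, 1.0)))

def rbf(X, gamma):
    """RBF (Gaussian) base kernel over row-wise feature vectors."""
    sq = np.sum(X**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * D)

# Hypothetical usage: two modalities (say, color and texture features) of the
# same 50 images, each represented by a combination of two RBF bandwidths.
rng = np.random.default_rng(0)
X_color = rng.normal(size=(50, 16))    # placeholder color features
X_texture = rng.normal(size=(50, 32))  # placeholder texture features

K_view1 = combine_kernels([rbf(X_color, 0.1), rbf(X_color, 1.0)])
K_view2 = combine_kernels([rbf(X_texture, 0.1), rbf(X_texture, 1.0)])
print("KCCA similarity between views:", kcca_similarity(K_view1, K_view2))
```

In the paper, KCCA characterizes similarity contexts between image concepts; the toy above correlates two modalities of a single image set only because the mechanics are identical given any two centered kernel views over a shared sample.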

Cite

CITATION STYLE: APA

Yang, C., Luo, H., & Fan, J. (2009). Generating visual concept network from large-scale weakly-tagged images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5916 LNCS, pp. 251–261). https://doi.org/10.1007/978-3-642-11301-7_27
