Cross-modal image clustering via canonical correlation analysis

Abstract

This paper develops a new algorithm based on Canonical Correlation Analysis (CCA) to support more effective cross-modal image clustering for large-scale annotated image collections. Cross-modal clustering is treated as a bi-media multimodal mapping problem and modeled as a correlation distribution over multimodal feature representations. The algorithm integrates multimodal feature generation using Locality Linear Coding (LLC) and a co-occurrence association network, multimodal feature fusion with CCA, and accelerated hierarchical k-means clustering. Together, these steps characterize the correlations between inter-related visual features in images and semantic features in captions, and measure their degree of association more precisely. Experiments on a large quantity of public data yielded very positive results.
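The sketch below illustrates, in broad strokes, the fusion-then-cluster idea described in the abstract: project the visual and textual feature spaces into a shared correlated subspace with CCA, then cluster the fused representation. It is a minimal sketch using scikit-learn's CCA and KMeans as stand-ins, with randomly generated placeholder features standing in for the LLC-coded image descriptors and caption co-occurrence features; it is not the authors' implementation.

```python
# Minimal sketch of CCA-based cross-modal fusion followed by clustering.
# Assumptions: X holds visual features (e.g., LLC-coded descriptors) and
# Y holds textual features (e.g., caption co-occurrence vectors); here both
# are random placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_samples, d_visual, d_text = 500, 128, 64

# Placeholder multimodal features for the same 500 annotated images.
X = rng.standard_normal((n_samples, d_visual))
Y = rng.standard_normal((n_samples, d_text))

# Learn maximally correlated projections of the two modalities (CCA fusion).
cca = CCA(n_components=20)
X_c, Y_c = cca.fit_transform(X, Y)

# Fuse the projected views (here by simple concatenation) and cluster.
# The paper uses accelerated hierarchical k-means; plain k-means is used
# here only to keep the example short.
fused = np.hstack([X_c, Y_c])
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(fused)
print(labels[:20])
```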

Citation (APA)

Jin, C., Mao, W., Zhang, R., Zhang, Y., & Xue, X. (2015). Cross-modal image clustering via canonical correlation analysis. In Proceedings of the National Conference on Artificial Intelligence (Vol. 1, pp. 151–159). AI Access Foundation. https://doi.org/10.1609/aaai.v29i1.9181
