Information fusion via multimodal hashing with discriminant canonical correlation maximization

Abstract

In this paper, we introduce an effective information fusion method based on multimodal hashing with discriminant canonical correlation maximization. As an efficient means of computing similarity between different inputs, multimodal hashing has attracted increasing attention for fast similarity search. The proposed approach not only preserves the semantic similarity across different modalities through multimodal hashing, but is also capable of extracting discriminant representations that minimize the between-class correlation and maximize the within-class correlation simultaneously for information fusion. Benefiting from the combination of cross-modal semantic similarity and the discriminant representation strategy, the proposed algorithm achieves improved performance. A prototype of the proposed method is implemented to demonstrate its performance on audio emotion recognition and cross-modal (text-image) fusion. Experimental results show that the proposed approach outperforms related methods in terms of accuracy.
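
The abstract does not spell out the objective or the hashing step; as a rough, illustrative sketch only (not the authors' implementation), a discriminative-CCA-style projection followed by sign binarization and Hamming-distance retrieval could look like the NumPy code below. The eta trade-off weight, the median-threshold binarization, and all function names are assumptions introduced here for illustration.

    import numpy as np


    def class_pair_cross_covariances(X, Y, labels):
        """Within-class (Cw) and between-class (Cb) pairwise cross-covariance matrices.

        Cw sums x_i y_j^T over sample pairs that share a label; Cb sums over pairs
        with different labels (for centered data Cb is numerically close to -Cw).
        """
        Cw = np.zeros((X.shape[1], Y.shape[1]))
        for c in np.unique(labels):
            idx = labels == c
            Cw += np.outer(X[idx].sum(axis=0), Y[idx].sum(axis=0))
        Cb = np.outer(X.sum(axis=0), Y.sum(axis=0)) - Cw
        return Cw, Cb


    def discriminant_cca(X, Y, labels, dim, eta=1.0, reg=1e-3):
        """Projection pairs that favor within-class correlation over between-class
        correlation: whiten each modality, then keep the leading singular
        directions of the discriminant cross-covariance Cw - eta * Cb."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        Cw, Cb = class_pair_cross_covariances(X, Y, labels)
        Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
        Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
        Lx = np.linalg.cholesky(Cxx)
        Ly = np.linalg.cholesky(Cyy)
        M = np.linalg.solve(Lx, Cw - eta * Cb) @ np.linalg.inv(Ly).T
        U, _, Vt = np.linalg.svd(M)
        Wx = np.linalg.solve(Lx.T, U[:, :dim])   # maps modality-X features to the shared space
        Wy = np.linalg.solve(Ly.T, Vt[:dim].T)   # maps modality-Y features to the shared space
        return Wx, Wy


    def hash_codes(Z):
        """Binarize projected features into compact codes by thresholding at the median."""
        return (Z > np.median(Z, axis=0)).astype(np.uint8)


    def hamming_distances(query_code, database_codes):
        """Fast similarity search: Hamming distance from one code to all database codes."""
        return np.count_nonzero(query_code != database_codes, axis=1)


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        labels = rng.integers(0, 4, size=200)
        X = rng.normal(size=(200, 64)) + labels[:, None]   # toy features for modality 1
        Y = rng.normal(size=(200, 32)) + labels[:, None]   # toy features for modality 2
        Wx, Wy = discriminant_cca(X, Y, labels, dim=16)
        codes_x = hash_codes((X - X.mean(axis=0)) @ Wx)
        codes_y = hash_codes((Y - Y.mean(axis=0)) @ Wy)
        d = hamming_distances(codes_x[0], codes_y)          # cross-modal retrieval for sample 0
        print("nearest cross-modal neighbours:", np.argsort(d)[:5])

The toy example at the bottom retrieves the cross-modal neighbours of one sample by Hamming distance over the binary codes, mirroring the fast similarity search setting described above; the actual formulation and hashing scheme in the paper may differ.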

Citation (APA)

Gao, L., & Guan, L. (2019). Information fusion via multimodal hashing with discriminant canonical correlation maximization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11663 LNCS, pp. 81–93). Springer Verlag. https://doi.org/10.1007/978-3-030-27272-2_7
