Heterogeneous features integration via semi-supervised multi-modal deep networks


Abstract

Multi-modal features are widely used to represent objects or events in pattern recognition and vision understanding. How to effectively integrate these heterogeneous features into a unified low-dimensional feature space has become a crucial issue in machine learning. In this work, we propose a novel approach that integrates heterogeneous features via an elaborate Semi-supervised Multi-Modal Deep Network (SMMDN). The proposed model first transforms the original data into high-level abstract homogeneous features. These homogeneous features are then integrated into a new feature vector. In this way, our model obtains abstract fused representations with lower dimensionality and stronger discriminative ability. A series of experiments is conducted on two object recognition datasets. Results show that our approach integrates heterogeneous features effectively and achieves better performance than competing methods.
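The two-stage idea in the abstract (per-modality encoding into a homogeneous space, then integration into one fused vector) can be illustrated with a minimal sketch. This is not the paper's SMMDN: the encoder weights below are random stand-ins rather than semi-supervised trained deep layers, and the dimensionalities (500-d visual, 200-d text, 32-d homogeneous space) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w, b):
    """One-layer encoder: map a modality into the shared homogeneous space."""
    return np.maximum(0.0, x @ w + b)  # ReLU non-linearity

# Two heterogeneous modalities with different raw dimensionalities (assumed).
x_visual = rng.normal(size=500)  # e.g. a 500-d visual descriptor
x_text = rng.normal(size=200)    # e.g. a 200-d text descriptor

hidden = 32  # dimensionality of the homogeneous space (assumed)
w_v, b_v = rng.normal(size=(500, hidden)), np.zeros(hidden)
w_t, b_t = rng.normal(size=(200, hidden)), np.zeros(hidden)

# Stage 1: transform each modality into high-level homogeneous features.
h_visual = encode(x_visual, w_v, b_v)
h_text = encode(x_text, w_t, b_t)

# Stage 2: integrate the homogeneous features into one fused vector,
# lower-dimensional than the 700-d concatenated raw input.
fused = np.concatenate([h_visual, h_text])
print(fused.shape)  # (64,)
```

In the actual model the encoders are deep networks trained with both labeled and unlabeled data, so the fused representation is discriminative rather than random; the sketch only shows the data flow.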

Citation (APA)

Zhao, L., Hu, Q., & Zhou, Y. (2015). Heterogeneous features integration via semi-supervised multi-modal deep networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9492, pp. 11–19). Springer Verlag. https://doi.org/10.1007/978-3-319-26561-2_2
