Feature Extraction by Using Dual-Generalized Discriminative Common Vectors


Abstract

In this paper, a dual online subspace-based learning method called dual-generalized discriminative common vectors (Dual-GDCV) is presented. The method extends incremental GDCV by simultaneously exploiting both incremental and decremental learning for supervised feature extraction and classification. Our methodology can update the feature representation space without recalculating the full projection or accessing previously processed training data. It allows both adding information and removing unnecessary data from a knowledge base efficiently, while retaining previously acquired knowledge. The proposed method has been theoretically proven and empirically validated on six standard face recognition and classification datasets, under two scenarios: (1) removing and adding samples of existing classes, and (2) removing and adding new classes to a classification problem. Results show a considerable computational gain without compromising the accuracy of the model, in comparison with both batch methodologies and other state-of-the-art adaptive methods.
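The core idea of the abstract — updating a model when samples are added (incremental) or removed (decremental) without revisiting previously processed data — can be illustrated with a much simpler statistic than the GDCV subspace itself. The sketch below maintains a running class mean under both operations; the class name and structure are purely illustrative and are not the authors' Dual-GDCV update rule.

```python
class IncDecMean:
    """Running mean supporting incremental (add) and decremental (remove)
    updates without storing past samples. Illustrative only -- not the
    actual Dual-GDCV subspace update."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def add(self, x):
        # Incremental update: shift the mean toward the new sample.
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def remove(self, x):
        # Decremental update: undo one sample's contribution.
        assert self.n > 1, "cannot remove from fewer than two samples"
        self.mean = (self.n * self.mean - x) / (self.n - 1)
        self.n -= 1


m = IncDecMean()
for x in [2.0, 4.0, 6.0, 8.0]:
    m.add(x)          # mean over all four samples is 5.0
m.remove(2.0)         # decremental step: mean of [4, 6, 8]
print(m.mean)         # 6.0
```

Dual-GDCV applies the same principle to the discriminative common vector subspace, so both enrolling new samples or classes and forgetting obsolete ones cost far less than a full batch recomputation.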

Citation (APA)

Diaz-Chito, K., Martínez del Rincón, J., Rusiñol, M., & Hernández-Sabaté, A. (2019). Feature Extraction by Using Dual-Generalized Discriminative Common Vectors. Journal of Mathematical Imaging and Vision, 61(3), 331–351. https://doi.org/10.1007/s10851-018-0837-6
