From shared subspaces to shared landmarks: A robust multi-source classification approach

14 citations · 45 Mendeley readers
Abstract

Training machine learning algorithms on data aggregated from different related sources is a challenging task. This problem arises in several applications, such as the Internet of Things (IoT), where data may be collected from devices with different settings. A model learned on such datasets can generalize poorly due to distribution bias. In this paper, we consider the problem of classifying unseen datasets, given several labeled training samples drawn from similar distributions. We exploit the intrinsic structure of samples in a latent subspace and identify landmarks: a subset of training instances from different sources that should be similar. Combining subspace learning with landmark selection enhances generalization by alleviating the impact of noise and outliers, and improves efficiency by reducing the size of the data. However, since addressing the two issues simultaneously results in an intractable problem, we relax the objective function by leveraging the theory of nonlinear projection and solve a tractable convex optimisation. Through comprehensive analysis, we show that our proposed approach outperforms state-of-the-art results on several benchmark datasets, while keeping the computational complexity low.
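To make the two ingredients of the abstract concrete, here is a minimal toy sketch, not the paper's actual algorithm: it projects samples from two related sources into a shared linear subspace (plain PCA standing in for the paper's learned subspace) and then keeps, as "landmarks", the instances of one source whose nearest cross-source neighbour in that subspace is close. All names, the radius rule, and the use of PCA are illustrative assumptions.

```python
import numpy as np

# Toy sketch (NOT the paper's method): PCA stands in for the learned shared
# subspace, and a nearest-cross-source-neighbour rule stands in for the
# paper's landmark selection.
rng = np.random.default_rng(0)

# Two related sources: same distribution up to a small shift (distribution bias).
X1 = rng.normal(0.0, 1.0, size=(100, 5))
X2 = rng.normal(0.3, 1.0, size=(100, 5))

# Shared subspace: top-2 principal directions of the pooled, centred data.
X = np.vstack([X1, X2])
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:2].T                        # (5, 2) projection matrix
Z1, Z2 = X1 @ W, X2 @ W             # sources projected into the subspace

# Landmarks: source-1 instances whose nearest source-2 neighbour is close,
# i.e. instances that look similar across sources (outliers are filtered out).
d = np.linalg.norm(Z1[:, None, :] - Z2[None, :, :], axis=-1)  # (100, 100)
nearest = d.min(axis=1)
landmarks = np.where(nearest < np.median(nearest))[0]

print(len(landmarks))
```

A downstream classifier would then be trained only on the landmark subset, which is how landmark selection also reduces the size of the data.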

Citation (APA)
Erfani, S. M., Baktashmotlagh, M., Moshtaghi, M., Nguyen, V., Leckie, C., Bailey, J., & Ramamohanarao, K. (2017). From shared subspaces to shared landmarks: A robust multi-source classification approach. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 1854–1860). AAAI press. https://doi.org/10.1609/aaai.v31i1.10870
