Laplacian Welsch Regularization for Robust Semi-supervised Dictionary Learning

Abstract

Semi-supervised dictionary learning aims to find a suitable dictionary from limited labeled examples and massive unlabeled examples, so that any input can be sparsely reconstructed from the dictionary atoms. However, existing algorithms suffer from large reconstruction error in the presence of outliers. To enhance their robustness, this paper introduces an upper-bounded, smooth, and nonconvex Welsch loss, which bounds the adverse effect of outliers. In addition, we adopt a Laplacian regularizer that enforces similar examples to share similar reconstruction coefficients. By combining the Laplacian regularizer and the Welsch loss into a unified framework, we propose a novel semi-supervised dictionary learning algorithm termed “Laplacian Welsch Regularization” (LWR). To handle the non-convexity introduced by the Welsch loss, we adopt the Half-Quadratic (HQ) optimization algorithm, which solves the model efficiently. Experimental results on various real-world datasets show that LWR is robust to outliers and achieves top-level results compared with existing algorithms.
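To illustrate why a bounded loss plus HQ optimization confers robustness, below is a minimal NumPy sketch assuming the standard Welsch definition phi(r) = (sigma^2/2)(1 - exp(-r^2/sigma^2)) and the multiplicative half-quadratic scheme, in which fixing the auxiliary weights reduces each step to a convex weighted least-squares problem. The dictionary D, signal x, bandwidth sigma, and loop below are illustrative stand-ins, not the paper's actual model (which additionally carries the Laplacian regularizer and sparsity terms).

    import numpy as np

    def welsch_loss(residual, sigma=1.0):
        # Welsch loss: ~0.5 * r^2 near zero, saturates at sigma^2 / 2 for
        # large |r|, so gross outliers cannot dominate the objective.
        return (sigma ** 2 / 2.0) * (1.0 - np.exp(-residual ** 2 / sigma ** 2))

    def hq_weights(residual, sigma=1.0):
        # Half-quadratic auxiliary weights w = exp(-r^2 / sigma^2):
        # with w fixed, minimizing sum(w * r^2) is weighted least squares.
        return np.exp(-residual ** 2 / sigma ** 2)

    # Toy iteratively reweighted fit of coefficients a with D @ a ~ x.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((50, 10))          # stand-in "dictionary"
    x = D @ rng.standard_normal(10)
    x[:5] += 20.0                              # inject gross outliers

    a = np.linalg.lstsq(D, x, rcond=None)[0]   # ordinary LS initialization
    for _ in range(20):
        w = hq_weights(x - D @ a, sigma=2.0)   # HQ step 1: update weights
        Dw = D * w[:, None]                    # HQ step 2: weighted LS
        a = np.linalg.solve(D.T @ Dw + 1e-8 * np.eye(10), Dw.T @ x)

    print("final Welsch loss:", welsch_loss(x - D @ a, sigma=2.0).sum())

The outlying entries receive weights near zero after a few iterations, so they are effectively ignored by the weighted least-squares updates.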

Cite

APA

Ke, J., Gong, C., & Zhao, L. (2019). Laplacian Welsch Regularization for Robust Semi-supervised Dictionary Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11936 LNCS, pp. 40–52). Springer. https://doi.org/10.1007/978-3-030-36204-1_3
