Integrated Sparse Coding with Graph Learning for Robust Data Representation


Abstract

Sparse coding is a popular technique for compact data representation and has been used in many applications. However, its instability often degrades performance in practice and has therefore attracted considerable study. While traditional graph sparse coding preserves the neighborhood structure of the data, this study integrates low-rank representation (LRR) to remedy the inconsistency of sparse coding by preserving the subspace structure of high-dimensional observations. The proposed method, dubbed low-rank graph regularized sparse coding (LogSC), learns sparse codes and low-rank representations jointly rather than in the traditional two-step fashion. Since the two data representations share a dictionary matrix, the resulting sparse representation on this dictionary can benefit from the LRR. We solve the optimization problem of LogSC using the linearized alternating direction method with adaptive penalty. Experimental results show that the proposed method is discriminative in feature learning and robust to various types of noise. This work provides a one-step approach to integrating graph embedding into representation learning.
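To make the joint formulation concrete, the sketch below evaluates a LogSC-style objective combining reconstruction under a shared dictionary, an l1 sparsity term, a nuclear-norm low-rank term, and graph-Laplacian regularization. This is a minimal illustration assuming a plausible form of the objective; the function name, weights (`lam_s`, `lam_r`, `lam_g`), and the exact combination of terms are assumptions, not the paper's precise formulation.

```python
import numpy as np

def logsc_objective(X, D, S, Z, W, lam_s=0.1, lam_r=1.0, lam_g=0.1):
    """Evaluate a hypothetical LogSC-style joint objective.

    X : (d, n) data matrix, one sample per column
    D : (d, k) dictionary shared by both representations
    S : (k, n) sparse codes
    Z : (k, n) low-rank representation
    W : (n, n) symmetric affinity (graph) matrix
    """
    # Reconstruction: both representations use the same dictionary D,
    # which is how the sparse codes can benefit from the LRR.
    recon = (np.linalg.norm(X - D @ S, "fro") ** 2
             + np.linalg.norm(X - D @ Z, "fro") ** 2)
    # l1 penalty promotes sparsity in the codes.
    sparsity = lam_s * np.abs(S).sum()
    # Nuclear norm (sum of singular values) promotes low rank in Z,
    # preserving subspace structure of the observations.
    low_rank = lam_r * np.linalg.norm(Z, "nuc")
    # Graph regularization: trace(S L S^T) is small when neighboring
    # samples (large W_ij) receive similar codes.
    L = np.diag(W.sum(axis=1)) - W  # unnormalized graph Laplacian
    graph = lam_g * np.trace(S @ L @ S.T)
    return recon + sparsity + low_rank + graph
```

In the paper this objective is minimized jointly over all variables with the linearized alternating direction method with adaptive penalty (LADMAP); the sketch only shows the objective, not the solver.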

Citation (APA)
Zhang, Y., & Liu, S. (2020). Integrated Sparse Coding with Graph Learning for Robust Data Representation. IEEE Access, 8, 161245–161260. https://doi.org/10.1109/ACCESS.2020.3021081
