A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning


Abstract

Deep representations have attracted much attention owing to their strong performance on various tasks. However, the limited interpretability of deep representations poses a major challenge in real-world applications. To address this challenge, this paper proposes a deep matrix factorization method with non-negative constraints that learns interpretable, deep part-based representations for big data. Specifically, it designs a deep architecture, an end-to-end framework for pattern mining, in which a supervisor network suppresses noise in the data while a student network learns interpretable deep representations. Furthermore, to train the deep matrix factorization architecture, an interpretability loss is defined that combines a symmetric loss, an apposition loss, and a non-negative constraint loss; this loss ensures knowledge transfer from the supervisor network to the student network and enhances the robustness of the deep representations. Finally, extensive experimental results on two benchmark datasets demonstrate the superiority of the proposed method.
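The abstract does not reproduce the paper's full architecture (the supervisor/student networks and the combined interpretability loss). As a rough intuition for the underlying idea of deep non-negative matrix factorization, here is a minimal greedy layer-wise sketch in Python/NumPy; the function names (nmf_layer, deep_nmf) and the multiplicative-update training are illustrative assumptions, not the authors' method.

```python
import numpy as np

def nmf_layer(X, rank, n_iter=200, eps=1e-9, rng=None):
    """One NMF layer: factor non-negative X (m x n) into W (m x rank) @ H (rank x n)
    using the standard multiplicative updates for the Frobenius-norm objective."""
    rng = rng or np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative by construction
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

def deep_nmf(X, ranks, n_iter=200):
    """Greedy layer-wise deep NMF: X ~ W1 @ W2 @ ... @ Wk @ Hk.
    Each layer re-factors the previous layer's coefficient matrix,
    yielding a hierarchy of non-negative, part-based representations."""
    Ws, H = [], X
    for r in ranks:
        W, H = nmf_layer(H, r, n_iter=n_iter)
        Ws.append(W)
    return Ws, H

# Usage: factor a random non-negative 100x50 "data" matrix through
# a two-layer hierarchy with ranks 20 and 10, then check reconstruction.
X = np.abs(np.random.default_rng(1).standard_normal((100, 50)))
Ws, H = deep_nmf(X, ranks=[20, 10])
recon = Ws[0] @ Ws[1] @ H
print("relative error:", np.linalg.norm(X - recon) / np.linalg.norm(X))
```

The non-negativity of every factor is what makes the learned representations part-based and hence interpretable; the paper's contribution goes further by training the hierarchy end to end with the supervisor-student scheme rather than greedily layer by layer.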

Citation (APA)
Chen, Z., Jin, S., Liu, R., & Zhang, J. (2021). A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning. Frontiers in Neurorobotics, 15. https://doi.org/10.3389/fnbot.2021.701194
