Dropout non-negative matrix factorization for independent feature learning

Abstract

Non-negative Matrix Factorization (NMF) can learn interpretable, parts-based representations of natural data and is widely applied in data mining and machine learning. However, NMF does not always achieve good performance, because the non-negativity constraint leads the learned features to be non-orthogonal and to overlap in semantics. How to improve the semantic independence of latent features without reducing the interpretability of NMF remains an open research problem. In this paper, we put forward dropout NMF and its extension, sequential NMF, to enhance the semantic independence of NMF. Dropout NMF prevents the co-adaptation of latent features to reduce ambiguity, while sequential NMF further promotes the independence of individual latent features. The proposed algorithms differ from traditional regularized and weighted methods in that they require no prior knowledge and introduce no extra constraints or transformations. Extensive experiments on document clustering show that our algorithms outperform baseline methods and can be seamlessly applied to NMF-based models.
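To make the idea concrete, below is a minimal NumPy sketch of NMF with dropout over latent features. It assumes dropout is realized by randomly masking a subset of the k features during the standard multiplicative updates (Lee & Seung, Frobenius objective); the function name dropout_nmf, the drop rate p_drop, and the masking scheme are illustrative assumptions, and the paper's exact update rules (and its sequential NMF extension) may differ.

```python
import numpy as np

def dropout_nmf(V, k, p_drop=0.2, n_iter=200, eps=1e-10, seed=0):
    """Illustrative sketch of NMF with dropout over latent features.

    Factorizes V (m x n, non-negative) as W @ H with W (m x k) and
    H (k x n). At each iteration a random subset of the k latent
    features is dropped, so the surviving features must reconstruct
    the data without help from the dropped ones, which discourages
    co-adaptation between features. This follows the general idea
    described in the abstract; it is not the authors' exact method.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # Bernoulli keep-mask over the k latent features.
        keep = rng.random(k) > p_drop
        if not keep.any():         # guard: never drop every feature
            keep[rng.integers(k)] = True
        mask = keep.astype(float)
        Wm = W * mask              # zero the dropped columns of W
        Hm = H * mask[:, None]     # zero the matching rows of H
        # Standard multiplicative updates (Lee & Seung, Frobenius
        # objective), computed only through the surviving features.
        Hm *= (Wm.T @ V) / (Wm.T @ Wm @ Hm + eps)
        Wm *= (V @ Hm.T) / (Wm @ Hm @ Hm.T + eps)
        # Commit updates for kept features; dropped ones are left as-is.
        W[:, keep] = Wm[:, keep]
        H[keep, :] = Hm[keep, :]
    return W, H

# Example: factor a random non-negative matrix into 10 features.
V = np.abs(np.random.default_rng(1).standard_normal((100, 50)))
W, H = dropout_nmf(V, k=10)
```

In this sketch, dropped features keep their previous values and only the surviving ones are updated in a given iteration, which is one natural way to prevent features from co-adapting; other schemes (e.g. rescaling kept features by 1/(1 - p_drop), as in dropout for neural networks) are equally plausible readings.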

Citation (APA)

He, Z., Liu, J., Liu, C., Wang, Y., Yin, A., & Huang, Y. (2016). Dropout non-negative matrix factorization for independent feature learning. In Lecture Notes in Computer Science (Vol. 10102, pp. 201–212). Springer. https://doi.org/10.1007/978-3-319-50496-4_17
