Probabilistic sparse non-negative matrix factorization

Abstract

In this paper, we propose a probabilistic sparse non-negative matrix factorization model that extends a recently proposed variational Bayesian non-negative matrix factorization model to explicitly account for sparsity. We assess the influence of imposing sparsity within a probabilistic framework on the loading matrix, the score matrix, or both, and further contrast the influence of imposing an exponential or a truncated normal distribution as prior. The probabilistic methods are compared to conventional maximum likelihood based NMF and sparse NMF on three image datasets: (1) a (synthetic) swimmer dataset, (2) the CBCL face dataset, and (3) the MNIST handwritten digits dataset. We find that probabilistic sparse NMF automatically learns the level of sparsity, and that both the existing probabilistic NMF and the proposed probabilistic sparse NMF prune inactive components, thereby automatically learning a suitable number of components. We further find that accounting for sparsity can provide more part-based representations, but that for the probabilistic models the choice of priors and how sparsity is imposed can strongly influence the extracted representations.
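
For orientation, the sketch below shows the conventional sparse NMF baseline that the abstract contrasts against: non-negative factorization of a data matrix V into W and H with an L1 penalty on the score matrix H, solved by multiplicative updates. This is not the authors' variational Bayesian model; the penalty weight `lam` and the function `sparse_nmf` are illustrative assumptions, whereas in the proposed probabilistic model the level of sparsity is learned automatically rather than set by hand.

```python
# Minimal sketch of conventional (maximum likelihood style) sparse NMF with an
# L1 penalty on H, using multiplicative updates. Not the paper's probabilistic
# model; `lam` is a hand-set sparsity weight used here for illustration only.
import numpy as np

def sparse_nmf(V, n_components, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Factorize a non-negative matrix V (m x n) as W @ H with L1-sparse H."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, n_components))
    H = rng.random((n_components, n))
    for _ in range(n_iter):
        # Multiplicative update for H; the L1 term enters the denominator.
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        # Standard (unpenalized) multiplicative update for W.
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Example with random non-negative data standing in for image pixels.
V = np.abs(np.random.default_rng(1).random((64, 100)))
W, H = sparse_nmf(V, n_components=10, lam=0.5)
print("reconstruction error:", np.linalg.norm(V - W @ H))
print("fraction of small entries in H:", np.mean(H < 1e-3))
```

Larger values of `lam` drive more entries of H toward zero, giving the more part-based representations discussed in the abstract, at the cost of reconstruction error; the probabilistic sparse model instead infers this trade-off from the data.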

Cite

APA

Hinrich, J. L., & Mørup, M. (2018). Probabilistic sparse non-negative matrix factorization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10891 LNCS, pp. 488–498). Springer Verlag. https://doi.org/10.1007/978-3-319-93764-9_45
