Joint majorization-minimization for nonnegative matrix factorization with the β-divergence

Abstract

This article proposes new multiplicative updates for nonnegative matrix factorization (NMF) with the β-divergence objective function. Our new updates are derived from a joint majorization-minimization (MM) scheme, in which an auxiliary function (a tight upper bound of the objective function) is built for the two factors jointly and minimized at each iteration. This contrasts with the classic approach, in which a majorizer is derived for each factor separately. Like the classic approach, our joint MM algorithm results in multiplicative updates that are simple to implement. However, they yield a significant reduction in computation time (for equally good solutions), in particular for β-divergences of major applicative interest such as the quadratic loss and the Kullback-Leibler or Itakura-Saito divergences. We report experimental results using diverse datasets: face images, an audio spectrogram, hyperspectral data, and song play counts. Depending on the value of β and on the dataset, our joint MM approach can yield CPU time reductions from about 13% to 86% in comparison with the classic alternating scheme.
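
For context, the sketch below illustrates the classic alternating multiplicative-update scheme for β-divergence NMF mentioned in the abstract, in which a separate majorizer is built and minimized for each factor in turn; it is the baseline that the joint MM updates are compared against, not the joint updates proposed in the article. It is a minimal NumPy illustration: the function name, the random initialization, the small constant eps, and the fixed iteration count are illustrative choices.

import numpy as np

def beta_nmf_mu(V, K, beta=1.0, n_iter=200, eps=1e-12, seed=None):
    # Classic alternating multiplicative updates for beta-divergence NMF
    # (one majorizer per factor). For beta in [1, 2] these ratios are the
    # exact MM updates; outside that range the MM derivation applies a
    # corrective exponent to the ratio, omitted here for brevity.
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, K)) + eps
    H = rng.random((K, N)) + eps
    for _ in range(n_iter):
        WH = W @ H
        # H <- H * (W^T [(WH)^(beta-2) * V]) / (W^T (WH)^(beta-1))
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H
        # W <- W * ([(WH)^(beta-2) * V] H^T) / ((WH)^(beta-1) H^T)
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H

For beta = 2 these ratios reduce to the familiar quadratic-loss updates, for beta = 1 to the Kullback-Leibler updates, and for beta = 0 to the Itakura-Saito updates.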

Cite

APA

Marmin, A., Henrique de Morais Goulart, J., & Févotte, C. (2023). Joint majorization-minimization for nonnegative matrix factorization with the β-divergence. Signal Processing, 209. https://doi.org/10.1016/j.sigpro.2023.109048
