Uni-Orthogonal Nonnegative Tucker Decomposition for Supervised Image Classification

Citations: 1
Readers (Mendeley): 9

This article is free to access.

Abstract

The Tucker model with orthogonality constraints (often referred to as the HOSVD) decomposes a multi-way array into a core tensor and an orthogonal factor matrix for each mode. The Nonnegative Tucker Decomposition (NTD) model instead imposes nonnegativity constraints on both the core tensor and the factor matrices. In this paper, we discuss a mixed version of these models, in which one factor matrix is orthogonal and the remaining factor matrices are nonnegative. Moreover, the nonnegative factor matrices are updated with a modified Barzilai-Borwein gradient projection method, which belongs to the class of quasi-Newton methods. The discussed model is efficiently applied to supervised classification of facial images, hand-written digits, and spectrograms of musical instrument sounds. © 2011 Springer-Verlag.
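Each nonnegative-factor update in NTD reduces to a nonnegative least-squares subproblem of the form min_{W ≥ 0} ||X − W Hᵀ||²_F over a mode unfolding. The sketch below illustrates the kind of Barzilai-Borwein projected-gradient step the abstract refers to, applied to such a subproblem; it is a minimal NumPy illustration (the function name and the plain BB1 step rule are illustrative assumptions, not the paper's exact modified algorithm).

```python
import numpy as np

def bb_projected_gradient_nnls(X, H, W0, iters=50):
    """Illustrative sketch: solve min_{W >= 0} ||X - W H^T||_F^2 by
    projected gradient with Barzilai-Borwein (BB1) step lengths."""
    W = W0.copy()
    HtH = H.T @ H                       # Gram matrix of the fixed factor
    XH = X @ H                          # precomputed data term
    grad = W @ HtH - XH                 # gradient of the quadratic objective
    step = 1.0 / (np.linalg.norm(HtH, 2) + 1e-12)  # safe initial step
    for _ in range(iters):
        W_new = np.maximum(W - step * grad, 0.0)   # gradient step, then project onto W >= 0
        grad_new = W_new @ HtH - XH
        s = (W_new - W).ravel()
        y = (grad_new - grad).ravel()
        sy = s @ y
        if sy > 1e-12:
            step = (s @ s) / sy         # BB1 step: mimics a quasi-Newton scaling
        W, grad = W_new, grad_new
    return W
```

In the uni-orthogonal variant discussed in the paper, the single orthogonal factor would be updated separately (e.g., from leading singular vectors of the corresponding unfolding), while updates like the one above handle the nonnegative factors.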

Citation (APA)

Zdunek, R. (2011). Uni-orthogonal nonnegative Tucker decomposition for supervised image classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6978 LNCS, pp. 88–97). https://doi.org/10.1007/978-3-642-24085-0_10
