Matrix neural networks

15 citations · 43 Mendeley readers

Abstract

Traditional neural networks assume vectorial inputs, as each layer is arranged as a single line of computing units called neurons. This structure requires non-vectorial inputs such as matrices to be converted into vectors, which is problematic: vectorisation discards spatial information and inflates the solution space. To address these issues, we propose matrix neural networks (MatNet), which take matrices directly as inputs. Each layer summarises and passes information through a bilinear mapping. Under this structure, the combination of back propagation and gradient descent can be used to obtain the network parameters efficiently. Furthermore, MatNet extends conveniently to multi-modal inputs. We apply MatNet to MNIST handwritten digit classification and image super-resolution to demonstrate its effectiveness. Without extensive tuning, MatNet achieves performance comparable to state-of-the-art methods on both tasks with considerably reduced complexity.
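The bilinear mapping described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the shapes, the Gaussian initialisation, and the sigmoid activation are assumptions chosen for concreteness. The key idea is that a layer maps an m×n input matrix X to a p×q output as Y = σ(U X Vᵀ + B), so the 2-D structure is never flattened.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BilinearLayer:
    """One MatNet-style layer (illustrative sketch): maps an m-by-n
    input matrix X to a p-by-q output via Y = sigmoid(U @ X @ V.T + B),
    preserving the matrix structure of the input throughout."""

    def __init__(self, m, n, p, q, seed=0):
        rng = np.random.default_rng(seed)
        # Assumed small-Gaussian initialisation; the paper may differ.
        self.U = 0.1 * rng.standard_normal((p, m))  # left projection
        self.V = 0.1 * rng.standard_normal((q, n))  # right projection
        self.B = np.zeros((p, q))                   # bias matrix

    def forward(self, X):
        # Bilinear map followed by an elementwise nonlinearity.
        return sigmoid(self.U @ X @ self.V.T + self.B)

# Example: summarise a 28x28 image (MNIST-sized) into a 10x10 map.
layer = BilinearLayer(m=28, n=28, p=10, q=10)
X = np.ones((28, 28))
Y = layer.forward(X)
print(Y.shape)  # (10, 10)
```

This also hints at the complexity reduction claimed in the abstract: the bilinear layer above has p·m + q·n + p·q parameters, whereas vectorising X and using a dense layer to the same output size would need (p·q)·(m·n) weights.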

Citation (APA)

Gao, J., Guo, Y., & Wang, Z. (2017). Matrix neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10261 LNCS, pp. 313–320). Springer Verlag. https://doi.org/10.1007/978-3-319-59072-1_37
