Matrix analysis for fast learning of neural networks with application to the classification of acoustic spectra

  • Paul, V. S.
  • Nelson, P. A.

This article is free to access.

Abstract

Neural networks are increasingly being applied to problems in acoustics and audio signal processing. Large audio datasets are being generated for use in training machine learning algorithms, and the reduction of training times is of increasing relevance. The work presented here begins by reformulating the analysis of the classical multilayer perceptron to show the explicit dependence of network parameters on the properties of the weight matrices in the network. This analysis then allows the application of the singular value decomposition (SVD) to the weight matrices. An algorithm is presented that makes use of regular applications of the SVD to progressively reduce the dimensionality of the network. This results in significant reductions in network training times of up to 50% with very little or no loss in accuracy. The use of the algorithm is demonstrated by applying it to a number of acoustical classification problems that help quantify the extent to which closely related spectra can be distinguished by machine learning.
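The paper's specific training algorithm is not reproduced here, but the core idea it describes, truncating a layer's weight matrix with the SVD to reduce its effective dimensionality, can be illustrated with a minimal NumPy sketch. The matrix sizes and rank below are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense-layer weight matrix (output_dim x input_dim).
W = rng.standard_normal((64, 128))

def truncate_weights(W, r):
    """Best rank-r approximation of W via the SVD (Eckart-Young).

    Returns the low-rank reconstruction W_r together with the two
    factors A (m x r) and B (r x n) such that W_r = A @ B.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]          # scale columns of U by singular values
    B = Vt[:r, :]
    return A @ B, A, B

W_r, A, B = truncate_weights(W, r=16)

# Replacing the single 64x128 layer by the factor pair A, B cuts the
# parameter count from 64*128 = 8192 to 16*(64 + 128) = 3072, which is
# the kind of dimensionality reduction that shortens training.
print(A.shape, B.shape)
print(np.linalg.norm(W - W_r) / np.linalg.norm(W))  # relative truncation error
```

Applied repeatedly during training, as the abstract describes, such truncations progressively shrink the network while the retained singular values preserve most of each layer's action on its inputs.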

Citation (APA)

Paul, V. S., & Nelson, P. A. (2021). Matrix analysis for fast learning of neural networks with application to the classification of acoustic spectra. The Journal of the Acoustical Society of America, 149(6), 4119–4133. https://doi.org/10.1121/10.0005126
