Local Linear Approximation Algorithm for Neural Network

Abstract

This paper develops a new training strategy to improve the efficiency of estimating the weights and biases of a feedforward neural network (FNN). We propose a local linear approximation (LLA) algorithm, which approximates ReLU with a linear function at the neuron level and estimates the weights and biases of a one-hidden-layer neural network iteratively. We further propose the layer-wise optimized adaptive neural network (LOAN), in which the LLA is used to estimate the weights and biases of the network layer by layer adaptively. We compare the performance of the LLA with commonly used procedures in machine learning on seven benchmark data sets. The numerical comparison indicates that the proposed algorithm may outperform existing procedures in terms of both training time and prediction accuracy.
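To illustrate the idea described in the abstract, the sketch below shows one plausible reading of a neuron-level local linear approximation for a one-hidden-layer ReLU network: since ReLU(z) equals s*z with s = 1[z > 0], freezing the mask s makes the network linear in its parameters, so weights and biases can be updated by alternating least-squares solves. This is a hypothetical reconstruction from the abstract alone (synthetic data, block-coordinate least squares, 20 iterations are our assumptions), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (hypothetical example, not from the paper):
# n samples, d inputs, k hidden ReLU neurons.
n, d, k = 200, 5, 10
X = rng.normal(size=(n, d))
y = np.sin(X @ rng.normal(size=d))        # nonlinear target

# Random initialization of hidden weights W, biases b, output weights beta.
W = rng.normal(size=(k, d)) * 0.5
b = rng.normal(size=k) * 0.1
beta = rng.normal(size=k) * 0.1

def hidden(X, W, b):
    """Pre-activations Z and ReLU outputs H of the hidden layer."""
    Z = X @ W.T + b
    return Z, np.maximum(Z, 0.0)

Z, H = hidden(X, W, b)
mse0 = np.mean((H @ beta - y) ** 2)       # loss at initialization

for _ in range(20):
    # Step 1: with the hidden activations H fixed, the model is linear in
    # beta, so the output weights solve an ordinary least-squares problem.
    Z, H = hidden(X, W, b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)

    # Step 2: freeze the local linear mask s = 1[z > 0]. Then
    #   f(x) = sum_k beta_k * s_k * (w_k . x + b_k)
    # is linear in (W, b), giving a second least-squares problem.
    S = (Z > 0).astype(float)             # n x k activation mask
    A = np.concatenate(
        [(beta[j] * S[:, j])[:, None] * np.hstack([X, np.ones((n, 1))])
         for j in range(k)], axis=1)      # n x (k * (d + 1)) design matrix
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    theta = theta.reshape(k, d + 1)
    W, b = theta[:, :d], theta[:, d]

Z, H = hidden(X, W, b)
mse = np.mean((H @ beta - y) ** 2)
print(f"initial MSE {mse0:.4f} -> final MSE {mse:.4f}")
```

Note that each block update solves its subproblem exactly for the frozen mask, which is what distinguishes this style of scheme from gradient descent; the paper's LOAN extension would apply the same idea layer by layer in a deeper network.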

Citation

Zeng, M., Liao, Y., Li, R., & Sudjianto, A. (2022). Local Linear Approximation Algorithm for Neural Network. Mathematics, 10(3). https://doi.org/10.3390/math10030494
