Weighted nonlinear line attractor for complex manifold learning


Abstract

An artificial neural network is modeled by weighting the connections between neurons to form synaptic links. The nonlinear line attractor (NLA) model represents this weighting architecture with a polynomial weight set, which provides stronger connections between neurons. We initially weighted these connections by proximity, using a Gaussian weighting strategy over the neurons, which should reduce computation time significantly. However, we found that weighting the connections by the error in estimating the output neurons, rather than by proximity, yields the best results. Because the weights are Gaussian weighted, the polynomial weights trained into the network can be reduced by a nonlinear dimensionality reduction that preserves their locality. A distance measure is then used to compare test and training data. In our experiments, the proposed weighted NLA algorithm provides better recognition than both the GNLA algorithm and the original NLA algorithm.
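The core idea in the abstract can be sketched in a few lines: each connection estimates an output neuron's value through a polynomial weight set, and the connection is then weighted by a Gaussian of its estimation error. The function names, the polynomial order, and the `sigma` parameter below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def polynomial_estimate(w, x):
    """Evaluate a polynomial weight set: sum_k w[k] * x**k.

    `w` stands in for the trained polynomial weights of one
    neuron-to-neuron connection (order chosen for illustration).
    """
    return sum(wk * x**k for k, wk in enumerate(w))

def error_gaussian_weight(y_true, y_est, sigma=1.0):
    """Weight a connection by a Gaussian of its estimation error.

    Small estimation error -> weight near 1 (strong connection);
    large error -> weight near 0. `sigma` is a hypothetical
    spread parameter controlling how quickly weights decay.
    """
    err = y_true - y_est
    return np.exp(-(err ** 2) / (2.0 * sigma ** 2))

# Example: a connection whose polynomial predicts the output well
# receives a larger weight than one with a poor prediction.
w = [1.0, 2.0, 3.0]            # hypothetical polynomial weight set
y_est = polynomial_estimate(w, 2.0)   # 1 + 2*2 + 3*4 = 17
good = error_gaussian_weight(17.0, y_est)   # error 0 -> weight 1
poor = error_gaussian_weight(10.0, y_est)   # large error -> small weight
```

This replaces the proximity-based Gaussian weighting with an error-based one, which is the change the abstract reports as giving the best results.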

Citation (APA)

Aspiras, T. H., Asari, V. K., & Sakla, W. (2017). Weighted nonlinear line attractor for complex manifold learning. In Studies in Computational Intelligence (Vol. 669, pp. 371–385). Springer Verlag. https://doi.org/10.1007/978-3-319-48506-5_19
