Neuron learning machine for representation learning


Abstract

This paper presents a novel neuron learning machine (NLM) that can extract hierarchical features from data. We focus on a single-layer neural network architecture and propose to model the network with the Hebbian learning rule, which describes how a synaptic weight changes with the activations of its presynaptic and postsynaptic neurons. We formulate the learning rule as an objective function that accounts for the simplicity of the network and the stability of its solutions, and, based on a hypothesis we put forward, we introduce a correlation-based constraint. We find that this biologically inspired model is able to learn useful features in the sense of retaining abstract information. NLM can also be stacked to learn hierarchical features and reformulated into a convolutional version to extract features from two-dimensional data.
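For readers unfamiliar with the rule the abstract builds on, the sketch below illustrates a plain Hebbian weight update in NumPy. It is only a generic textbook illustration under assumed dimensions, a tanh activation, and a crude row normalization for stability; it is not the authors' NLM objective, which additionally encodes network simplicity, solution stability, and the correlation-based constraint described above.

```python
import numpy as np

# Hypothetical dimensions, chosen only for illustration.
rng = np.random.default_rng(0)
n_inputs, n_neurons = 64, 16
W = rng.normal(scale=0.01, size=(n_neurons, n_inputs))

def hebbian_step(W, x, lr=1e-3):
    """One plain Hebbian update: dW is proportional to
    (postsynaptic activity) x (presynaptic activity).

    Generic rule only; the paper's NLM further constrains the
    weights via simplicity, stability, and correlation terms.
    """
    y = np.tanh(W @ x)                                    # postsynaptic activations
    W = W + lr * np.outer(y, x)                           # strengthen co-active connections
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-8  # keep weight rows bounded
    return W

# Toy usage on random inputs standing in for data.
for _ in range(100):
    x = rng.normal(size=n_inputs)
    W = hebbian_step(W, x)
```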

Citation

Liu, J., Gong, M., & Miao, Q. (2017). Neuron learning machine for representation learning. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 4961–4962). AAAI Press. https://doi.org/10.1609/aaai.v31i1.11085
