Contextual neural networks, which use neurons with conditional aggregation functions, have been found to be efficient and useful generalizations of the classical multilayer perceptron. They make it possible to build neural classification models with good generalization and low activity of connections between neurons in hidden layers. Their properties also suggest that the use of contextual neurons with conditional signal aggregation can produce effects similar to the dropout technique in deep convolutional neural networks. The key factor in building such solutions is achieving self-consistency between the continuous weights of neurons' connections and the mutually related, non-continuous aggregation priorities of those connections. This makes it possible to optimize the aggregation priorities of neuron inputs through simultaneous gradient-based optimization of connection weights with the generalized backpropagation algorithm. Such a method, however, additionally requires sorting the inputs of each neuron by their weights after every given number of training epochs. In this text we therefore compare the training efficiency of contextual neural networks under selected sorting algorithms. On this basis we discuss the theoretical properties of the analyzed training algorithm, which relate not only to the characteristics of the weight-sorting methods used but also to the application of self-consistency to the selection of neural scan-paths in contextual neural networks.
CITATION STYLE
Huk, M. (2018). Weights Ordering During Training of Contextual Neural Networks with Generalized Error Backpropagation: Importance and Selection of Sorting Algorithms. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10752 LNAI, pp. 200–211). Springer Verlag. https://doi.org/10.1007/978-3-319-75420-8_19