Learning of Bayesian discriminant functions by a layered neural network

Abstract

Learning of Bayesian discriminant functions is a difficult task for ordinary one-hidden-layer neural networks, because the teacher signals are dichotomic random samples. When a neural network is trained, all of its parameters, the weights and thresholds, are usually optimized. However, the parameters inside the activation functions of the hidden-layer units are updated only at the second step of backpropagation learning, and training such 'inner' parameters is often difficult when the teacher signals are dichotomic. To overcome this difficulty, we construct one-hidden-layer neural networks with a smaller number of inner parameters to be optimized, fixing some components of the parameters. This inevitably increases the number of hidden-layer units, but the resulting network learns the Bayesian discriminant function better than ordinary neural networks. © 2008 Springer-Verlag Berlin Heidelberg.
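The idea described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): the hidden layer's inner parameters (input weights and thresholds) are fixed rather than trained, and only the output-layer weights are optimized by least squares against 0/1 teacher signals. Minimizing squared error against dichotomic labels drives the network output toward the posterior probability P(class 1 | x), i.e. a Bayesian discriminant function. The Gaussian class-conditional densities, hidden-layer size, and random initialization below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dichotomic teacher signals: class 0 ~ N(-1, 1), class 1 ~ N(+1, 1),
# equal priors, labels 0/1 (illustrative assumption).
n = 2000
x = np.concatenate([rng.normal(-1.0, 1.0, n),
                    rng.normal(+1.0, 1.0, n)])[:, None]
t = np.concatenate([np.zeros(n), np.ones(n)])

# One hidden layer whose inner parameters (input weights and thresholds)
# are FIXED at random values, not trained.
H = 20
w_in = rng.normal(0.0, 1.0, (1, H))   # fixed input weights
b_in = rng.normal(0.0, 1.0, H)        # fixed thresholds

def hidden(xq):
    """Fixed sigmoid hidden layer, plus a constant unit for the output bias."""
    h = 1.0 / (1.0 + np.exp(-(xq @ w_in + b_in)))
    return np.hstack([h, np.ones((h.shape[0], 1))])

# Only the output-layer weights are optimized: a linear least-squares
# fit of the 0/1 teacher signals, which approximates E[t | x] = P(1 | x).
w_out, *_ = np.linalg.lstsq(hidden(x), t, rcond=None)

def posterior(xq):
    """Network output: an estimate of the Bayesian discriminant P(1 | x)."""
    return hidden(xq[:, None]) @ w_out

# With equal priors and symmetric Gaussians, the true posterior at x = 0
# is exactly 0.5; the fitted output should land close to it.
print(posterior(np.array([0.0]))[0])
```

Because the inner parameters never move, the optimization is convex (plain linear regression on fixed features), at the cost of needing more hidden units than a fully trained network.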

Citation (APA)

Ito, Y., Srinivasan, C., & Izumi, H. (2008). Learning of Bayesian discriminant functions by a layered neural network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4984 LNCS, pp. 238–247). https://doi.org/10.1007/978-3-540-69158-7_26
