A learning problem has three distinct components: the model representation, the learning criterion (target function), and the implementation algorithm. This paper focuses on the close relation between the choice of learning criterion for a committee machine and both network approximation and competitive adaptation. By minimizing the Kullback–Leibler (KL) divergence between posterior distributions, we derive a general posterior modular architecture and the corresponding form of the learning criterion, which exhibits notable adaptivity and scalability. In addition, starting from the generalized KL divergence defined on the finite-measure manifold in information geometry, we show that when each module is assumed Gaussian, the proposed learning criterion reduces to the so-called Mahalanobis distance, of which the ordinary mean-square-error criterion is a special case. © Springer-Verlag Berlin Heidelberg 2005.
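The Gaussian reduction mentioned in the abstract can be sketched numerically. The snippet below (not from the paper; the function name and test values are illustrative) uses the standard closed-form KL divergence between multivariate Gaussians to show that, for modules with equal covariances, the divergence collapses to half the squared Mahalanobis distance, and with identity covariance to half the squared error, i.e. an MSE-style criterion.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL(N0 || N1) between two multivariate Gaussians."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

rng = np.random.default_rng(0)
mu0, mu1 = rng.normal(size=3), rng.normal(size=3)
A = rng.normal(size=(3, 3))
cov = A @ A.T + 3 * np.eye(3)  # shared positive-definite covariance

# Equal covariances: KL reduces to half the squared Mahalanobis distance.
kl = gaussian_kl(mu0, cov, mu1, cov)
mahalanobis_sq = (mu1 - mu0) @ np.linalg.inv(cov) @ (mu1 - mu0)
assert np.isclose(kl, 0.5 * mahalanobis_sq)

# Identity covariance: further reduces to half the squared (MSE-style) error.
kl_id = gaussian_kl(mu0, np.eye(3), mu1, np.eye(3))
assert np.isclose(kl_id, 0.5 * np.sum((mu1 - mu0) ** 2))
```

This is only the scalar-criterion part of the story; the paper's modular architecture combines such per-module divergences competitively, which the sketch does not attempt to reproduce.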
CITATION STYLE
Yang, J., & Luo, S. (2005). Adaptive and competitive committee machine architecture. In Lecture Notes in Computer Science (Vol. 3610, pp. 322–331). Springer Verlag. https://doi.org/10.1007/11539087_38