In this paper, we propose the following interpretation: if a Bayesian network has acquired translation invariance over input images, its feedback messages from higher layers to lower layers can be interpreted as the responses of complex cells in the visual system. To examine the validity of this proposal, we trained a Bayesian network to acquire translation invariance using the standard belief propagation algorithm, and confirmed that its feedback messages were translation invariant and thus could be interpreted as complex-cell responses. Unlike previous studies, our model does not require specially prepared random variables, and it uses only the standard belief propagation algorithm. We therefore believe that our model offers a more natural way than previous ones to integrate hierarchical Hubel-Wiesel architectures for the visual system, e.g. Hierarchical MAX models, with probabilistic graphical models.
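As a toy illustration of the core idea (our own minimal construction, not the network or training procedure from the paper), consider a two-node Bayesian network in which a parent variable C (feature present or absent) generates a one-hot "image" at a latent position P. Because the bottom-up message to C marginalizes over position, the resulting belief over C, and hence any top-down message it induces, is the same for every translated copy of the feature:

```python
import numpy as np

# Toy sketch: parent C (feature absent/present) emits a feature at a
# latent position P, producing a one-hot "image" over k pixels.
k = 5
prior_c = np.array([0.5, 0.5])   # P(C=0), P(C=1)
prior_p = np.full(k, 1.0 / k)    # uniform prior over positions

def one_hot(p):
    v = np.zeros(k)
    v[p] = 1.0
    return v

def likelihood(image):
    """Bottom-up message lambda(C) = P(image | C), marginalizing over P."""
    # If C=1 the image is one-hot at some position p; if C=0 it is blank.
    p_img_given_c1 = sum(prior_p[p] * float(np.array_equal(image, one_hot(p)))
                         for p in range(k))
    p_img_given_c0 = float(image.sum() == 0)
    return np.array([p_img_given_c0, p_img_given_c1])

def posterior_c(image):
    """Belief over C after combining the prior with the bottom-up message."""
    post = prior_c * likelihood(image)
    return post / post.sum()

# The belief over C is identical for every translated input, so the
# top-down (feedback) message it induces is translation invariant.
posts = [posterior_c(one_hot(p)) for p in range(k)]
```

Here every entry of `posts` is the same distribution over C regardless of where the feature appears, which is the sense in which feedback messages from the higher layer behave like translation-invariant complex-cell responses.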
Sano, T., & Ichisugi, Y. (2017). Translation-invariant neural responses as variational messages in a Bayesian network model. In Lecture Notes in Computer Science (Vol. 10613, pp. 163–170). Springer. https://doi.org/10.1007/978-3-319-68600-4_20