We propose a cortically inspired hierarchical feedforward model for recognition and investigate a new method for learning optimal combination-coding cells in the intermediate stages of the hierarchical network. The architecture is characterized by weight sharing, pooling, and Winner-Take-All nonlinearities. We show that an unsupervised sparse-coding learning rule yields a recognition architecture competitive with more formally abstracted approaches that rely on supervised learning. We evaluate performance on object and face databases. © Springer-Verlag Berlin Heidelberg 2002.
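The abstract names three building blocks of the hierarchy: weight sharing (the same feature detectors applied at every image position), spatial pooling, and Winner-Take-All (WTA) nonlinearities. As an illustrative sketch only (the function names, array shapes, and pooling size below are assumptions, not the authors' formulation), these components might be composed as follows:

```python
import numpy as np

def wta(responses):
    """Winner-Take-All across feature channels: at each spatial
    position, keep only the strongest channel's response and zero
    the rest (illustrative sketch; responses has shape (C, H, W))."""
    winners = responses.argmax(axis=0)      # winning channel per position
    out = np.zeros_like(responses)
    h, w = np.indices(winners.shape)
    out[winners, h, w] = responses[winners, h, w]
    return out

def max_pool(fmap, size=2):
    """Spatial max pooling over non-overlapping size x size blocks,
    providing local translation tolerance (shape (C, H, W) -> (C, H//size, W//size))."""
    c, h, w = fmap.shape
    cropped = fmap[:, :h - h % size, :w - w % size]
    return cropped.reshape(c, h // size, size,
                           w // size, size).max(axis=(2, 4))

# One hypothetical stage of the hierarchy: shared feature maps
# (here just random responses standing in for convolution outputs)
# pass through WTA competition, then pooling.
responses = np.random.rand(4, 8, 8)         # 4 channels, 8x8 positions
stage_output = max_pool(wta(responses), size=2)
```

Because the WTA step leaves at most one active channel per position, the representation handed to the next stage is sparse, which is the property the unsupervised sparse-coding rule in the paper exploits when learning combination features.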
CITATION STYLE
Wersing, H., & Körner, E. (2002). Unsupervised learning of combination features for hierarchical recognition models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2415 LNCS, pp. 1225–1230). Springer Verlag. https://doi.org/10.1007/3-540-46084-5_198