Top-down attention control at feature space for robust pattern recognition


Abstract

In order to improve visual pattern recognition capability, this paper focuses on top-down selective attention in feature space. The baseline recognition system consists of local feature extractors and a multi-layer perceptron (MLP) classifier. An attention layer is added just in front of the MLP. Attention gains are adjusted to carry out the top-down attention process and to elucidate the expected input features. After attention adaptation, the distance between the original input features and the expected features becomes an important measure of confidence in the attended class. The proposed algorithm improves recognition accuracy on handwritten digit recognition tasks and is capable of recognizing two superimposed patterns one by one.
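The sketch below illustrates the scheme the abstract describes: per-feature attention gains placed in front of a frozen MLP are adapted toward a hypothesized class, and the distance between the original and attended ("expected") features serves as a confidence measure. All layer sizes, the squared-error objective, the learning rate, and the random weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal NumPy sketch of top-down attention gains in front of a fixed MLP.
import numpy as np

rng = np.random.default_rng(0)

# Baseline classifier: a fixed (assumed already trained) one-hidden-layer MLP.
D_IN, D_HID, N_CLASSES = 64, 32, 10          # hypothetical dimensions
W1 = rng.normal(0, 0.3, (D_HID, D_IN))       # stand-in for trained weights
b1 = np.zeros(D_HID)
W2 = rng.normal(0, 0.3, (N_CLASSES, D_HID))
b2 = np.zeros(N_CLASSES)

def mlp_forward(x_att):
    """Forward pass of the fixed MLP on attention-modulated features."""
    h = np.tanh(W1 @ x_att + b1)
    o = np.tanh(W2 @ h + b2)                 # class outputs in (-1, 1)
    return h, o

def attend(x, target_class, steps=200, lr=0.05):
    """Adapt per-feature attention gains (MLP weights stay frozen) so that
    the output matches the attended class; returns gains and expected features."""
    g = np.ones_like(x)                      # attention gains, initialized to 1
    t = -np.ones(N_CLASSES)
    t[target_class] = 1.0                    # bipolar target for the attended class
    for _ in range(steps):
        x_att = g * x                        # expected (attended) input features
        h, o = mlp_forward(x_att)
        # Backpropagate squared error to the gains only.
        d_o = (o - t) * (1 - o**2)
        d_h = (W2.T @ d_o) * (1 - h**2)
        d_xatt = W1.T @ d_h
        g -= lr * d_xatt * x                 # chain rule: d(x_att)/d(g) = x
    return g, g * x

def attention_distance(x, x_expected):
    """Distance between original and expected features: a small distance means
    the attended class explains the input well (higher confidence)."""
    return np.linalg.norm(x - x_expected)

# Usage: attend to each candidate class in turn and keep the one that needs
# the least feature deformation.
x = rng.normal(0, 1, D_IN)                   # stand-in for extracted local features
distances = [attention_distance(x, attend(x, c)[1]) for c in range(N_CLASSES)]
print("most confident class:", int(np.argmin(distances)))
```

Iterating the attention over candidate classes in this way is also how one could, in principle, peel apart two superimposed patterns one at a time, as the abstract suggests.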

Citation (APA)

Lee, S. I., & Lee, S. Y. (2000). Top-down attention control at feature space for robust pattern recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1811, pp. 129–138). Springer Verlag. https://doi.org/10.1007/3-540-45482-9_13
