Transformation Invariance in Pattern Recognition — Tangent Distance and Tangent Propagation

  • Simard P
  • LeCun Y
  • Denker J
  • Victorri B

Abstract

In pattern recognition, statistical modeling, or regression, the amount of data is the most critical factor affecting performance. If the amount of data and computational resources were near infinite, many algorithms would provably converge to the optimal solution. When this is not the case, one has to introduce regularizers and a priori knowledge to supplement the available data in order to boost performance. Invariance (or known dependence) with respect to transformations of the input is a frequent source of such a priori knowledge. In this chapter, we introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, "tangent distance" and "tangent propagation", which make use of these invariances to improve performance.
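The idea behind tangent distance can be sketched briefly. A transformation (e.g. translation or rotation) traces a curved manifold through a pattern; its tangent vectors span a linear approximation of that manifold at the pattern. The one-sided tangent distance from a pattern x to a pattern e is then the smallest Euclidean distance from e to the tangent plane at x. The following is a minimal NumPy sketch of this one-sided variant, not the chapter's full two-sided formulation; the function name and interface are illustrative assumptions:

```python
import numpy as np

def tangent_distance(x, e, T):
    """One-sided tangent distance from pattern x to pattern e.

    x : (d,)   reference pattern
    e : (d,)   pattern being compared
    T : (d, k) matrix whose k columns are tangent vectors at x

    Minimizes ||e - (x + T @ a)|| over the coefficient vector a,
    i.e. the distance from e to the tangent plane {x + T a}.
    """
    # Least-squares solve for the optimal tangent coefficients a.
    a, *_ = np.linalg.lstsq(T, e - x, rcond=None)
    # Distance from e to the closest point on the tangent plane.
    return np.linalg.norm(e - (x + T @ a))
```

For example, if e is exactly x shifted a small amount along a tangent direction, the tangent distance is (numerically) zero even though the plain Euclidean distance is not, which is precisely the invariance the chapter exploits.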

Citation (APA)

Simard, P. Y., LeCun, Y. A., Denker, J. S., & Victorri, B. (1998). Transformation Invariance in Pattern Recognition — Tangent Distance and Tangent Propagation (pp. 239–274). https://doi.org/10.1007/3-540-49430-8_13
