Compactness Hypothesis, Potential Functions, and Rectifying Linear Space in Machine Learning

Abstract

Emmanuel Braverman was one of the very few thinkers who, during his extremely short life, managed to seed several seemingly quite different areas of science. This paper surveys one of the knowledge areas he profoundly influenced in the 1960s, namely, Machine Learning. Later, Vladimir Vapnik proposed a more engineering-oriented name for this area – Estimation of Dependences Based on Empirical Data; we treat the two titles as synonyms. The aim of the paper is to briefly trace how three notions introduced by Braverman came to form the core of the contemporary Machine Learning doctrine. These notions are: (1) the compactness hypothesis, (2) the potential function, and (3) the rectifying linear space, in which the former two culminate. There is little new in this paper; almost all the constructions we discuss have been published by numerous scientists. The novelty lies, perhaps, only in considering these issues systematically and together, as immediate consequences of Braverman's basic principles.
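
As a brief orienting sketch (our own illustration, not an equation taken from the chapter), the interplay of the three notions can be summarized as follows: a potential function K(x, x'), chosen by the experimenter, plays the role of an inner product in a hypothetical rectifying linear space into which the objects are implicitly embedded by some map φ, and the compactness hypothesis is the assumption that objects of the same class form compact clusters in that space, so that a decision rule linear in the rectifying space suffices:

\[
  K(x, x') = \bigl\langle \varphi(x), \varphi(x') \bigr\rangle,
  \qquad
  f(x) = \sum_{j=1}^{N} a_j \, y_j \, K(x_j, x) + b,
\]

where x_1, …, x_N are training objects with class labels y_j ∈ {−1, +1}; the symbols a_j and b are generic placeholders for the coefficients produced by whatever training criterion is adopted, not the chapter's own notation.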

Citation (APA)

Mottl, V., Seredin, O., & Krasotkina, O. (2018). Compactness Hypothesis, Potential Functions, and Rectifying Linear Space in Machine Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11100 LNAI, pp. 52–102). Springer Verlag. https://doi.org/10.1007/978-3-319-99492-5_3
