Statistical learning theory makes large margins an important property of linear classifiers, and Support Vector Machines were designed with this goal in mind. However, it has been shown that large margins can also be obtained with much simpler kernel perceptrons when they are combined with ad hoc updating rules that differ in principle from Rosenblatt's rule. In this work we numerically demonstrate that, when rewritten in a convex update setting and paired with an appropriate procedure for selecting the updating vector, Rosenblatt's rule does indeed yield maximum margins for kernel perceptrons, although it converges more slowly than more sophisticated methods such as the Schlesinger-Kozinec (SK) algorithm. © Springer-Verlag Berlin Heidelberg 2006.
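To illustrate the convex update setting the abstract refers to, here is a minimal sketch (not the authors' implementation): the kernel perceptron weight vector is kept implicitly as a convex combination w = Σ_i α_i y_i φ(x_i) with α on the simplex, the worst-margin training point is chosen as the updating vector, and α is moved toward it by a convex step. The RBF kernel, the exact line search for the step length (an SK-style choice; a fixed schedule such as λ = 1/(t+2) would mimic a plainer Rosenblatt-type update), and all function names are assumptions made for this sketch.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and Y (an assumed kernel choice).
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def convex_kernel_perceptron(X, y, gamma=1.0, n_iters=500):
    """Kernel perceptron with convex-combination updates (illustrative sketch).

    The weight vector is kept implicitly as w = sum_i alpha_i y_i phi(x_i),
    with alpha on the simplex (alpha_i >= 0, sum alpha_i = 1). Each step
    selects the training point with the smallest functional margin as the
    updating vector and applies the convex update
        alpha <- (1 - lam) * alpha + lam * e_i .
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    Q = (y[:, None] * y[None, :]) * K        # Q_ij = y_i y_j k(x_i, x_j)
    alpha = np.full(n, 1.0 / n)              # start at the simplex barycenter
    for t in range(n_iters):
        margins = Q @ alpha                  # y_i <w, phi(x_i)> for every i
        i = int(np.argmin(margins))          # worst-margin point drives the update
        e = np.zeros(n)
        e[i] = 1.0
        d = e - alpha
        # Exact line search: lam minimizing ||(1 - lam) w + lam y_i phi(x_i)||^2,
        # clipped to [0, 1] so alpha stays on the simplex.
        denom = d @ Q @ d
        lam = 1.0 if denom <= 0 else float(np.clip(-(alpha @ Q @ d) / denom, 0.0, 1.0))
        alpha = alpha + lam * d
    return alpha

if __name__ == "__main__":
    # Toy separable problem: two Gaussian blobs with labels -1 / +1.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
    y = np.hstack([-np.ones(40), np.ones(40)])
    alpha = convex_kernel_perceptron(X, y)
    margins = ((y[:, None] * y[None, :]) * rbf_kernel(X, X)) @ alpha
    print("minimum functional margin:", margins.min())
```

Under these assumptions, minimizing the norm of w over the convex hull of the points y_i φ(x_i) is equivalent to maximizing the (normalized) margin, which is why a convex-update rule of this form can approach the maximum-margin solution.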
CITATION STYLE
García, D., González, A., & Dorronsoro, J. R. (2006). Convex perceptrons. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4224 LNCS, pp. 578–585). Springer Verlag. https://doi.org/10.1007/11875581_70